Upload 24 files
- README.md +63 -9
- app.py +825 -0
- audio_recorder.html +57 -0
- botidinamix-g.json +13 -0
- datos_guardados_datos guardados.txt +0 -0
- datos_guardados_memoria_asistente.txt +0 -0
- datos_guardados_presupuestos.txt +0 -0
- datos_guardados_radiografias.txt +0 -0
- datos_guardados_trabajos_laboratorio.txt +0 -0
- env (1) +5 -0
- gitattributes +37 -0
- gitignore +0 -0
- google_calendar (1).py +43 -0
- pages_agendamiento.py +37 -0
- pages_buscar_datos.py +30 -0
- pages_comunicacion.py +25 -0
- pages_correo.py +5 -0
- pages_galatea_asistente.py +161 -0
- pages_home.py +65 -0
- pages_insumos.py +6 -0
- pages_notificaciones.py +6 -0
- pages_presupuesto.py +123 -0
- pages_radiografias.py +48 -0
- requirements.txt +70 -0
README.md
CHANGED

@@ -1,13 +1,67 @@
 ---
-title: VIRTUALOMARDENT
-emoji: 💻
-colorFrom: indigo
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.36.0
-app_file: app.py
-pinned: false
 license: mit
+title: VIRTUAL OMARDENT pro
+sdk: streamlit
+emoji: 🐨
+colorFrom: blue
+colorTo: purple
+short_description: Una aplicacion para gestionar una clinica dental mediante IA
+sdk_version: 1.35.0
 ---
+# OpenAI Chatbot
+
+This project is a chatbot built with the OpenAI API. The chatbot can answer questions about a text provided by the user, using OpenAI's language models.
+
+## Description
+
+The chatbot can load a PDF document, extract its text content, preprocess it, and then answer questions about that text through the OpenAI API. Users can select among several OpenAI language models to generate the answers.
+
+## Installation
+
+To run this project locally, follow these steps:
+
+1. Clone this repository to your local machine with Git:
+
+```bash
+git clone https://github.com/tu_usuario/tu_repositorio.git
+```
+
+2. Change into the project directory:
+
+```bash
+cd tu_repositorio
+```
+
+3. Install the required dependencies with pip:
+
+```bash
+pip install -r requirements.txt
+```
+
+4. Configure your OpenAI API key, either by setting the `OPENAI_API_KEY` environment variable or directly in `app.py`.
+
+## Usage
+
+Once the project and its dependencies are set up, start the chatbot with:
+
+```bash
+streamlit run app.py
+```
+
+## Dependencies
+
+This project uses the following Python dependencies:
+
+- Streamlit: for the interactive user interface.
+- OpenAI: to call the OpenAI API and obtain chatbot answers.
+- NLTK: for text preprocessing, including tokenization and stop-word removal.
+- PyPDF2: to extract text from PDF documents.
+
+The pinned versions of these dependencies are listed in `requirements.txt`.
+
+## Contributing
+
+Contributions are welcome. If you would like to contribute to this project, please open an issue to discuss the proposed changes before submitting a pull request.
+
+## License
+
+This project is licensed under the MIT License. See the `LICENSE` file for details.
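Step 4 of the installation (configuring `OPENAI_API_KEY`) is what `app.py` depends on at startup; it loads a `.env` file via `python-dotenv` and reads the variable. A minimal stdlib-only sketch of the same lookup, with a purely illustrative key value:

```python
import os

def load_openai_key() -> str:
    # Read the key from the environment; the .env-file loading step
    # (python-dotenv) is skipped so this sketch stays stdlib-only.
    key = os.getenv("OPENAI_API_KEY", "")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    return key

os.environ["OPENAI_API_KEY"] = "sk-demo"  # illustrative value, not a real key
print(load_openai_key())
```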
app.py
ADDED

@@ -0,0 +1,825 @@
```python
import os
import tempfile
import openai
from dotenv import load_dotenv
import PyPDF2
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import SnowballStemmer
import pandas as pd
from fpdf import FPDF
import streamlit as st
import requests
from google.cloud import texttospeech, vision
import base64

nltk.download('punkt', quiet=True)
nltk.download('stopwords', quiet=True)

# Load the API keys from the .env file
load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY")
brevo_api_key = os.getenv("BREVO_API_KEY")
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "botidinamix-g.json"

# Verify that the API keys are configured
if not openai_api_key:
    st.error("No API key provided for OpenAI. Please set your API key in the .env file.")
else:
    openai.api_key = openai_api_key

if not brevo_api_key:
    st.error("No API key provided for Brevo. Please set your API key in the .env file.")
```
```python
def extraer_texto_pdf(archivo):
    texto = ""
    if archivo:
        with tempfile.NamedTemporaryFile(delete=False) as temp_file:
            temp_file.write(archivo.read())
            temp_file_path = temp_file.name
        try:
            with open(temp_file_path, 'rb') as file:
                reader = PyPDF2.PdfReader(file)
                for page in range(len(reader.pages)):
                    texto += reader.pages[page].extract_text()
        except Exception as e:
            st.error(f"Error al extraer texto del PDF: {e}")
        finally:
            os.unlink(temp_file_path)
    return texto

def preprocesar_texto(texto):
    tokens = word_tokenize(texto, language='spanish')
    tokens = [word.lower() for word in tokens if word.isalpha()]
    stopwords_es = set(stopwords.words('spanish'))
    tokens = [word for word in tokens if word not in stopwords_es]
    stemmer = SnowballStemmer('spanish')
    tokens = [stemmer.stem(word) for word in tokens]
    return " ".join(tokens)
```
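The NLTK pipeline in `preprocesar_texto` (tokenize, keep lowercase alphabetic tokens, drop stop words, stem) can be approximated without NLTK data downloads. In this sketch the regex tokenizer, the tiny stop-word set, and the crude suffix-stripping "stemmer" are stand-ins for `word_tokenize`, `stopwords.words('spanish')`, and `SnowballStemmer('spanish')`:

```python
import re

STOPWORDS_ES = {"de", "la", "el", "en", "y", "los"}  # toy subset of NLTK's Spanish list

def preprocesar_texto_toy(texto: str) -> str:
    # Tokenize on letter runs (stand-in for nltk.word_tokenize)
    tokens = re.findall(r"[a-záéíóúñü]+", texto.lower())
    # Drop stop words
    tokens = [t for t in tokens if t not in STOPWORDS_ES]
    # Crude "stemming": strip a plural -s (stand-in for SnowballStemmer)
    tokens = [t[:-1] if t.endswith("s") and len(t) > 3 else t for t in tokens]
    return " ".join(tokens)

print(preprocesar_texto_toy("La limpieza de los dientes"))  # limpieza diente
```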
```python
def obtener_respuesta(pregunta, texto_preprocesado, modelo, temperatura=0.5, assistant_id="asst_4ZYvBvf4IUVQPjnugSZGLdV2", contexto=""):
    try:
        response = openai.ChatCompletion.create(
            model=modelo,
            messages=[
                {"role": "system", "content": "Actua como Galatea la asistente de la clinica Odontologica OMARDENT y resuelve las inquietudes"},
                {"role": "user", "content": f"{contexto}\n\n{pregunta}\n\nContexto: {texto_preprocesado}"}
            ],
            temperature=temperatura
        )
        respuesta = response.choices[0].message['content'].strip()

        # Configure the text-to-speech request
        client = texttospeech.TextToSpeechClient()
        input_text = texttospeech.SynthesisInput(text=respuesta)
        voice = texttospeech.VoiceSelectionParams(
            language_code="es-ES", ssml_gender=texttospeech.SsmlVoiceGender.FEMALE
        )
        audio_config = texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3
        )

        # Perform the speech-synthesis request
        response = client.synthesize_speech(
            input=input_text, voice=voice, audio_config=audio_config
        )

        # Play the audio in Streamlit
        st.audio(response.audio_content, format="audio/mp3")
        return respuesta

    except openai.OpenAIError as e:
        st.error(f"Error al comunicarse con OpenAI: {e}")
        return "Lo siento, no puedo procesar tu solicitud en este momento."

    except Exception as e:
        st.error(f"Error al generar la respuesta y el audio: {e}")
        return "Lo siento, ocurrió un error al procesar tu solicitud."
```
```python
def guardar_en_txt(nombre_archivo, datos):
    carpeta = "datos_guardados"
    os.makedirs(carpeta, exist_ok=True)
    ruta_archivo = os.path.join(carpeta, nombre_archivo)
    try:
        with open(ruta_archivo, 'a', encoding='utf-8') as archivo:  # append mode
            archivo.write(datos + "\n")
    except Exception as e:
        st.error(f"Error al guardar datos en el archivo: {e}")
    return ruta_archivo

def cargar_desde_txt(nombre_archivo):
    carpeta = "datos_guardados"
    ruta_archivo = os.path.join(carpeta, nombre_archivo)
    try:
        if os.path.exists(ruta_archivo):
            with open(ruta_archivo, 'r', encoding='utf-8') as archivo:
                return archivo.read()
        else:
            st.warning("Archivo no encontrado.")
            return ""
    except Exception as e:
        st.error(f"Error al cargar datos desde el archivo: {e}")
        return ""

def listar_archivos_txt():
    carpeta = "datos_guardados"
    try:
        if not os.path.exists(carpeta):
            return []
        archivos = [f for f in os.listdir(carpeta) if f.endswith('.txt')]
        archivos_ordenados = sorted(archivos, key=lambda x: os.path.getctime(os.path.join(carpeta, x)), reverse=True)
        return archivos_ordenados
    except Exception as e:
        st.error(f"Error al listar archivos: {e}")
        return []
```
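Outside Streamlit, the append-then-read behaviour these persistence helpers rely on can be exercised with the standard library alone. This sketch mirrors `guardar_en_txt` (folder names and record strings here are illustrative), writing into a throwaway directory instead of `datos_guardados`:

```python
import os
import tempfile

def guardar(carpeta: str, nombre: str, datos: str) -> str:
    # Mirror of guardar_en_txt: create the folder, then append one record
    os.makedirs(carpeta, exist_ok=True)
    ruta = os.path.join(carpeta, nombre)
    with open(ruta, "a", encoding="utf-8") as f:
        f.write(datos + "\n")
    return ruta

carpeta = tempfile.mkdtemp()  # stand-in for the "datos_guardados" folder
guardar(carpeta, "insumos.txt", "nombre: guantes")
guardar(carpeta, "insumos.txt", "nombre: resina")
with open(os.path.join(carpeta, "insumos.txt"), encoding="utf-8") as f:
    contenido = f.read()
print(contenido.splitlines())  # both records survive, in insertion order
```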
```python
def generar_pdf(dataframe, titulo, filename):
    pdf = FPDF()
    pdf.add_page()
    pdf.set_font("Arial", size=12)
    pdf.cell(200, 10, txt=titulo, ln=True, align='C')

    for i, row in dataframe.iterrows():
        row_text = ", ".join(f"{col}: {val}" for col, val in row.items())
        pdf.cell(200, 10, txt=row_text, ln=True)

    try:
        with tempfile.NamedTemporaryFile(delete=False, suffix='.pdf') as tmp_file:
            pdf.output(tmp_file.name)
            return tmp_file.name
    except Exception as e:
        st.error(f"Error al generar PDF: {e}")
        return None
```
```python
def enviar_correo(destinatario, asunto, contenido):
    url = "https://api.brevo.com/v3/smtp/email"
    headers = {
        "accept": "application/json",
        "api-key": brevo_api_key,
        "content-type": "application/json"
    }
    payload = {
        "sender": {"email": "tu_correo@dominio.com"},
        "to": [{"email": destinatario}],
        "subject": asunto,
        "htmlContent": contenido
    }
    try:
        response = requests.post(url, json=payload, headers=headers)
        if response.status_code == 201:
            st.success(f"Correo enviado a {destinatario}")
        else:
            st.error(f"Error al enviar el correo: {response.text}")
    except Exception as e:
        st.error(f"Error al enviar el correo: {e}")

def enviar_whatsapp(numero, mensaje):
    url = "https://api.brevo.com/v3/whatsapp/send"
    headers = {
        "accept": "application/json",
        "api-key": brevo_api_key,
        "content-type": "application/json"
    }
    payload = {
        "recipient": {"number": numero},
        "sender": {"number": "tu_numero_whatsapp"},
        "content": mensaje
    }
    try:
        response = requests.post(url, json=payload, headers=headers)
        if response.status_code == 201:
            st.success(f"Mensaje de WhatsApp enviado a {numero}")
        else:
            st.error(f"Error al enviar el mensaje de WhatsApp: {response.text}")
    except Exception as e:
        st.error(f"Error al enviar el mensaje de WhatsApp: {e}")
```
```python
def flujo_laboratorio():
    st.title("🦷 Gestión de Trabajos de Laboratorio")

    if 'laboratorio' not in st.session_state:
        st.session_state.laboratorio = []

    with st.form("laboratorio_form"):
        tipo_trabajo = st.selectbox("Tipo de trabajo:", [
            "Protesis total", "Protesis removible metal-acrilico", "Parcialita acrilico",
            "Placa de blanqueamiento", "Placa de bruxismo", "Corona de acrilico",
            "Corona en zirconio", "Protesis flexible", "Acker flexible"
        ])
        doctor = st.selectbox("Doctor que requiere el trabajo:", ["Dr. Jose Daniel C", "Dr. Jose Omar C"])
        fecha_entrega = st.date_input("Fecha de entrega:")
        fecha_envio = st.date_input("Fecha de envío:")
        laboratorio = st.selectbox("Laboratorio dental:", ["Ernesto Correa lab", "Formando Sonrisas"])
        nombre_paciente = st.text_input("Nombre paciente:")
        observaciones = st.text_input("Observaciones:")
        numero_orden = st.text_input("Número de orden:")
        cantidad = st.number_input("Cantidad:", min_value=1, step=1)

        submitted = st.form_submit_button("Registrar Trabajo")

    if submitted:
        trabajo = {
            "tipo_trabajo": tipo_trabajo,
            "doctor": doctor,
            "fecha_entrega": str(fecha_entrega),
            "fecha_envio": str(fecha_envio),
            "laboratorio": laboratorio,
            "nombre_paciente": nombre_paciente,
            "observaciones": observaciones,
            "numero_orden": numero_orden,
            "cantidad": cantidad,
            "estado": "pendiente"
        }
        st.session_state.laboratorio.append(trabajo)
        datos_guardados = mostrar_datos_como_texto([trabajo])  # append only the new entry
        guardar_en_txt('trabajos_laboratorio.txt', datos_guardados)
        st.success("Trabajo registrado con éxito.")

    if st.session_state.laboratorio:
        st.write("### Trabajos Registrados")
        df_trabajos = pd.DataFrame(st.session_state.laboratorio)
        st.write(df_trabajos)

        pdf_file = generar_pdf(df_trabajos, "Registro de Trabajos de Laboratorio", "trabajos_laboratorio.pdf")
        if pdf_file:  # generar_pdf returns None on failure
            with open(pdf_file, 'rb') as f:
                st.download_button(
                    label="📥 Descargar PDF",
                    data=f.read(),
                    file_name="trabajos_laboratorio.pdf",
                    mime="application/pdf"
                )
```
```python
def flujo_insumos():
    st.title("📦 Gestión de Insumos")

    if 'insumos' not in st.session_state:
        st.session_state.insumos = []

    with st.form("insumos_form"):
        insumo_nombre = st.text_input("Nombre del Insumo:")
        insumo_cantidad = st.number_input("Cantidad Faltante:", min_value=0, step=1)
        submitted = st.form_submit_button("Agregar Insumo")

    if submitted and insumo_nombre:
        insumo = {"nombre": insumo_nombre, "cantidad": insumo_cantidad}
        st.session_state.insumos.append(insumo)
        datos_guardados = mostrar_datos_como_texto([insumo])  # append only the new entry
        guardar_en_txt('insumos.txt', datos_guardados)
        st.success(f"Insumo '{insumo_nombre}' agregado con éxito.")

    if st.session_state.insumos:
        st.write("### Insumos Registrados")
        insumos_df = pd.DataFrame(st.session_state.insumos)
        st.write(insumos_df)

        pdf_file = generar_pdf(insumos_df, "Registro de Insumos Faltantes", "insumos.pdf")
        if pdf_file:  # generar_pdf returns None on failure
            with open(pdf_file, 'rb') as f:
                st.download_button(
                    label="📥 Descargar PDF",
                    data=f.read(),
                    file_name="insumos_faltantes.pdf",
                    mime="application/pdf"
                )
```
```python
def buscar_datos_guardados():
    st.title("🔍 Buscar Datos Guardados")

    carpeta = "datos_guardados"
    if not os.path.exists(carpeta):
        st.info("No se encontraron archivos de datos guardados.")
        return

    archivos = listar_archivos_txt()

    if archivos:
        archivo_seleccionado = st.selectbox("Selecciona un archivo para ver:", archivos)

        if archivo_seleccionado:
            datos = cargar_desde_txt(archivo_seleccionado)
            if datos:
                st.write(f"### Datos del archivo {archivo_seleccionado}")
                for linea in datos.split('\n'):
                    if linea.strip():  # skip blank lines
                        # Split on the first ':' only, so values containing ':' stay intact
                        clave, sep, valor = linea.partition(':')
                        st.markdown(f"**{clave}:** {valor}" if sep else linea)
                # Link to download the file
                try:
                    with open(os.path.join(carpeta, archivo_seleccionado), 'rb') as file:
                        st.download_button(
                            label="📥 Descargar Archivo TXT",
                            data=file,
                            file_name=archivo_seleccionado,
                            mime="text/plain"
                        )
                except Exception as e:
                    st.error(f"Error al preparar la descarga: {e}")

                # Send the selected file by email
                if st.button("Enviar por correo"):
                    contenido = f"Datos del archivo {archivo_seleccionado}:\n\n{datos}"
                    enviar_correo("josedcape@gmail.com", f"Datos del archivo {archivo_seleccionado}", contenido)

                # Send the selected file over WhatsApp
                if st.button("Enviar por WhatsApp"):
                    mensaje = f"Datos del archivo {archivo_seleccionado}:\n\n{datos}"
                    enviar_whatsapp("3114329322", mensaje)

            else:
                st.warning(f"No se encontraron datos en el archivo {archivo_seleccionado}")
    else:
        st.info("No se encontraron archivos de datos guardados.")
```
```python
def generar_notificaciones_pendientes():
    if 'laboratorio' not in st.session_state or not st.session_state.laboratorio:
        st.info("No hay trabajos pendientes.")
        return

    pendientes = [trabajo for trabajo in st.session_state.laboratorio if trabajo["estado"] == "pendiente"]
    if pendientes:
        st.write("### Notificaciones de Trabajos Pendientes")
        for trabajo in pendientes:
            st.info(f"Pendiente: {trabajo['tipo_trabajo']} - {trabajo['numero_orden']} para {trabajo['doctor']}. Enviado a {trabajo['laboratorio']} el {trabajo['fecha_envio']}.")

def mostrar_datos_como_texto(datos):
    texto = ""
    if isinstance(datos, dict):
        for key, value in datos.items():
            texto += f"{key}: {value}\n"
    elif isinstance(datos, list):
        for item in datos:
            if isinstance(item, dict):
                for key, value in item.items():
                    texto += f"{key}: {value}\n"
                texto += "\n"
            else:
                texto += f"{item}\n"
    return texto
```
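The flattening performed by `mostrar_datos_como_texto` is what produces the `key: value` lines stored in the `datos_guardados/*.txt` files. This standalone copy of its list-of-dicts branch (function and record names here are illustrative) shows the exact output format:

```python
def registros_como_texto(registros: list) -> str:
    # One "key: value" line per field, with a blank line after each record,
    # matching the format written to the datos_guardados/*.txt files.
    texto = ""
    for registro in registros:
        for clave, valor in registro.items():
            texto += f"{clave}: {valor}\n"
        texto += "\n"
    return texto

salida = registros_como_texto([{"nombre": "guantes", "cantidad": 2}])
print(salida)
```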
```python
def flujo_presupuestos():
    st.title("💰 Asistente de Presupuestos")
    st.markdown("Hola Dr., cuénteme, ¿en qué puedo ayudarle?")

    lista_precios = {
        "Restauraciones en resina de una superficie": 75000,
        "Restauraciones en resina de dos superficies": 95000,
        "Restauraciones en resina de tres o más superficies": 120000,
        "Restauración en resina cervical": 60000,
        "Coronas metal-porcelana": 750000,
        "Provisional": 80000,
        "Profilaxis simple": 75000,
        "Profilaxis completa": 90000,
        "Corona en zirconio": 980000,
        "Blanqueamiento dental láser por sesión": 150000,
        "Blanqueamiento dental casero": 330000,
        "Blanqueamiento mixto": 430000,
        "Prótesis parcial acrílico hasta 6 dientes": 530000,
        "Prótesis parcial acrílico de más de 6 dientes": 580000,
        "Prótesis flexible hasta 6 dientes": 800000,
        "Prótesis flexible de más de 6 dientes": 900000,
        "Prótesis total de alto impacto": 650000,
        "Acker flexible hasta 2 dientes": 480000,
        "Exodoncia por diente": 85000,
        "Exodoncia cordal": 130000,
        "Endodoncia con dientes terminados en 6": 580000,
        "Endodoncia de un conducto": 380000,
        "Endodoncia de premolares superiores": 480000,
    }

    if 'presupuesto' not in st.session_state:
        st.session_state['presupuesto'] = []

    with st.form("presupuesto_form"):
        tratamiento = st.selectbox("Selecciona el tratamiento", list(lista_precios.keys()))
        cantidad = st.number_input("Cantidad", min_value=1, step=1)
        agregar = st.form_submit_button("Agregar al Presupuesto")

    if agregar:
        precio_total = lista_precios[tratamiento] * cantidad
        st.session_state['presupuesto'].append({"tratamiento": tratamiento, "cantidad": cantidad, "precio_total": precio_total})
        st.success(f"Agregado: {cantidad} {tratamiento} - Total: {precio_total} COP")

    if st.session_state['presupuesto']:
        st.write("### Servicios Seleccionados")
        total_presupuesto = sum(item['precio_total'] for item in st.session_state['presupuesto'])
        for item in st.session_state['presupuesto']:
            st.write(f"{item['cantidad']} x {item['tratamiento']} - {item['precio_total']} COP")
        st.write(f"**Total: {total_presupuesto} COP**")

        if st.button("Copiar Presupuesto al Asistente"):
            servicios = "\n".join(f"{item['cantidad']} x {item['tratamiento']} - {item['precio_total']} COP" for item in st.session_state['presupuesto'])
            total = f"**Total: {total_presupuesto} COP**"
            st.session_state['presupuesto_texto'] = f"{servicios}\n{total}"
            st.success("Presupuesto copiado al asistente de chat.")
            st.session_state['mostrar_chat'] = True

    if st.session_state.get('mostrar_chat'):  # .get avoids a KeyError before main() initializes the flag
        st.markdown("### Chat con Asistente")
        pregunta_usuario = st.text_input("Escribe tu pregunta aquí:", value=st.session_state.get('presupuesto_texto', ''))
        if st.button("Enviar Pregunta"):
            manejar_pregunta_usuario(pregunta_usuario)
```
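The budget arithmetic in `flujo_presupuestos` (unit price × quantity per item, summed over the selection) is easy to check in isolation. The two prices below are copied from the `lista_precios` dict above; the quantities are illustrative:

```python
lista_precios = {
    "Profilaxis simple": 75000,
    "Exodoncia por diente": 85000,
}

# Build the budget the same way the form's "Agregar" branch does
presupuesto = []
for tratamiento, cantidad in [("Profilaxis simple", 1), ("Exodoncia por diente", 2)]:
    precio_total = lista_precios[tratamiento] * cantidad
    presupuesto.append({"tratamiento": tratamiento, "cantidad": cantidad, "precio_total": precio_total})

total_presupuesto = sum(item["precio_total"] for item in presupuesto)
print(total_presupuesto)  # 75000 + 2 * 85000 = 245000
```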
```python
def flujo_radiografias():
    st.title("📸 Registro de Radiografías")

    if 'radiografias' not in st.session_state:
        st.session_state.radiografias = []

    with st.form("radiografias_form"):
        nombre_paciente = st.text_input("Nombre del Paciente:")
        tipo_radiografia = st.selectbox("Tipo de Radiografía:", ["Periapical", "Panorámica", "Cefalométrica"])
        fecha_realizacion = st.date_input("Fecha de Realización:")
        observaciones = st.text_area("Observaciones:")

        submitted = st.form_submit_button("Registrar Radiografía")

    if submitted:
        radiografia = {
            "nombre_paciente": nombre_paciente,
            "tipo_radiografia": tipo_radiografia,
            "fecha_realizacion": str(fecha_realizacion),
            "observaciones": observaciones
        }
        st.session_state.radiografias.append(radiografia)
        datos_guardados = mostrar_datos_como_texto([radiografia])
        guardar_en_txt('radiografias.txt', datos_guardados)
        st.success("Radiografía registrada con éxito.")

    if st.session_state.radiografias:
        st.write("### Radiografías Registradas")
        df_radiografias = pd.DataFrame(st.session_state.radiografias)
        st.write(df_radiografias)

        pdf_file = generar_pdf(df_radiografias, "Registro de Radiografías", "radiografias.pdf")
        if pdf_file:  # generar_pdf returns None on failure
            with open(pdf_file, 'rb') as f:
                st.download_button(
                    label="📥 Descargar PDF",
                    data=f.read(),
                    file_name="radiografias.pdf",
                    mime="application/pdf"
                )

def mostrar_recomendaciones():
    st.title("⭐ Recomendaciones")
    st.write("Aquí puedes encontrar recomendaciones y consejos útiles.")

def interpretar_imagen(imagen):
    client = vision.ImageAnnotatorClient()
    content = imagen.read()
    image = vision.Image(content=content)
    response = client.document_text_detection(image=image)

    if response.error.message:
        st.error(f"Error en la interpretación de la imagen: {response.error.message}")
        return None

    return response.full_text_annotation.text

def mostrar_interpretacion_imagen():
    st.title("🖼️ Interpretación de Imágenes con Google Vision")

    imagen = st.file_uploader("Sube una imagen para interpretar", type=['png', 'jpg', 'jpeg'])

    if imagen:
        texto_interpretado = interpretar_imagen(imagen)
        if texto_interpretado:
            st.write("### Texto Interpretado:")
            st.write(texto_interpretado)

            st.session_state['mensajes_chat'].append({"role": "user", "content": texto_interpretado})
            pregunta_usuario = st.text_input("Haz una pregunta sobre la imagen interpretada:")

            if st.button("Enviar Pregunta"):
                manejar_pregunta_usuario(pregunta_usuario, archivo_pdf=None, contexto=texto_interpretado)
```
```python
def main():
    st.set_page_config(page_title="Galatea OMARDENT", layout="wide")

    # Initialize session state
    if 'modelo' not in st.session_state:
        st.session_state['modelo'] = "gpt-3.5-turbo"
    if 'temperatura' not in st.session_state:
        st.session_state['temperatura'] = 0.5
    if 'mensajes_chat' not in st.session_state:
        st.session_state['mensajes_chat'] = []
    if 'transcripcion_voz' not in st.session_state:
        st.session_state['transcripcion_voz'] = ""
    if 'imagen_asistente' not in st.session_state:
        st.session_state['imagen_asistente'] = None
    if 'video_estado' not in st.session_state:
        st.session_state['video_estado'] = 'paused'
    if 'assistant_id' not in st.session_state:
        st.session_state['assistant_id'] = 'asst_4ZYvBvf4IUVQPjnugSZGLdV2'
    if 'presupuesto_texto' not in st.session_state:
        st.session_state['presupuesto_texto'] = ''
    if 'mostrar_chat' not in st.session_state:
        st.session_state['mostrar_chat'] = False
    if 'memoria' not in st.session_state:
        st.session_state['memoria'] = {}

    # Load and preprocess the predefined PDF text
    with open("assets/instrucciones.pdf", "rb") as file:
        texto_pdf = extraer_texto_pdf(file)
    st.session_state['texto_preprocesado_pdf'] = preprocesar_texto(texto_pdf)

    # Sidebar
    ruta_logo = os.path.join("assets", "Logo Omardent.png")
    if os.path.exists(ruta_logo):
        st.sidebar.image(ruta_logo, use_column_width=True)
    else:
        st.sidebar.warning(f"Error: No se pudo encontrar la imagen en la ruta: {ruta_logo}")

    st.sidebar.title("🤖 Galatea OMARDENT")
    st.sidebar.markdown("---")
    st.sidebar.subheader("🧠 Configuración del Modelo")
    st.session_state['modelo'] = st.sidebar.selectbox(
        "Selecciona el modelo:",
        ["gpt-3.5-turbo", "gpt-4", "gpt-4-32k", "gpt-4o"],
        index=0,
        key='modelo_selectbox',  # unique widget key
        help="Elige el modelo de lenguaje de OpenAI que prefieras."
    )
    st.sidebar.markdown("---")
    st.session_state['temperatura'] = st.sidebar.slider(
        "🌡️ Temperatura",
```
|
| 541 |
+
min_value=0.0, max_value=1.0,
|
| 542 |
+
value=st.session_state['temperatura'],
|
| 543 |
+
step=0.1,
|
| 544 |
+
key='temperatura_slider' # Clave única
|
| 545 |
+
)
|
| 546 |
+
assistant_id = st.sidebar.text_input("Assistant ID", key="assistant_id", help="Introduce el Assistant ID del playground de OpenAI")
|
| 547 |
+
|
| 548 |
+
st.sidebar.markdown("---")
|
| 549 |
+
st.sidebar.subheader("🌟 Navegación")
|
| 550 |
+
lateral_page = st.sidebar.radio("Ir a", ["Página Principal", "Gestión de Trabajos", "Gestión de Insumos", "Registro de Radiografías", "Buscar Datos", "Notificaciones", "Recomendaciones", "Asistente de Presupuestos", "Comunicación", "Asistente de Agendamiento", "Interpretación de Imágenes"])
|
| 551 |
+
|
| 552 |
+
top_page = st.selectbox("Navegación Superior", ["Página Principal", "Galatea-Asistente"])
|
| 553 |
+
|
| 554 |
+
if top_page == "Galatea-Asistente":
|
| 555 |
+
mostrar_galatea_asistente()
|
| 556 |
+
else:
|
| 557 |
+
if lateral_page == "Página Principal":
|
| 558 |
+
mostrar_pagina_principal()
|
| 559 |
+
elif lateral_page == "Gestión de Trabajos":
|
| 560 |
+
flujo_laboratorio()
|
| 561 |
+
elif lateral_page == "Gestión de Insumos":
|
| 562 |
+
flujo_insumos()
|
| 563 |
+
elif lateral_page == "Registro de Radiografías":
|
| 564 |
+
flujo_radiografias()
|
| 565 |
+
elif lateral_page == "Buscar Datos":
|
| 566 |
+
buscar_datos_guardados()
|
| 567 |
+
elif lateral_page == "Notificaciones":
|
| 568 |
+
generar_notificaciones_pendientes()
|
| 569 |
+
elif lateral_page == "Recomendaciones":
|
| 570 |
+
mostrar_recomendaciones()
|
| 571 |
+
elif lateral_page == "Asistente de Presupuestos":
|
| 572 |
+
flujo_presupuestos()
|
| 573 |
+
elif lateral_page == "Comunicación":
|
| 574 |
+
st.write("Página de Comunicación") # Implementar según sea necesario
|
| 575 |
+
elif lateral_page == "Asistente de Agendamiento":
|
| 576 |
+
st.write("Página de Agendamiento") # Implementar según sea necesario
|
| 577 |
+
elif lateral_page == "Interpretación de Imágenes":
|
| 578 |
+
mostrar_interpretacion_imagen()
|
| 579 |
+
|
| 580 |
+
def mostrar_pagina_principal():
|
| 581 |
+
st.title("VIRTUAL OMARDENT AI-BOTIDINAMIX")
|
| 582 |
+
st.markdown(
|
| 583 |
+
f"""
|
| 584 |
+
<style>
|
| 585 |
+
#video-container {{
|
| 586 |
+
position: relative;
|
| 587 |
+
width: 100%;
|
| 588 |
+
padding-bottom: 56.25%;
|
| 589 |
+
background-color: lightblue;
|
| 590 |
+
overflow: hidden;
|
| 591 |
+
}}
|
| 592 |
+
#background-video {{
|
| 593 |
+
position: absolute;
|
| 594 |
+
top: 0;
|
| 595 |
+
left: 0;
|
| 596 |
+
width: 100%;
|
| 597 |
+
height: 100%;
|
| 598 |
+
}}
|
| 599 |
+
</style>
|
| 600 |
+
<div id="video-container">
|
| 601 |
+
<video id="background-video" autoplay loop muted playsinline>
|
| 602 |
+
<source src="https://cdn.leonardo.ai/users/645c3d5c-ca1b-4ce8-aefa-a091494e0d09/generations/0c4f0fe7-5937-4644-b984-bdbd95018990/0c4f0fe7-5937-4644-b984-bdbd95018990.mp4" type="video/mp4">
|
| 603 |
+
</video>
|
| 604 |
+
</div>
|
| 605 |
+
""",
|
| 606 |
+
unsafe_allow_html=True
|
| 607 |
+
)
|
| 608 |
+
|
| 609 |
+
archivo_pdf = st.file_uploader("📂 Cargar PDF", type='pdf', key='chat_pdf')
|
| 610 |
+
|
| 611 |
+
col1, col2 = st.columns([3, 1])
|
| 612 |
+
with col1:
|
| 613 |
+
pregunta_usuario = st.text_input("Pregunta:", key='unique_chat_input_key', value=st.session_state['transcripcion_voz'])
|
| 614 |
+
with col2:
|
| 615 |
+
capturar_voz()
|
| 616 |
+
|
| 617 |
+
if pregunta_usuario:
|
| 618 |
+
manejar_pregunta_usuario(pregunta_usuario, archivo_pdf)
|
| 619 |
+
|
| 620 |
+
def mostrar_galatea_asistente():
|
| 621 |
+
st.markdown(
|
| 622 |
+
"""
|
| 623 |
+
<style>
|
| 624 |
+
#video-container {
|
| 625 |
+
position: relative;
|
| 626 |
+
width: 100%;
|
| 627 |
+
height: 40vh;
|
| 628 |
+
background-color: lightblue;
|
| 629 |
+
overflow: hidden;
|
| 630 |
+
display: flex;
|
| 631 |
+
justify-content: center;
|
| 632 |
+
align-items: center;
|
| 633 |
+
}
|
| 634 |
+
#background-video {
|
| 635 |
+
width: 100%;
|
| 636 |
+
height: auto;
|
| 637 |
+
}
|
| 638 |
+
#chat-container {
|
| 639 |
+
margin-top: 20px;
|
| 640 |
+
width: 100%;
|
| 641 |
+
display: flex;
|
| 642 |
+
flex-direction: column;
|
| 643 |
+
justify-content: center;
|
| 644 |
+
align-items: center;
|
| 645 |
+
}
|
| 646 |
+
.chat-message {
|
| 647 |
+
background: rgba(255, 255, 255, 0.8);
|
| 648 |
+
border-radius: 10px;
|
| 649 |
+
padding: 10px;
|
| 650 |
+
margin-bottom: 10px;
|
| 651 |
+
max-width: 70%;
|
| 652 |
+
border: 2px solid #007BFF;
|
| 653 |
+
}
|
| 654 |
+
.chat-message.user {
|
| 655 |
+
align-self: flex-start;
|
| 656 |
+
}
|
| 657 |
+
.chat-message.assistant {
|
| 658 |
+
align-self: flex-end;
|
| 659 |
+
background: rgba(0, 123, 255, 0.8);
|
| 660 |
+
color: white;
|
| 661 |
+
}
|
| 662 |
+
</style>
|
| 663 |
+
<div id="video-container">
|
| 664 |
+
<video id="background-video" autoplay loop muted playsinline>
|
| 665 |
+
<source src="https://cdn.pika.art/v1/081128be-944b-4999-9c2e-16f61d7e7a83/lip_sync.mp4" type="video/mp4">
|
| 666 |
+
</video>
|
| 667 |
+
</div>
|
| 668 |
+
<div id="chat-container">
|
| 669 |
+
""",
|
| 670 |
+
unsafe_allow_html=True
|
| 671 |
+
)
|
| 672 |
+
|
| 673 |
+
for mensaje in st.session_state['mensajes_chat']:
|
| 674 |
+
clase = "user" if mensaje["role"] == "user" else "assistant"
|
| 675 |
+
st.markdown(f'<div class="chat-message {clase}">{mensaje["content"]}</div>', unsafe_allow_html=True)
|
| 676 |
+
|
| 677 |
+
pregunta_usuario = st.text_input("Escribe tu pregunta aquí:", key='unique_chat_input_key', value=st.session_state['transcripcion_voz'])
|
| 678 |
+
if st.button("Enviar Pregunta"):
|
| 679 |
+
manejar_pregunta_usuario(pregunta_usuario)
|
| 680 |
+
|
| 681 |
+
st.markdown("</div>", unsafe_allow_html=True)
|
| 682 |
+
|
| 683 |
+
if st.session_state['imagen_asistente']:
|
| 684 |
+
st.image(st.session_state['imagen_asistente'], use_column_width=True)
|
| 685 |
+
else:
|
| 686 |
+
st.warning("No se ha cargado ninguna imagen. Por favor, carga una imagen en la página principal.")
|
| 687 |
+
|
| 688 |
+
def manejar_pregunta_usuario(pregunta_usuario, archivo_pdf=None, contexto=""):
|
| 689 |
+
st.session_state['mensajes_chat'].append({"role": "user", "content": pregunta_usuario})
|
| 690 |
+
with st.chat_message("user"):
|
| 691 |
+
st.markdown(pregunta_usuario)
|
| 692 |
+
|
| 693 |
+
texto_preprocesado = ""
|
| 694 |
+
if archivo_pdf:
|
| 695 |
+
texto_pdf = extraer_texto_pdf(archivo_pdf)
|
| 696 |
+
texto_preprocesado = preprocesar_texto(texto_pdf)
|
| 697 |
+
else:
|
| 698 |
+
texto_preprocesado = st.session_state['texto_preprocesado_pdf']
|
| 699 |
+
|
| 700 |
+
# Obtener respuesta del modelo usando Assistant ID si está presente
|
| 701 |
+
assistant_id = st.session_state.get('assistant_id', '')
|
| 702 |
+
if assistant_id:
|
| 703 |
+
prompt = f"{contexto}\n\n{pregunta_usuario}"
|
| 704 |
+
response = openai.ChatCompletion.create(
|
| 705 |
+
model=st.session_state['modelo'],
|
| 706 |
+
messages=[
|
| 707 |
+
{"role": "system", "content": "Actúa como Galatea, la asistente de la clínica Odontológica Omardent, y resuelve las inquietudes."},
|
| 708 |
+
{"role": "user", "content": prompt}
|
| 709 |
+
],
|
| 710 |
+
temperature=st.session_state['temperatura'],
|
| 711 |
+
user=assistant_id
|
| 712 |
+
)
|
| 713 |
+
respuesta = response.choices[0].message['content'].strip()
|
| 714 |
+
else:
|
| 715 |
+
respuesta = obtener_respuesta(
|
| 716 |
+
pregunta_usuario,
|
| 717 |
+
texto_preprocesado,
|
| 718 |
+
st.session_state['modelo'],
|
| 719 |
+
st.session_state['temperatura'],
|
| 720 |
+
assistant_id,
|
| 721 |
+
contexto
|
| 722 |
+
)
|
| 723 |
+
|
| 724 |
+
st.session_state['mensajes_chat'].append({"role": "assistant", "content": respuesta})
|
| 725 |
+
with st.chat_message("assistant"):
|
| 726 |
+
st.markdown(respuesta)
|
| 727 |
+
|
| 728 |
+
# Convertir la respuesta en voz
|
| 729 |
+
client = texttospeech.TextToSpeechClient()
|
| 730 |
+
synthesis_input = texttospeech.SynthesisInput(text=respuesta)
|
| 731 |
+
voice = texttospeech.VoiceSelectionParams(language_code="es-ES", ssml_gender=texttospeech.SsmlVoiceGender.FEMALE)
|
| 732 |
+
audio_config = texttospeech.AudioConfig(audio_encoding=texttospeech.AudioEncoding.MP3)
|
| 733 |
+
response = client.synthesize_speech(input=synthesis_input, voice=voice, audio_config=audio_config)
|
| 734 |
+
|
| 735 |
+
with tempfile.NamedTemporaryFile(delete=False, suffix=".mp3") as tmp_file:
|
| 736 |
+
tmp_file.write(response.audio_content)
|
| 737 |
+
audio_file_path = tmp_file.name
|
| 738 |
+
|
| 739 |
+
# Incrustar el audio en la página y reproducirlo automáticamente
|
| 740 |
+
audio_html = f"""
|
| 741 |
+
<audio id="response-audio" src="data:audio/mp3;base64,{base64.b64encode(response.audio_content).decode()}" autoplay></audio>
|
| 742 |
+
<script>
|
| 743 |
+
document.getElementById('response-audio').onended = function() {{
|
| 744 |
+
document.getElementById('background-video').pause();
|
| 745 |
+
}};
|
| 746 |
+
</script>
|
| 747 |
+
"""
|
| 748 |
+
st.markdown(audio_html, unsafe_allow_html=True)
|
| 749 |
+
|
| 750 |
+
# Reproducir el video solo cuando el chat está activo
|
| 751 |
+
st.session_state['video_estado'] = 'playing'
|
| 752 |
+
st.markdown(f"<script>document.getElementById('background-video').play();</script>", unsafe_allow_html=True)
|
| 753 |
+
|
| 754 |
+
# Almacenar la información importante en la memoria
|
| 755 |
+
if "nombre" in pregunta_usuario.lower() or "teléfono" in pregunta_usuario.lower():
|
| 756 |
+
st.session_state['memoria']['última_interacción'] = pregunta_usuario
|
| 757 |
+
guardar_en_txt('memoria_asistente.txt', f"Última interacción: {pregunta_usuario}")
|
| 758 |
+
|
| 759 |
+
def capturar_voz():
|
| 760 |
+
st.markdown(
|
| 761 |
+
"""
|
| 762 |
+
<style>
|
| 763 |
+
.assistant-button {
|
| 764 |
+
display: flex;
|
| 765 |
+
align-items: center;
|
| 766 |
+
justify-content: center;
|
| 767 |
+
background-color: #4CAF50;
|
| 768 |
+
color: white;
|
| 769 |
+
padding: 10px;
|
| 770 |
+
border: none;
|
| 771 |
+
border-radius: 5px;
|
| 772 |
+
cursor: pointer;
|
| 773 |
+
font-size: 16px;
|
| 774 |
+
margin-top: 10px;
|
| 775 |
+
}
|
| 776 |
+
.assistant-button img {
|
| 777 |
+
margin-right: 10px;
|
| 778 |
+
}
|
| 779 |
+
</style>
|
| 780 |
+
<button class="assistant-button" onclick="startRecording()">
|
| 781 |
+
<img src='https://img2.gratispng.com/20180808/cxq/kisspng-robotics-science-computer-icons-robot-technology-robo-to-logo-svg-png-icon-free-download-45527-5b6baa46a5e322.4713113715337825986795.jpg' alt='icon' width='20' height='20'/>
|
| 782 |
+
Capturar Voz
|
| 783 |
+
</button>
|
| 784 |
+
<script>
|
| 785 |
+
function startRecording() {
|
| 786 |
+
const recognition = new (window.SpeechRecognition || window.webkitSpeechRecognition)();
|
| 787 |
+
recognition.lang = 'es-ES';
|
| 788 |
+
recognition.interimResults = false;
|
| 789 |
+
recognition.maxAlternatives = 1;
|
| 790 |
+
recognition.start();
|
| 791 |
+
recognition.onresult = (event) => {
|
| 792 |
+
const lastResult = event.results.length - 1;
|
| 793 |
+
const text = event.results[lastResult][0].transcript;
|
| 794 |
+
const customEvent = new CustomEvent('audioTranscription', { detail: text });
|
| 795 |
+
document.dispatchEvent(customEvent);
|
| 796 |
+
};
|
| 797 |
+
recognition.onspeechend = () => {
|
| 798 |
+
recognition.stop();
|
| 799 |
+
};
|
| 800 |
+
recognition.onerror = (event) => {
|
| 801 |
+
console.error(event.error);
|
| 802 |
+
};
|
| 803 |
+
}
|
| 804 |
+
document.addEventListener('audioTranscription', (event) => {
|
| 805 |
+
const transcription = event.detail;
|
| 806 |
+
document.querySelector("input[name='unique_chat_input_key']").value = transcription;
|
| 807 |
+
// También puedes actualizar el estado de Streamlit aquí si es necesario
|
| 808 |
+
fetch('/process_audio', {
|
| 809 |
+
method: 'POST',
|
| 810 |
+
headers: {
|
| 811 |
+
'Content-Type': 'application/json'
|
| 812 |
+
},
|
| 813 |
+
body: JSON.stringify({ transcription })
|
| 814 |
+
}).then(response => response.json())
|
| 815 |
+
.then(data => {
|
| 816 |
+
// Manejo de la respuesta de Flask si es necesario
|
| 817 |
+
});
|
| 818 |
+
});
|
| 819 |
+
</script>
|
| 820 |
+
""",
|
| 821 |
+
unsafe_allow_html=True
|
| 822 |
+
)
|
| 823 |
+
|
| 824 |
+
if __name__ == "__main__":
|
| 825 |
+
main()
|
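The autoplaying audio reply in `manejar_pregunta_usuario` above is built inline with an f-string. For illustration, that embedding step can be isolated into a small pure helper (a sketch; the element id `response-audio` mirrors the one used above):

```python
import base64


def audio_autoplay_html(mp3_bytes: bytes) -> str:
    """Build an autoplaying <audio> tag from raw MP3 bytes using a data: URI."""
    b64 = base64.b64encode(mp3_bytes).decode("ascii")
    return f'<audio id="response-audio" src="data:audio/mp3;base64,{b64}" autoplay></audio>'
```

Keeping the HTML construction separate from the Text-to-Speech call makes the embedding testable without hitting the Google API.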
audio_recorder.html
ADDED
@@ -0,0 +1,57 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Audio Recorder</title>
</head>
<body>
    <button id="startRecording" onclick="startRecording()">Start Recording</button>
    <button id="stopRecording" onclick="stopRecording()" disabled>Stop Recording</button>
    <script>
        let mediaRecorder;
        let audioChunks = [];

        function startRecording() {
            navigator.mediaDevices.getUserMedia({ audio: true })
                .then(stream => {
                    mediaRecorder = new MediaRecorder(stream);
                    mediaRecorder.start();

                    mediaRecorder.addEventListener("dataavailable", event => {
                        audioChunks.push(event.data);
                    });

                    mediaRecorder.addEventListener("stop", () => {
                        const audioBlob = new Blob(audioChunks);
                        audioChunks = [];
                        const reader = new FileReader();
                        reader.readAsDataURL(audioBlob);
                        reader.onloadend = () => {
                            const base64AudioMessage = reader.result.split(',')[1];
                            fetch('/process_audio', {
                                method: 'POST',
                                headers: {
                                    'Content-Type': 'application/json'
                                },
                                body: JSON.stringify({ audio_data: base64AudioMessage })
                            }).then(response => response.json())
                              .then(data => {
                                  const event = new CustomEvent('audioTranscription', { detail: data.transcription });
                                  document.dispatchEvent(event);
                              });
                        };
                    });

                    document.getElementById("startRecording").disabled = true;
                    document.getElementById("stopRecording").disabled = false;
                });
        }

        function stopRecording() {
            mediaRecorder.stop();
            document.getElementById("startRecording").disabled = false;
            document.getElementById("stopRecording").disabled = true;
        }
    </script>
</body>
</html>
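The recorder above POSTs `{ audio_data: <base64> }` to a `/process_audio` endpoint and expects `{ "transcription": ... }` back, but no such endpoint is included in this upload. A framework-agnostic sketch of what that handler could do (the `transcribe` hook is a hypothetical placeholder for a real speech-to-text call such as OpenAI Whisper):

```python
import base64
import json


def process_audio(request_json, transcribe=None):
    """Decode the base64 audio sent by audio_recorder.html and return the
    JSON body the page expects: {"transcription": "..."}."""
    audio_bytes = base64.b64decode(request_json["audio_data"])
    if transcribe is None:
        # Placeholder: report the payload size instead of a real transcription
        text = f"received {len(audio_bytes)} bytes of audio"
    else:
        text = transcribe(audio_bytes)
    return json.dumps({"transcription": text})
```

Wiring this into a Flask route (or any other server) is then just parsing the request body and returning the string with a `application/json` content type.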
botidinamix-g.json
ADDED
@@ -0,0 +1,13 @@
{
  "type": "service_account",
  "project_id": "assistant-9nteod",
  "private_key_id": "<redacted>",
  "private_key": "-----BEGIN PRIVATE KEY-----\n<redacted: service-account private key; never commit this value>\n-----END PRIVATE KEY-----\n",
  "client_email": "botidinamix@assistant-9nteod.iam.gserviceaccount.com",
  "client_id": "104652076455278864571",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/botidinamix%40assistant-9nteod.iam.gserviceaccount.com",
  "universe_domain": "googleapis.com"
}
datos_guardados_datos guardados.txt
ADDED
File without changes

datos_guardados_memoria_asistente.txt
ADDED
File without changes

datos_guardados_presupuestos.txt
ADDED
File without changes

datos_guardados_radiografias.txt
ADDED
File without changes

datos_guardados_trabajos_laboratorio.txt
ADDED
File without changes
env (1)
ADDED
@@ -0,0 +1,5 @@
OPENAI_API_KEY='sk-proj-<redacted>'
GOOGLE_APPLICATION_CREDENTIALS='botidinamix-g.json'
BREVO_API_KEY='xkeysib-<redacted>'
SUPABASE_URL='https://tifknbrtyuhxhqtmvywx.supabase.co'
SUPABASE_API_KEY='<redacted>'
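These variables are read at startup via `load_dotenv` from python-dotenv (imported in pages_galatea_asistente.py). For illustration, a minimal stdlib-only loader with the same basic behavior is sketched below (values wrapped in `'` or `"` are unquoted, and variables already set in the environment are left alone):

```python
import os


def load_env_file(path=".env"):
    """Minimal .env parser: KEY=VALUE lines, '#' comments, optional quotes."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Do not overwrite variables already present in the environment
            os.environ.setdefault(key.strip(), value.strip().strip("'\""))
```

In practice the real `load_dotenv()` handles more edge cases (export prefixes, multiline values), so this sketch is only to show what the file format implies.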
gitattributes
ADDED
@@ -0,0 +1,37 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Hola[[:space:]]Bienv(1).mp4 filter=lfs diff=lfs merge=lfs -text
Welcome[[:space:]]to.mp4 filter=lfs diff=lfs merge=lfs -text
gitignore
ADDED
File without changes
google_calendar (1).py
ADDED
@@ -0,0 +1,43 @@
import os
import pickle
import datetime
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from googleapiclient.discovery import build

SCOPES = ['https://www.googleapis.com/auth/calendar']

def get_calendar_service():
    creds = None
    if os.path.exists('token.pickle'):
        with open('token.pickle', 'rb') as token:
            creds = pickle.load(token)
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(
                'credentials.json', SCOPES)
            creds = flow.run_local_server(port=0)
        with open('token.pickle', 'wb') as token:
            pickle.dump(creds, token)
    service = build('calendar', 'v3', credentials=creds)
    return service

def add_event(summary, description, start_time, end_time):
    service = get_calendar_service()
    event = {
        'summary': summary,
        'description': description,
        'start': {
            'dateTime': start_time,
            'timeZone': 'America/Bogota',
        },
        'end': {
            'dateTime': end_time,
            'timeZone': 'America/Bogota',
        },
    }
    event = service.events().insert(calendarId='primary', body=event).execute()
    return event.get('htmlLink')
pages_agendamiento.py
ADDED
@@ -0,0 +1,37 @@
import streamlit as st
from datetime import datetime, timedelta
from google.oauth2.service_account import Credentials
from googleapiclient.discovery import build

def show():
    st.title("📅 Asistente de Agendamiento")

    SCOPES = ['https://www.googleapis.com/auth/calendar']
    creds = Credentials.from_service_account_file('botidinamix-g.json', scopes=SCOPES)
    service = build('calendar', 'v3', credentials=creds)

    with st.form("agendamiento_form"):
        paciente = st.text_input("Nombre del Paciente:")
        doctor = st.selectbox("Doctor", ["Dr. José Daniel", "Dr. José Omar"])
        motivo = st.text_area("Motivo de la consulta:")
        fecha_cita = st.date_input("Fecha de la cita:")
        hora_cita = st.time_input("Hora de la cita:")
        submitted = st.form_submit_button("Agendar Cita")

    if submitted:
        start_time = datetime.combine(fecha_cita, hora_cita)
        end_time = start_time + timedelta(hours=1)
        event = {
            'summary': f'Cita con {doctor} - {paciente}',
            'description': motivo,
            'start': {
                'dateTime': start_time.isoformat(),
                'timeZone': 'America/Bogota',
            },
            'end': {
                'dateTime': end_time.isoformat(),
                'timeZone': 'America/Bogota',
            },
        }
        event = service.events().insert(calendarId='primary', body=event).execute()
        st.success(f'Cita agendada para {paciente} con {doctor} el {start_time}')
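The form above combines the chosen date and time and books a fixed one-hour slot. That date arithmetic can be factored into a pure helper, which keeps the Calendar API call separate from the logic that is easy to unit-test (a sketch; the one-hour default mirrors the hard-coded `timedelta(hours=1)` above):

```python
from datetime import datetime, timedelta


def intervalo_cita(fecha, hora, duracion_horas=1):
    """Return (start, end) as ISO-8601 strings for an appointment slot."""
    inicio = datetime.combine(fecha, hora)
    fin = inicio + timedelta(hours=duracion_horas)
    return inicio.isoformat(), fin.isoformat()
```

The two returned strings drop straight into the `dateTime` fields of the event body built above.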
pages_buscar_datos.py
ADDED
@@ -0,0 +1,30 @@
# pages/buscar_datos.py
import streamlit as st
from utils.data_manager import cargar_desde_txt, listar_archivos_txt

def show():
    st.title("🔍 Buscar Datos Guardados")

    archivos = listar_archivos_txt()

    if archivos:
        archivo_seleccionado = st.selectbox("Selecciona un archivo para ver:", archivos)

        if archivo_seleccionado:
            datos = cargar_desde_txt(archivo_seleccionado)
            if datos:
                st.write(f"### Datos del archivo {archivo_seleccionado}")
                st.text_area("Datos", datos, height=300)

                # Link to download the file
                with open(archivo_seleccionado, 'rb') as file:
                    st.download_button(
                        label="📥 Descargar Archivo TXT",
                        data=file,
                        file_name=archivo_seleccionado,
                        mime="text/plain"
                    )
            else:
                st.warning(f"No se encontraron datos en el archivo {archivo_seleccionado}")
    else:
        st.info("No se encontraron archivos de datos guardados.")
pages_comunicacion.py
ADDED
@@ -0,0 +1,25 @@
import streamlit as st
from utils.data_manager import enviar_correo, enviar_whatsapp

def show():
    st.title("📧 Comunicación")

    st.subheader("Enviar Correo Electrónico")
    destinatario_correo = st.text_input("Correo del destinatario:")
    asunto_correo = st.text_input("Asunto del correo:")
    contenido_correo = st.text_area("Contenido del correo:")
    if st.button("Enviar Correo"):
        if destinatario_correo and asunto_correo and contenido_correo:
            enviar_correo(destinatario_correo, asunto_correo, contenido_correo)
        else:
            st.warning("Por favor, completa todos los campos del correo.")

    st.markdown("---")
    st.subheader("📱 Enviar Mensaje de WhatsApp")
    numero_whatsapp = st.text_input("Número de WhatsApp del destinatario:")
    mensaje_whatsapp = st.text_area("Mensaje de WhatsApp:")
    if st.button("Enviar WhatsApp"):
        if numero_whatsapp and mensaje_whatsapp:
            enviar_whatsapp(numero_whatsapp, mensaje_whatsapp)
        else:
            st.warning("Por favor, completa todos los campos del mensaje de WhatsApp.")
pages_correo.py
ADDED
@@ -0,0 +1,5 @@
import streamlit as st
from utils.data_manager import flujo_enviar_correo

def show():
    flujo_enviar_correo()
pages_galatea_asistente.py
ADDED
|
@@ -0,0 +1,161 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
import os
import openai
import streamlit as st
import PyPDF2

from dotenv import load_dotenv

# Load the OpenAI API key from the environment (.env file)
load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")

# Configure the Streamlit page
st.set_page_config(page_title="Galatea Asistente", layout="wide")

# Initialize session state with defaults
def inicializar_estado():
    if 'modelo' not in st.session_state:
        st.session_state['modelo'] = "gpt-3.5-turbo"
    if 'temperatura' not in st.session_state:
        st.session_state['temperatura'] = 0.5
    if 'mensajes_chat' not in st.session_state:
        st.session_state['mensajes_chat'] = []
    if 'transcripcion_voz' not in st.session_state:
        st.session_state['transcripcion_voz'] = ""
    if 'texto_preprocesado_pdf' not in st.session_state:
        st.session_state['texto_preprocesado_pdf'] = ""

inicializar_estado()

# Sidebar configuration
st.sidebar.title("Configuración del Asistente")
st.sidebar.subheader("Modelo de Lenguaje")
st.session_state['modelo'] = st.sidebar.selectbox(
    "Selecciona el modelo:",
    ["gpt-3.5-turbo", "gpt-4"],
    index=0
)
st.sidebar.subheader("Ajustes del Modelo")
st.session_state['temperatura'] = st.sidebar.slider(
    "Temperatura",
    min_value=0.0, max_value=1.0,
    value=st.session_state['temperatura'],
    step=0.1
)
st.sidebar.text_input("Assistant ID", key="assistant_id", help="Introduce el Assistant ID del playground de OpenAI")

# Extract the text of a PDF
def extraer_texto_pdf(ruta_archivo):
    texto = ""
    try:
        with open(ruta_archivo, 'rb') as file:
            reader = PyPDF2.PdfReader(file)
            for page in reader.pages:
                # extract_text() can return None for image-only pages
                texto += page.extract_text() or ""
    except Exception as e:
        st.error(f"Error al extraer texto del PDF: {e}")
    return texto

# Load and preprocess the predefined PDF
ruta_pdf_predefinido = os.path.join("assets", "instrucciones.pdf")  # Update this path to your actual PDF file
texto_pdf = extraer_texto_pdf(ruta_pdf_predefinido)
st.session_state['texto_preprocesado_pdf'] = texto_pdf

# Handle a user question
def manejar_pregunta_usuario(pregunta_usuario):
    st.session_state['mensajes_chat'].append({"role": "user", "content": pregunta_usuario})
    with st.chat_message("user"):
        st.markdown(pregunta_usuario)

    # Ask the model, using the preprocessed PDF as context
    contexto = st.session_state['texto_preprocesado_pdf']
    respuesta = obtener_respuesta(pregunta_usuario, contexto)

    st.session_state['mensajes_chat'].append({"role": "assistant", "content": respuesta})
    with st.chat_message("assistant"):
        st.markdown(respuesta)

# Get the model's answer. gpt-3.5-turbo and gpt-4 are chat models, so with
# openai==0.28 (pinned in requirements.txt) the call must go through
# openai.ChatCompletion, not the legacy openai.Completion endpoint.
def obtener_respuesta(pregunta, contexto):
    try:
        response = openai.ChatCompletion.create(
            model=st.session_state['modelo'],
            messages=[
                {"role": "system", "content": f"Eres Galatea, un auxiliar de odontología en la clínica odontológica Omardent. Resuelve las inquietudes de los pacientes basándote en el siguiente contexto:\n\n{contexto}"},
                {"role": "user", "content": pregunta},
            ],
            max_tokens=150,
            temperature=st.session_state['temperatura']
        )
        return response.choices[0].message['content'].strip()
    except Exception as e:
        st.error(f"Error al obtener respuesta del modelo: {e}")
        return "Lo siento, ocurrió un error al procesar tu solicitud."

# Render the video background and overlay the chat
st.markdown(
    """
    <style>
    #video-container {
        position: relative;
        width: 100%;
        height: 90vh;
        overflow: hidden;
    }
    #background-video {
        position: absolute;
        top: 0;
        left: 0;
        width: 90%;
        height: 90%;
        object-fit: cover;
        z-index: -1;
    }
    #chat-container {
        position: absolute;
        top: 0;
        left: 0;
        width: 100%;
        height: 100%;
        display: flex;
        flex-direction: column;
        justify-content: flex-end;
        padding: 20px;
        box-sizing: border-box;
    }
    .chat-message {
        background: rgba(255, 255, 255, 0.8);
        border-radius: 10px;
        padding: 10px;
        margin-bottom: 10px;
        max-width: 70%;
        border: 2px solid #007BFF; /* blue border */
    }
    .chat-message.user {
        align-self: flex-start;
        border-color: #007BFF; /* blue border for the user */
    }
    .chat-message.assistant {
        align-self: flex-end;
        background: rgba(0, 123, 255, 0.8);
        color: white;
        border-color: #0056b3; /* dark blue border for the assistant */
    }
    </style>
    <div id="video-container">
        <video id="background-video" autoplay loop muted>
            <source src="https://cdn.leonardo.ai/users/645c3d5c-ca1b-4ce8-aefa-a091494e0d09/generations/0c4f0fe7-5937-4644-b984-bdbd95018990/0c4f0fe7-5937-4644-b984-bdbd95018990.mp4" type="video/mp4">
        </video>
        <div id="chat-container">
    """,
    unsafe_allow_html=True
)

# Render the chat history
for mensaje in st.session_state['mensajes_chat']:
    clase = "user" if mensaje["role"] == "user" else "assistant"
    st.markdown(f'<div class="chat-message {clase}">{mensaje["content"]}</div>', unsafe_allow_html=True)

# Text input for the user
pregunta_usuario = st.text_input("Escribe tu pregunta aquí:", key='unique_chat_input_key', value=st.session_state['transcripcion_voz'])
if st.button("Enviar Pregunta"):
    manejar_pregunta_usuario(pregunta_usuario)

st.markdown("</div></div>", unsafe_allow_html=True)

# Background image (optional)
st.image("/mnt/data/clara asesora.jpg", use_column_width=True)
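requirements.txt in this upload pins openai==0.28, and gpt-3.5-turbo / gpt-4 are chat-only models: the chat endpoint (`openai.ChatCompletion.create`) takes a list of role-tagged messages rather than a flat prompt string. A minimal sketch of the payload that `obtener_respuesta` in pages_galatea_asistente.py would send — the helper name `construir_mensajes` is illustrative only, not part of the repo:

```python
# Illustrative helper (not in the repo): build the messages payload that
# openai.ChatCompletion.create expects under openai==0.28.
def construir_mensajes(contexto, pregunta):
    sistema = (
        "Eres Galatea, un auxiliar de odontología en la clínica odontológica "
        "Omardent. Resuelve las inquietudes de los pacientes basándote en el "
        f"siguiente contexto:\n\n{contexto}"
    )
    return [
        {"role": "system", "content": sistema},   # instructions + PDF context
        {"role": "user", "content": pregunta},    # the patient's question
    ]

mensajes = construir_mensajes("Horario: lunes a viernes, 8:00-18:00", "¿A qué hora abren?")
```

Keeping the clinic instructions in the system message (instead of concatenating them into the user prompt) is what the chat endpoint is designed for, and it keeps the patient's question cleanly separated from the context.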
pages_home.py
ADDED
@@ -0,0 +1,65 @@
# pages/home.py
import os
import streamlit as st
from utils.data_manager import (
    extraer_texto_pdf,
    preprocesar_texto,
    obtener_respuesta
)

def show():
    st.header("💬 Hablar con Galatea OMARDENT")

    # Initialize defaults if not present
    if 'modelo' not in st.session_state:
        st.session_state.modelo = "gpt-3.5-turbo"
    if 'temperatura' not in st.session_state:
        st.session_state.temperatura = 0.5

    # --- Sidebar ---
    with st.sidebar:
        st.image(os.path.join("assets", "Logo Omardent.png"), use_column_width=True)
        st.title("🤖 Galatea OMARDENT")
        st.markdown("---")

        # Language model selection
        st.subheader("🧠 Configuración del Modelo")
        st.session_state.modelo = st.selectbox(
            "Selecciona el modelo:",
            ["gpt-3.5-turbo", "gpt-4", "gpt-4-32k"],
            index=0,
            help="Elige el modelo de lenguaje de OpenAI que prefieras."
        )

        # Additional options
        st.markdown("---")
        st.session_state.temperatura = st.slider("🌡️ Temperatura", min_value=0.0, max_value=1.0, value=st.session_state.temperatura, step=0.1)

    # --- Main application area ---
    # PDF upload
    archivo_pdf = st.file_uploader("📂 Cargar PDF", type='pdf')

    # --- Chatbot ---
    if 'mensajes' not in st.session_state:
        st.session_state.mensajes = []

    for mensaje in st.session_state.mensajes:
        with st.chat_message(mensaje["role"]):
            st.markdown(mensaje["content"])

    pregunta_usuario = st.chat_input("Pregunta:")
    if pregunta_usuario:
        st.session_state.mensajes.append({"role": "user", "content": pregunta_usuario})
        with st.chat_message("user"):
            st.markdown(pregunta_usuario)

        if archivo_pdf:
            texto_pdf = extraer_texto_pdf(archivo_pdf)
            texto_preprocesado = preprocesar_texto(texto_pdf)
        else:
            texto_preprocesado = ""

        respuesta = obtener_respuesta(pregunta_usuario, texto_preprocesado, st.session_state.modelo, st.session_state.temperatura)
        st.session_state.mensajes.append({"role": "assistant", "content": respuesta})
        with st.chat_message("assistant"):
            st.markdown(respuesta)
pages_insumos.py
ADDED
@@ -0,0 +1,6 @@
import streamlit as st
from utils.data_manager import flujo_insumos

def show():
    flujo_insumos()
pages_notificaciones.py
ADDED
@@ -0,0 +1,6 @@
# pages/notificaciones.py
import streamlit as st
from utils.data_manager import generar_notificaciones_pendientes

def show():
    generar_notificaciones_pendientes()
pages_presupuesto.py
ADDED
@@ -0,0 +1,123 @@
import streamlit as st
from utils.data_manager import flujo_presupuestos, obtener_respuesta

def show():
    st.title("💰 Asistente de Presupuestos")
    st.markdown("Hola Dr., cuénteme: ¿en qué puedo ayudarle?")

    # Price list (COP)
    lista_precios = {
        "Restauraciones en resina de una superficie": 75000,
        "Restauraciones en resina de dos superficies": 95000,
        "Restauraciones en resina de tres o más superficies": 120000,
        "Restauración en resina cervical": 60000,
        "Coronas metal-porcelana": 750000,
        "Provisional": 80000,
        "Profilaxis simple": 75000,
        "Profilaxis completa": 90000,
        "Corona en zirconio": 980000,
        "Blanqueamiento dental láser por sesión": 150000,
        "Blanqueamiento dental casero": 330000,
        "Blanqueamiento mixto": 430000,
        "Prótesis parcial acrílico hasta 6 dientes": 530000,
        "Prótesis parcial acrílico de más de 6 dientes": 580000,
        "Prótesis flexible hasta 6 dientes": 800000,
        "Prótesis flexible de más de 6 dientes": 900000,
        "Prótesis total de alto impacto": 650000,
        "Acker flexible hasta 2 dientes": 480000,
        "Exodoncia por diente": 85000,
        "Exodoncia cordal": 130000,
        "Endodoncia con dientes terminados en 6": 580000,
        "Endodoncia de un conducto": 380000,
        "Endodoncia de premolares superiores": 480000,
    }

    if 'presupuesto' not in st.session_state:
        st.session_state['presupuesto'] = []

    # Show the price list in an expander
    with st.expander("Mostrar Lista de Precios"):
        st.markdown("### Lista de Precios")
        for tratamiento, precio in lista_precios.items():
            st.markdown(f"**{tratamiento}:** {precio} COP")

    # Treatment and quantity selection
    tratamiento = st.selectbox("Selecciona el tratamiento", list(lista_precios.keys()))
    cantidad = st.number_input("Cantidad", min_value=1, step=1)
    agregar = st.button("Agregar al Presupuesto")

    if agregar:
        precio_total = lista_precios[tratamiento] * cantidad
        st.session_state['presupuesto'].append({"tratamiento": tratamiento, "cantidad": cantidad, "precio_total": precio_total})
        st.success(f"Agregado: {cantidad} {tratamiento} - Total: {precio_total} COP")

    # Show selected services and the running total
    if st.session_state['presupuesto']:
        st.write("### Servicios Seleccionados")
        total_presupuesto = sum(item['precio_total'] for item in st.session_state['presupuesto'])
        for item in st.session_state['presupuesto']:
            st.write(f"{item['cantidad']} x {item['tratamiento']} - {item['precio_total']} COP")
        st.write(f"**Total: {total_presupuesto} COP**")

        # Copy the budget into the chat assistant. This stays inside the branch
        # above so total_presupuesto is always defined when the button is pressed.
        copiar_presupuesto = st.button("Copiar Presupuesto al Asistente")
        if copiar_presupuesto:
            servicios = "\n".join([f"{item['cantidad']} x {item['tratamiento']} - {item['precio_total']} COP" for item in st.session_state['presupuesto']])
            total = f"**Total: {total_presupuesto} COP**"
            st.session_state['presupuesto_texto'] = f"{servicios}\n{total}"
            st.success("Presupuesto copiado al asistente de chat.")

    if 'mostrar_chat' not in st.session_state:
        st.session_state['mostrar_chat'] = False

    def mostrar_chat():
        st.session_state['mostrar_chat'] = not st.session_state['mostrar_chat']

    st.markdown(
        """
        <style>
        .assistant-button {
            display: flex;
            align-items: center;
            justify-content: center;
            background-color: #4CAF50;
            color: white;
            padding: 10px;
            border: none;
            border-radius: 5px;
            cursor: pointer;
            font-size: 16px;
            margin-top: 10px;
        }
        .assistant-button img {
            margin-right: 10px;
        }
        </style>
        <button class="assistant-button" onclick="window.location.href='#assistant_chat'">
            <img src='https://img2.gratispng.com/20180808/cxq/kisspng-robotics-science-computer-icons-robot-technology-robo-to-logo-svg-png-icon-free-download-45527-5b6baa46a5e322.4713113715337825986795.jpg' alt='icon' width='20' height='20'/>
            Hablar con Asistente
        </button>
        """,
        unsafe_allow_html=True
    )

    if st.button("Mostrar Chat", key="assistant_button"):
        mostrar_chat()

    if st.session_state['mostrar_chat']:
        st.markdown("<div id='assistant_chat'></div>", unsafe_allow_html=True)
        st.markdown("### Chat con Asistente")

        pregunta_usuario = st.text_input("Escribe tu pregunta aquí:", value=st.session_state.get('presupuesto_texto', ''))
        if st.button("Enviar Pregunta"):
            # Use .get with defaults: 'modelo' and 'temperatura' are only set
            # on the home/assistant pages and may be missing here.
            respuesta = obtener_respuesta(
                pregunta_usuario,
                "",  # no preprocessed PDF text is needed in this case
                st.session_state.get('modelo', 'gpt-3.5-turbo'),
                st.session_state.get('temperatura', 0.5),
                st.session_state.get('assistant_id', 'asst_KngkX6sbRccg5a6fcnDHO06R')
            )
            st.markdown(f"**Asistente**: {respuesta}")

if __name__ == "__main__":
    show()
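The add/total arithmetic in pages_presupuesto.py is easy to check in isolation. A hypothetical extraction of that logic into pure functions (the names `agregar_item` and `total_presupuesto` are illustrative; the page itself operates on `st.session_state['presupuesto']`):

```python
def agregar_item(presupuesto, lista_precios, tratamiento, cantidad):
    # Mirrors the "Agregar al Presupuesto" branch: unit price times quantity.
    precio_total = lista_precios[tratamiento] * cantidad
    presupuesto.append({"tratamiento": tratamiento, "cantidad": cantidad, "precio_total": precio_total})
    return precio_total

def total_presupuesto(presupuesto):
    # Mirrors the total shown under "Servicios Seleccionados".
    return sum(item["precio_total"] for item in presupuesto)

# Two entries taken from the page's price list
precios = {"Provisional": 80000, "Exodoncia por diente": 85000}
items = []
agregar_item(items, precios, "Provisional", 2)           # 2 x 80000 = 160000
agregar_item(items, precios, "Exodoncia por diente", 1)  # 1 x 85000 = 85000
```

Keeping the arithmetic in pure functions like these would also make it straightforward to unit-test the budget logic without spinning up Streamlit.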
pages_radiografias.py
ADDED
@@ -0,0 +1,48 @@
import streamlit as st
import pandas as pd
from utils.data_manager import guardar_en_txt, cargar_desde_txt, mostrar_datos_como_texto, generar_pdf

def show():
    st.title("📸 Registro de Radiografías")

    if 'radiografias' not in st.session_state:
        st.session_state.radiografias = []

    with st.form("radiografias_form"):
        paciente = st.text_input("Nombre del Paciente:")
        fecha = st.date_input("Fecha:")
        motivo_consulta = st.text_input("Motivo de Consulta:")
        doctor = st.text_input("Doctor que envía la radiografía:")
        diagnostico = st.text_area("Diagnóstico:")
        respuesta = st.text_area("Respuesta:")
        descripcion = st.text_area("Descripción:")
        submitted = st.form_submit_button("Registrar Radiografía")

    if submitted:
        radiografia = {
            "paciente": paciente,
            "fecha": str(fecha),
            "motivo_consulta": motivo_consulta,
            "doctor": doctor,
            "diagnostico": diagnostico,
            "respuesta": respuesta,
            "descripcion": descripcion
        }
        st.session_state.radiografias.append(radiografia)
        datos_guardados = mostrar_datos_como_texto([radiografia])  # append only the new entry
        guardar_en_txt('radiografias.txt', datos_guardados)
        st.success("Radiografía registrada con éxito.")

    if st.session_state.radiografias:
        st.write("### Radiografías Registradas")
        df_radiografias = pd.DataFrame(st.session_state.radiografias)
        st.write(df_radiografias)

        pdf_file = generar_pdf(df_radiografias, "Registro de Radiografías", "radiografias.pdf")
        with open(pdf_file, 'rb') as f:
            st.download_button(
                label="📥 Descargar PDF",
                data=f.read(),
                file_name="radiografias.pdf",
                mime="application/pdf"
            )
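pages_radiografias.py delegates serialization to `mostrar_datos_como_texto` and `guardar_en_txt` from utils.data_manager, a module that does not appear in this upload's file list. A sketch of what such a record-to-text step could look like, under the assumption of one `campo: valor` line per field (the function name and format are guesses, not the repo's actual implementation):

```python
def registro_a_texto(registro):
    # Assumed format: one "campo: valor" line per field, then a blank line
    # separating records in the .txt file.
    return "\n".join(f"{campo}: {valor}" for campo, valor in registro.items()) + "\n\n"

texto = registro_a_texto({"paciente": "Ana Gómez", "fecha": "2024-06-01", "doctor": "Dr. Omar"})
```

A line-per-field plain-text format like this is consistent with the `datos_guardados_radiografias.txt` files shipped in the upload being human-readable, though the real module may differ.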
requirements.txt
ADDED
@@ -0,0 +1,70 @@
altair==5.3.0
annotated-types==0.6.0
anyio==4.3.0
attrs==23.2.0
blinker==1.7.0
cachetools==5.3.3
certifi==2024.2.2
charset-normalizer==3.3.2
click==8.1.7
distro==1.9.0
exceptiongroup==1.2.0
gitdb==4.0.11
GitPython==3.1.43
h11==0.14.0
httpcore==1.0.5
httpx==0.27.0
idna==3.6
Jinja2==3.1.3
joblib==1.3.2
jsonschema==4.21.1
jsonschema-specifications==2023.12.1
markdown-it-py==3.0.0
MarkupSafe==2.1.5
mdurl==0.1.2
nltk==3.8.1
numpy==1.26.4
openai==0.28
packaging==23.2
pandas==2.2.1
pillow==10.3.0
protobuf==4.25.3
pyarrow==15.0.2
pydantic==2.6.4
pydantic_core==2.16.3
pydeck==0.8.1b0
Pygments==2.17.2
PyPDF2==3.0.1
python-dateutil==2.9.0.post0
python-docx==0.8.11
pytz==2024.1
referencing==0.34.0
regex==2023.12.25
requests==2.31.0
rich==13.7.1
rpds-py==0.18.0
six==1.16.0
smmap==5.0.1
sniffio==1.3.1
streamlit==1.32.2
tenacity==8.2.3
toml==0.10.2
toolz==0.12.1
tornado==6.4
tqdm==4.66.2
typing_extensions==4.10.0
tzdata==2024.1
urllib3==2.2.1
watchdog==4.0.0
transformers==4.25.1
torch==2.0.1
fpdf==1.7.2
SpeechRecognition==3.10.4
google-cloud-texttospeech==2.16.3
Flask==3.0.3
google-auth==2.30.0
google-auth-oauthlib==1.2.0
google-auth-httplib2==0.2.0
google-api-python-client==2.134.0
supabase==2.5.1
google-cloud-vision==3.7.2