Commit c4c5bb8 by PiotrtWitek
Parent: d54a7c0
Add the Szuflada application

Files changed:
- README.md +96 -1
- app.py +105 -0
- chat_utils.py +87 -0
- database_setup.py +159 -0
- requirements.txt +11 -0
- scrap.py +177 -0
README.md
CHANGED
@@ -1 +1,96 @@

---
title: Szuflada
emoji: 💬
colorFrom: yellow
colorTo: purple
sdk: gradio
sdk_version: 4.44.1 # PW: changed the Gradio version
app_file: app.py
pinned: false
license: cc-by-4.0
short_description: Chatbot korzystający z zasobów serwisu mojaszuflada.pl
---

# Szuflada

A chatbot built on content from [mojaszuflada.pl](https://mojaszuflada.pl), using Gradio and RAG with a local Chroma database and the Hugging Face Inference API.

## Installation

1. Clone the repository:

```bash
git clone https://github.com/<user>/szuflada.git
cd szuflada
```

2. Create and activate a virtual environment:

```bash
python3 -m venv venv
source venv/bin/activate  # Linux/macOS
venv\Scripts\activate     # Windows
```

3. Install the dependencies:

```bash
pip install -r requirements.txt
```

4. Log in to Hugging Face (if you use the API):

```bash
huggingface-cli login
```

5. (Optional) Set the token as an environment variable:

```bash
export HUGGINGFACEHUB_API_TOKEN=your_token
```

## Usage

Run the application locally:

```bash
python app.py
```

The app will be available at <http://localhost:7860>. Open that page in your browser to start chatting.

## Embedding on other websites

### Method 1: iframe

```html
<iframe src="http://your_app_address:7860" width="700" height="800" frameborder="0"></iframe>
```

### Method 2: Gradio embed (Hugging Face Spaces)

If the app is deployed as a Hugging Face Space, use the official web component (the script version should match the Space's Gradio version):

```html
<script type="module" src="https://gradio.s3-us-west-2.amazonaws.com/4.44.1/gradio.js"></script>
<!-- replace username with your user/Space name -->
<gradio-app src="https://username-szuflada.hf.space"></gradio-app>
```

### Method 3: Creating the embed from JavaScript

The same web component can be attached programmatically to a container element:

```html
<div id="gradio-container"></div>
<script type="module" src="https://gradio.s3-us-west-2.amazonaws.com/4.44.1/gradio.js"></script>
<script type="module">
  const app = document.createElement("gradio-app");
  app.setAttribute("src", "https://username-szuflada.hf.space"); // replace with your Space URL
  document.getElementById("gradio-container").appendChild(app);
</script>
```
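Note: the code added in this commit (chat_utils.py, database_setup.py) calls OpenAI models via langchain_openai, which reads the OPENAI_API_KEY environment variable, so the Hugging Face token from the README is not sufficient on its own. A minimal pre-flight check, as a sketch (the variable name is the langchain_openai default):

```python
import os
import sys

# langchain_openai's ChatOpenAI / OpenAIEmbeddings read OPENAI_API_KEY;
# HUGGINGFACEHUB_API_TOKEN alone will not authenticate these calls.
if not os.getenv("OPENAI_API_KEY"):
    sys.exit("OPENAI_API_KEY is not set; export it before running app.py")
```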
app.py
ADDED
@@ -0,0 +1,105 @@

```python
# Version modified by ChatGPT
import os
import sys
import uuid
from typing import List, Dict, Any

import gradio as gr

from database_setup import initialize_database
from chat_utils import create_rag_chain, format_sources, create_session_history_manager
from langchain_core.runnables.history import RunnableWithMessageHistory

print("Inicjalizacja bazy danych...")
baza = initialize_database()
if baza is None:
    print("Nie udało się zainicjalizować bazy danych. Zakończenie pracy.")
    sys.exit(1)

rag_chain = create_rag_chain(baza)
get_session_history = create_session_history_manager()
conversational_rag_chain = RunnableWithMessageHistory(
    rag_chain,
    get_session_history,
    input_messages_key="input",
    history_messages_key="chat_history",
    output_messages_key="answer",
)

def respond(user_input: str, messages: List[Dict[str, Any]] | None, sess_id: str | None):
    """
    Handles the response to a user message.
    Input/output is a list of dicts in the format:
    {"role": "user" | "assistant", "content": "text"}
    """
    # Make sure we have a list of messages
    if messages is None:
        messages = []

    # Append the user message to the UI
    messages.append({"role": "user", "content": user_input})

    # Ensure a string session_id (State should not hold a function)
    sid = sess_id if isinstance(sess_id, str) and sess_id else str(uuid.uuid4())

    try:
        result = conversational_rag_chain.invoke(
            {"input": user_input},
            config={"configurable": {"session_id": sid}},
        )
    except Exception as e:
        messages.append({"role": "assistant", "content": f"Błąd podczas przetwarzania: {e}"})
        return messages

    context_docs = result.get("context", [])
    # Similarity debug output (without interrupting execution)
    try:
        debug_scores = baza.similarity_search_with_score(user_input, k=len(context_docs) or 4)
        for i, (doc, score) in enumerate(debug_scores):
            print(f"Chunk {i+1}: similarity_score={score}, title={doc.metadata.get('title')}")
    except Exception:
        pass

    sources_md = format_sources(context_docs)
    answer = result.get("answer") or ""
    answer_with_sources = f"{answer}\n\nŹródła:\n{sources_md}"

    messages.append({"role": "assistant", "content": answer_with_sources})
    return messages


with gr.Blocks(theme=gr.themes.Soft(primary_hue="blue"), title="Szuflada Chatbot") as demo:
    # State must hold a value, not a function
    session_id = gr.State(str(uuid.uuid4()))

    gr.Markdown(
        "# Czat z Moją Szufladą\n"
        "### Zadaj pytanie na temat treści ze strony mojaszuflada.pl"
    )

    # New message format
    chatbot = gr.Chatbot(label="Rozmowa", height=500, type="messages")

    with gr.Row():
        msg = gr.Textbox(
            show_label=False,
            placeholder="Wpisz swoje pytanie...",
            container=False,
            scale=7,
        )
        submit_btn = gr.Button("Wyślij", variant="primary", scale=1)

    # The argument order must match the respond signature:
    # (user_input, messages, sess_id)
    submit_btn.click(respond, [msg, chatbot, session_id], [chatbot]) \
        .then(lambda: gr.update(value=""), None, [msg], queue=False)
    msg.submit(respond, [msg, chatbot, session_id], [chatbot]) \
        .then(lambda: gr.update(value=""), None, [msg], queue=False)

if __name__ == "__main__":
    # On HF Spaces, bind to all interfaces and use the port from ENV
    demo.launch(
        server_name="0.0.0.0",
        server_port=int(os.getenv("PORT", 7860)),
        ssr_mode=False
    )
```
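Since respond() only appends to the messages list and invokes the conversational chain, it can be smoke-tested without the Gradio UI. A hypothetical check (importing app runs the module-level scraping and database initialization, so network access and API keys must be available):

```python
# Hypothetical smoke test for the respond() handler in app.py.
from app import respond

history = respond("O czym jest ten blog?", [], "test-session")
assert history[0]["role"] == "user"  # the user turn is recorded first
print(history[-1]["content"])        # assistant answer plus the "Źródła:" list
```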
chat_utils.py
ADDED
@@ -0,0 +1,87 @@

```python
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain.chains import create_history_aware_retriever, create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

def create_rag_chain(database: Chroma):
    """
    Builds a RAG chain with conversation-history support.
    """
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.0)
    retriever = database.as_retriever(search_kwargs={"k": 3})

    # Prompt for rewriting the question so it stands on its own
    contextualize_q_system_prompt = (
        "Biorąc pod uwagę historię czatu i ostatnie pytanie użytkownika, "
        "które może odnosić się do kontekstu w historii czatu, "
        "sformułuj samodzielne pytanie, które można zrozumieć bez historii czatu. "
        "NIE odpowiadaj na pytanie, po prostu przeformułuj je, jeśli to konieczne, "
        "a w przeciwnym razie zwróć je w niezmienionej formie."
    )
    contextualize_q_prompt = ChatPromptTemplate.from_messages([
        ("system", contextualize_q_system_prompt),
        MessagesPlaceholder("chat_history"),
        ("human", "{input}"),
    ])

    history_aware_retriever = create_history_aware_retriever(
        llm, retriever, contextualize_q_prompt
    )

    # Prompt for generating the answer
    qa_system_prompt = (
        "Jesteś asystentem do zadawania pytań i odpowiedzi na temat treści ze strony mojaszuflada.pl. "
        "Użyj poniższych fragmentów odzyskanego kontekstu, aby odpowiedzieć na pytanie. "
        "Odpowiadaj zawsze w języku polskim. "
        "Jeśli nie znasz odpowiedzi, po prostu powiedz, że tego nie wiesz. "
        "Zachowaj zwięzłość odpowiedzi, ale bądź pomocny i przyjazny."
        "\n\n{context}"
    )
    qa_prompt = ChatPromptTemplate.from_messages([
        ("system", qa_system_prompt),
        MessagesPlaceholder("chat_history"),
        ("human", "{input}"),
    ])

    question_answer_chain = create_stuff_documents_chain(llm, qa_prompt)
    rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)

    return rag_chain

def format_sources(source_docs):
    """
    Formats the list of sources for display in the answer.
    """
    if not source_docs:
        return "?"

    sources = []
    for doc in source_docs:
        metadata = doc.metadata
        title = metadata.get("title", "Brak tytułu")
        source_url = metadata.get("source", "Brak URL")

        pub_date_raw = metadata.get("published_time")
        if pub_date_raw:
            pub_date = pub_date_raw.split("T")[0]
            sources.append(f"- [{title}]({source_url}) ({pub_date})")
        else:
            sources.append(f"- [{title}]({source_url})")
    return "\n".join(sources)

def create_session_history_manager():
    """
    Creates a session-history manager: a closure that returns a separate
    ChatMessageHistory for each session_id.
    """
    store = {}

    def get_session_history(session_id: str) -> BaseChatMessageHistory:
        if session_id not in store:
            store[session_id] = ChatMessageHistory()
        return store[session_id]

    return get_session_history
```
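create_session_history_manager() memoizes one ChatMessageHistory per session_id, which is what lets RunnableWithMessageHistory keep concurrent conversations separate. A small sketch of that behavior:

```python
# Sketch: each session_id maps to its own, stable history object.
from chat_utils import create_session_history_manager

get_history = create_session_history_manager()
a = get_history("session-a")
b = get_history("session-b")
a.add_user_message("Cześć!")

assert get_history("session-a") is a  # repeated lookups return the same object
assert len(b.messages) == 0           # other sessions are unaffected
```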
database_setup.py
ADDED
@@ -0,0 +1,159 @@

```python
from bs4 import BeautifulSoup
import re
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_core.documents import Document
import requests
from tqdm import tqdm

def process_documents(docs: list[Document]) -> list[Document]:
    """
    Processes a list of documents, extracting content and metadata from the HTML.
    """
    processed_docs = []
    for doc in docs:
        soup = BeautifulSoup(doc.page_content, "lxml")

        # Extract the main content
        article = soup.find("article")
        if article:
            content = article.get_text(separator="\n", strip=True)
        else:
            content = soup.get_text(separator="\n", strip=True)

        # Extract metadata
        metadata = doc.metadata.copy()

        # Title from the <title> tag
        if soup.title:
            title_text = soup.title.get_text(strip=True)
            if title_text:
                metadata["title"] = title_text

        # Publication date
        pub_date_tag = soup.find("meta", property="article:published_time")
        if pub_date_tag and pub_date_tag.get("content"):
            metadata["published_time"] = pub_date_tag["content"]
        else:
            time_tag = soup.find("time")
            if time_tag and time_tag.get("datetime"):
                metadata["published_time"] = time_tag.get("datetime")
            elif time_tag and time_tag.get_text(strip=True):
                metadata["published_time"] = time_tag.get_text(strip=True)
            else:
                text = soup.get_text(separator="\n", strip=True)
                m = re.search(r"Opublikowano(?: w dniu)?[:\s]+([0-9]{1,2}\s+\w+\s+\d{4})", text, re.IGNORECASE)
                if m:
                    metadata["published_time"] = m.group(1)

        # Categories
        categories = [
            tag["content"]
            for tag in soup.find_all("meta", property="article:section")
            if tag.get("content")
        ]
        if categories:
            metadata["categories"] = ", ".join(categories)

        # Keywords
        keywords = [
            tag["content"]
            for tag in soup.find_all("meta", property="article:tag")
            if tag.get("content")
        ]
        if keywords:
            metadata["keywords"] = ", ".join(keywords)

        processed_docs.append(Document(page_content=content, metadata=metadata))
    return processed_docs

def initialize_database(persist_directory="./szuflada", clear_existing=True):
    """
    Initializes the Chroma database with data from mojaszuflada.pl.
    """
    embedder = OpenAIEmbeddings(model="text-embedding-3-small", show_progress_bar=True)
    baza = Chroma(collection_name="szuflada", embedding_function=embedder, persist_directory=persist_directory)

    if clear_existing:
        print("Czyszczenie istniejącej kolekcji w bazie danych...")
        try:
            baza.delete_collection()
            print("Kolekcja została wyczyszczona.")
            baza = Chroma(collection_name="szuflada", embedding_function=embedder, persist_directory=persist_directory)
        except Exception as e:
            print(f"Nie można było wyczyścić kolekcji (może nie istniała): {e}")

    print("Pobieranie i parsowanie mapy strony...")
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'}
    sitemap_url = "https://mojaszuflada.pl/wp-sitemap.xml"
    docs = []

    try:
        response = requests.get(sitemap_url, headers=headers)
        response.raise_for_status()
        sitemap_xml = response.text

        sitemap_soup = BeautifulSoup(sitemap_xml, "xml")
        urls = [loc.text for loc in sitemap_soup.find_all("loc")]

        sitemap_urls = [url for url in urls if url.endswith(".xml")]
        page_urls = [url for url in urls if not url.endswith(".xml")]

        for sub_sitemap_url in tqdm(sitemap_urls, desc="Parsowanie pod-map"):
            try:
                response = requests.get(sub_sitemap_url, headers=headers)
                response.raise_for_status()
                sub_sitemap_xml = response.text
                sub_sitemap_soup = BeautifulSoup(sub_sitemap_xml, "xml")
                page_urls.extend([loc.text for loc in sub_sitemap_soup.find_all("loc")])
            except requests.RequestException as e:
                print(f"Pominięto pod-mapę {sub_sitemap_url}: {e}")

        print(f"Znaleziono {len(page_urls)} adresów URL do przetworzenia.")

        for url in tqdm(page_urls, desc="Pobieranie stron"):
            try:
                response = requests.get(url, headers=headers)
                response.raise_for_status()
                doc = Document(
                    page_content=response.text,
                    metadata={"source": url, "loc": url}
                )
                docs.append(doc)
            except requests.RequestException as e:
                print(f"Pominięto stronę {url}: {e}")

    except requests.RequestException as e:
        print(f"Krytyczny błąd: Nie udało się pobrać głównej mapy strony: {e}")

    if not docs:
        print("Nie załadowano żadnych dokumentów.")
        return None

    processed_docs = process_documents(docs)
    print(f"\nPrzetworzono {len(processed_docs)} dokumentów.")

    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
    chunks = text_splitter.split_documents(processed_docs)

    # Metadata validation
    required_meta_keys = ["source", "title", "published_time"]
    missing_counts = {k: 0 for k in required_meta_keys}
    for chunk in chunks:
        md = chunk.metadata or {}
        for k in required_meta_keys:
            if not md.get(k):
                missing_counts[k] += 1

    print(f"Liczba chunków: {len(chunks)}")
    print("Braki metadanych:", missing_counts)

    # Add the chunks to the database in batches
    batch_size = 1000
    for i in range(0, len(chunks), batch_size):
        baza.add_documents(documents=chunks[i:i + batch_size])

    print("Baza danych została zainicjalizowana pomyślnie.")
    return baza
```
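Because initialize_database() persists the collection to ./szuflada, the store can be reopened for queries without re-scraping (and without the clear_existing wipe). A sketch with the same parameters as above and a sample query:

```python
# Sketch: reopen the persisted Chroma collection for read-only queries.
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings

embedder = OpenAIEmbeddings(model="text-embedding-3-small")
baza = Chroma(
    collection_name="szuflada",
    embedding_function=embedder,
    persist_directory="./szuflada",
)
for doc, score in baza.similarity_search_with_score("historia bloga", k=3):
    print(f"{score:.3f}", doc.metadata.get("title"))
```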
requirements.txt
ADDED
@@ -0,0 +1,11 @@

```text
beautifulsoup4==4.13.4
langchain==0.3.27
langchain_chroma==0.2.5
langchain_community==0.3.27
langchain_core==0.3.74
langchain_openai==0.3.30
gradio==4.44.1  # WP: added the two Gradio lines
gradio_client==1.3.0
Requests==2.32.4
tqdm==4.66.4
lxml
```
scrap.py
ADDED
@@ -0,0 +1,177 @@

```python
# This file is responsible for scraping data from the site.
from langchain_community.document_loaders import SitemapLoader
from bs4 import BeautifulSoup
import re
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_core.documents import Document
import requests
from tqdm import tqdm

def process_documents(docs: list[Document]) -> list[Document]:
    """
    Processes a list of documents, extracting content and metadata from the HTML.
    """
    processed_docs = []
    for doc in docs:
        soup = BeautifulSoup(doc.page_content, "lxml")

        # Extract the main content
        article = soup.find("article")
        if article:
            content = article.get_text(separator="\n", strip=True)
        else:
            content = soup.get_text(separator="\n", strip=True)

        # Extract metadata
        metadata = doc.metadata.copy()  # copy the existing metadata (e.g. source)

        # Title: as suggested, taken only from the <title> tag
        if soup.title:
            title_text = soup.title.get_text(strip=True)
            if title_text:
                metadata["title"] = title_text

        # Publication date
        # Published time: prefer meta[property=article:published_time], then <time>, then regex search
        pub_date_tag = soup.find("meta", property="article:published_time")
        if pub_date_tag and pub_date_tag.get("content"):
            metadata["published_time"] = pub_date_tag["content"]
        else:
            time_tag = soup.find("time")
            if time_tag and time_tag.get("datetime"):
                metadata["published_time"] = time_tag.get("datetime")
            elif time_tag and time_tag.get_text(strip=True):
                metadata["published_time"] = time_tag.get_text(strip=True)
            else:
                # Polish pages often have 'Opublikowano w dniu 8 marca 2011' as plain text
                text = soup.get_text(separator="\n", strip=True)
                m = re.search(r"Opublikowano(?: w dniu)?[:\s]+([0-9]{1,2}\s+\w+\s+\d{4})", text, re.IGNORECASE)
                if m:
                    metadata["published_time"] = m.group(1)

        # Categories
        categories = [
            tag["content"]
            for tag in soup.find_all("meta", property="article:section")
            if tag.get("content")
        ]
        if categories:
            metadata["categories"] = ", ".join(categories)

        # Keywords (tags)
        keywords = [
            tag["content"]
            for tag in soup.find_all("meta", property="article:tag")
            if tag.get("content")
        ]
        if keywords:
            metadata["keywords"] = ", ".join(keywords)

        processed_docs.append(Document(page_content=content, metadata=metadata))
    return processed_docs


embedder = OpenAIEmbeddings(model="text-embedding-3-small", show_progress_bar=True)

baza = Chroma(collection_name="szuflada", embedding_function=embedder, persist_directory="./szuflada")

# --- ADDED SECTION ---
# Clear the existing collection before adding new data.
# This ensures we work on fresh data with metadata.
print("Czyszczenie istniejącej kolekcji w bazie danych...")
try:
    baza.delete_collection()
    print("Kolekcja została wyczyszczona.")
    # After deleting the collection, the Chroma object must be re-initialized
    baza = Chroma(collection_name="szuflada", embedding_function=embedder, persist_directory="./szuflada")
except Exception as e:
    print(f"Nie można było wyczyścić kolekcji (może nie istniała): {e}")
# --- END OF ADDED SECTION ---


# --- New data-loading logic ---
print("Pobieranie i parsowanie mapy strony...")
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'}
sitemap_url = "https://mojaszuflada.pl/wp-sitemap.xml"
docs = []

try:
    response = requests.get(sitemap_url, headers=headers)
    response.raise_for_status()
    sitemap_xml = response.text

    sitemap_soup = BeautifulSoup(sitemap_xml, "xml")
    urls = [loc.text for loc in sitemap_soup.find_all("loc")]

    sitemap_urls = [url for url in urls if url.endswith(".xml")]
    page_urls = [url for url in urls if not url.endswith(".xml")]

    for sub_sitemap_url in tqdm(sitemap_urls, desc="Parsowanie pod-map"):
        try:
            response = requests.get(sub_sitemap_url, headers=headers)
            response.raise_for_status()
            sub_sitemap_xml = response.text
            sub_sitemap_soup = BeautifulSoup(sub_sitemap_xml, "xml")
            page_urls.extend([loc.text for loc in sub_sitemap_soup.find_all("loc")])
        except requests.RequestException as e:
            print(f"Pominięto pod-mapę {sub_sitemap_url}: {e}")

    print(f"Znaleziono {len(page_urls)} adresów URL do przetworzenia.")

    for url in tqdm(page_urls, desc="Pobieranie stron"):
        try:
            response = requests.get(url, headers=headers)
            response.raise_for_status()
            doc = Document(
                page_content=response.text,
                metadata={"source": url, "loc": url}
            )
            docs.append(doc)
        except requests.RequestException as e:
            print(f"Pominięto stronę {url}: {e}")

except requests.RequestException as e:
    print(f"Krytyczny błąd: Nie udało się pobrać głównej mapy strony: {e}")
    # docs will be empty and the script will exit gracefully later

if not docs:
    print("Nie załadowano żadnych dokumentów. Zakończenie pracy.")
    exit()


processed_docs = process_documents(docs)

print("\nPrzykładowe metadane przetworzonych dokumentów (pierwsze 5):")
for pd in processed_docs[:5]:
    print(pd.metadata)

text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = text_splitter.split_documents(processed_docs)


batch_size = 1000
# --- METADATA VALIDATION FOR THE CHUNKS ---
# Check that every chunk carries the expected metadata (source, title, publication date)
required_meta_keys = ["source", "title", "published_time"]
missing_counts = {k: 0 for k in required_meta_keys}
for chunk in chunks:
    md = chunk.metadata or {}
    for k in required_meta_keys:
        if not md.get(k):
            missing_counts[k] += 1

print(f"Liczba chunków: {len(chunks)}")
print("Braki metadanych (liczba chunków bez klucza/wartości):", missing_counts)
print("Przykładowe metadane dla pierwszych 5 chunków:")
for sample in chunks[:5]:
    print(sample.metadata)
# --- END OF VALIDATION ---

for i in range(0, len(chunks), batch_size):
    baza.add_documents(documents=chunks[i:i + batch_size])
```