Commit 7e85729

feat: Add cluster analysis and semantic filtering modules

- Implemented `cluster_page.py` for PNRR project clustering analysis with Streamlit UI.
- Created `column_query_agent.py` to handle query analysis and column mapping using LLM.
- Developed `create.py` for building and loading FAISS indexes from DataFrames.
- Added `semantic_filter.py` for applying semantic filtering on PNRR projects based on user queries.
- Introduced `search.py` for performing multi-column semantic searches using FAISS.
- Added relevant queries documentation for better user guidance.
- Updated requirements.txt with necessary dependencies for the new features.
- Configured Streamlit settings for increased file upload size.
Files changed:

- .env.example +6 -0
- .gitignore +18 -0
- Dockerfile +28 -0
- README.md +150 -0
- app.py +78 -0
- docker/compose.base.yaml +15 -0
- modules/cluster_analysis.py +681 -0
- modules/cluster_page.py +350 -0
- modules/column_query_agent.py +83 -0
- modules/create.py +75 -0
- modules/fixtures/Scheda metadatazione_Progetti_Lozalizzazioni_PNRR_Italiadomani_V2.xlsx +0 -0
- modules/home.py +113 -0
- modules/search.py +46 -0
- modules/semantic_filter.py +120 -0
- relevant-queries.md +77 -0
- requirements.txt +17 -0
- streamlit_config/config.toml +5 -0
**.env.example** (added):

```bash
OPENAI_API_KEY='<your-api-key>'

# Login credentials (format: username:password)
LOGIN_USER1=admin:password_admin
LOGIN_USER2=<username>:<password>
LOGIN_USER3=<username>:<password>
```
**.gitignore** (added):

```
__pycache__/

# Jupyter Notebook
.ipynb_checkpoints

# Environments
.env

data/
faiss_index
semantic_filter_results.xlsx

# JetBrains IDE
.idea/

*.xlsx
```
**Dockerfile** (added):

```dockerfile
FROM python:3.9

# Create a user with a specified UID
RUN useradd -m -u 1000 user

# Set working directory
WORKDIR /app

# Copy and install dependencies as root (for permissions)
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir --upgrade pip \
    && pip install --no-cache-dir --upgrade -r requirements.txt

# Create results directory with proper permissions
RUN mkdir -p /app/results && chown -R user:user /app/results

# Copy application files
COPY --chown=user:user . /app

# Switch to the user
USER user

ENV PATH="/home/user/.local/bin:$PATH"
ENV PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python

# Expose the port for huggingface spaces
EXPOSE 7860
CMD ["streamlit", "run", "app.py", "--server.port=7860", "--server.address=0.0.0.0"]
```
**README.md** (added):

# PNRR Data Processor

A web application for analyzing and processing data on PNRR (Piano Nazionale di Ripresa e Resilienza) projects. The application offers AI-based analysis tools for identifying patterns, thematic groupings, and semantically filtering projects.

## 🚀 Setup and Execution (Docker)

### Prerequisites
- Docker and Docker Compose installed
- An Excel file with the PNRR project data

### Environment Configuration

1. **Create the environment configuration file:**
   ```bash
   cp .env.example .env
   ```

2. **Set the OpenAI API key in the `.env` file:**
   ```
   OPENAI_API_KEY='sk-xxxxxxxxxxxxxxxxxxxxxxx'
   ```

   To obtain an OpenAI API key:
   - Sign up or log in to [OpenAI Platform](https://platform.openai.com)
   - Go to the "API Keys" section of your profile
   - Create a new API key
   - Copy the key into the `.env` file

### Starting the Application

```bash
docker compose -f docker/compose.base.yaml up --build
```

The application will be available at: **http://localhost:8501**

### Monitoring

To view the container logs:
```bash
docker container logs -f semantic_filter
```

## 🎯 Application Features

### 🔍 Semantic Filter

**Purpose**: Automatically identify relevant PNRR projects from natural-language queries.

**How it works**:
- Uses AI models to understand the semantic meaning of project descriptions
- Compares the user's query against project content to find conceptual matches
- Assigns each project a confidence score based on relevance

**How to use it**:
1. Upload the Excel file containing the PNRR projects
2. Set the confidence threshold (0.0-1.0) used to filter the results
3. Write a descriptive natural-language query (e.g. "school digitalization projects", "sustainable infrastructure", "urban regeneration")
4. Choose whether to append the results as a new column or create a new file
5. Run the search and download the results

**Output**: An Excel file with the filtered projects and their relevance scores
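At its core, the filtering step described above amounts to embedding the query and each project text, then keeping projects whose cosine similarity to the query clears the threshold. A minimal sketch of that thresholding logic (toy vectors stand in for real embeddings; this is not the app's actual implementation):

```python
def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def semantic_filter(query_vec, project_vecs, threshold):
    # Keep (index, score) pairs whose similarity clears the threshold
    return [(i, cosine(query_vec, v))
            for i, v in enumerate(project_vecs)
            if cosine(query_vec, v) >= threshold]

# Toy 2D "embeddings": project 0 is close to the query, project 1 is orthogonal
q = [1.0, 0.0]
projects = [[0.9, 0.1], [0.0, 1.0], [0.7, 0.7]]
hits = semantic_filter(q, projects, threshold=0.6)
```

Raising the threshold toward 1.0 keeps only near-exact conceptual matches; lowering it admits looser ones.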
### 🎯 Cluster Analysis

**Purpose**: Automatically group similar projects into thematic clusters to identify recurring patterns and common investment areas.

**How it works**:
- Analyzes the textual content of the selected columns using machine learning techniques
- Applies smart preprocessing that removes Italian stopwords, common PNRR terms, and user-supplied words
- Groups projects with similar characteristics into thematic clusters using semantic embeddings
- Automatically generates a title, description, and keywords for each cluster via AI
- Computes project distribution statistics

**How to use it**:
1. Upload the Excel file containing the PNRR projects
2. Select the text columns to use for clustering (e.g. project title, description, summary)
3. Configure the parameters:
   - **Automatic**: the algorithm determines the optimal number of clusters
   - **Manual**: specify a fixed number of clusters
4. **Customize the blacklist** (optional):
   - Add specific words to exclude from the analysis
   - Enter terms that are too generic or irrelevant for your context
   - Words can be entered comma-separated or one per line
   - Examples: frequently occurring agency names, common technical terms, recurring locations
5. Start the analysis and wait for it to complete
6. Explore the results in the generated clusters
7. **🆕 View the PCA plot**: inspect the spatial distribution of the clusters in the interactive chart
8. Download the results:
   - **Cluster Summary**: file with titles, descriptions, and statistics
   - **Data with Cluster ID**: the original file with the cluster identifier appended

**Output**:
- Cluster summary with titles, descriptions, keywords, and sample projects
- Original dataset enriched with each row's cluster ID
- Visualizations of the project distribution per cluster
- **🆕 Interactive PCA plot**: two-dimensional view of the clusters in embedding space

### 📊 PCA Cluster Visualization

**Features**:
- **Dimensionality reduction**: the high-dimensional embeddings are reduced to 2 dimensions via PCA (Principal Component Analysis)
- **Interactive plot**: Plotly visualization with zoom, pan, and hover information
- **Color coding**: each cluster gets a distinctive color for easy identification
- **Detailed information**: hovering shows the cluster title, description, and project count
- **Explained variance**: shows how much of the original variability is preserved in the two principal components

**Interpretation**:
- Nearby points represent semantically similar projects
- Groups of same-colored points show cluster cohesion
- The distance between clusters indicates how thematically different they are
- The explained-variance percentage indicates how faithful the 2D representation is

### 🎯 Custom Blacklist

**Purpose**: Improve cluster quality by excluding irrelevant words specific to your dataset.

**Benefits**:
- **More precise clusters**: removing generic words makes clusters depend on genuinely distinctive terms
- **Fine-grained control**: tailor the analysis to your specific context
- **Flexibility**: try different configurations to optimize the results

**Examples of words to exclude**:
- Overly frequent terms: "progetto", "attività", "servizio"
- Recurring agency names: "comune", "regione", "asl"
- Generic technical words: "sistema", "gestione", "sviluppo"
- Locations common in the dataset: "milano", "roma", "italia"

## 📋 File Format

The application is **fully flexible** about the format of the Excel file:

### ✅ Minimum Requirements
- File in Excel format (.xlsx)
- At least one column containing descriptive text

### 🎯 Adaptability
- **Column names**: can be anything; the interface lists all available columns
- **Data structure**: any structure is supported
- **Dynamic selection**: the user chooses which columns to use for the analysis

### 🏆 Recommended Columns (Optional)
For PNRR projects, columns such as these improve analysis quality:
- **Title/Project Name**: project identification
- **Description/Summary**: detailed descriptive content
- **Sector/Area**: the project's thematic area
- **Entity/Agency**: responsible organization
- **Location**: geographic information

> **💡 Tip**: the more descriptive text is available, the better the semantic analysis and clustering will be, regardless of column names.
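The explained-variance figure the README mentions can be computed directly from the embeddings. A minimal NumPy-only sketch of the 2D projection and its explained-variance fractions (synthetic data; the app itself uses scikit-learn's `PCA` and Plotly for the interactive chart):

```python
import numpy as np

def pca_2d(X):
    """Project rows of X onto the top-2 principal components.

    Returns the 2D coordinates and the fraction of total variance
    each of the two components explains.
    """
    Xc = X - X.mean(axis=0)                 # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    coords = Xc @ Vt[:2].T                  # scores on PC1 and PC2
    var = S ** 2 / (len(X) - 1)             # variance captured per component
    explained = var[:2] / var.sum()         # fraction of total variance
    return coords, explained

# Synthetic stand-in for 8-dimensional sentence embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
coords, explained = pca_2d(X)
```

A low `explained.sum()` warns that the 2D scatter compresses away much of the embedding geometry, which is exactly why the app reports it next to the plot.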
**app.py** (added):

```python
import os
import sys
import streamlit as st
from dotenv import load_dotenv

load_dotenv()
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

from modules.home import main as semantic_filter_main
from modules.cluster_page import main as cluster_main


def load_credentials() -> dict:
    users = {}
    for i in range(1, 4):
        entry = os.getenv(f"LOGIN_USER{i}", "")
        if ":" in entry:
            username, password = entry.split(":", 1)
            users[username] = password
    return users


def show_login() -> None:
    st.title("🔒 Accesso")
    with st.form("login_form"):
        username = st.text_input("Username")
        password = st.text_input("Password", type="password")
        submitted = st.form_submit_button("Accedi")
        if submitted:
            credentials = load_credentials()
            if username in credentials and credentials[username] == password:
                st.session_state.authenticated = True
                st.session_state.username = username
                st.rerun()
            else:
                st.error("Credenziali non valide. Riprova.")


def main() -> None:
    st.sidebar.title("🏛️ PNRR Data Processor")
    st.sidebar.markdown("---")

    page = st.sidebar.radio(
        "Seleziona una funzione:",
        ["🔍 Filtro Semantico", "🎯 Analisi Cluster"],
        format_func=lambda x: x
    )

    st.sidebar.markdown("---")
    st.sidebar.markdown("""
    ### ℹ️ Informazioni

    **Filtro Semantico**: Filtra i progetti PNRR basandosi su query testuali usando ricerca semantica.

    **Analisi Cluster**: Raggruppa automaticamente i progetti in cluster tematici per identificare pattern ricorrenti.
    """)

    if st.sidebar.button("🚪 Logout"):
        st.session_state.authenticated = False
        st.rerun()

    if page == "🔍 Filtro Semantico":
        semantic_filter_main()
    elif page == "🎯 Analisi Cluster":
        cluster_main()


if __name__ == "__main__":
    st.set_page_config(
        page_title="PNRR Data Processor",
        page_icon="🏛️",
        layout="wide",
        initial_sidebar_state="expanded"
    )
    if not st.session_state.get("authenticated", False):
        show_login()
    else:
        main()
```
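One subtle point in the credential loading above is `split(":", 1)`: only the first colon separates username from password, so passwords containing colons survive intact. A standalone illustration of that parsing behavior (hypothetical test values, run outside Streamlit):

```python
import os

def load_credentials() -> dict:
    """Parse LOGIN_USER1..LOGIN_USER3 entries of the form user:password."""
    users = {}
    for i in range(1, 4):
        entry = os.environ.get(f"LOGIN_USER{i}", "")
        if ":" in entry:
            # maxsplit=1 keeps any ':' inside the password intact
            username, password = entry.split(":", 1)
            users[username] = password
    return users

# Illustrative values only; unset entries are simply skipped
os.environ["LOGIN_USER1"] = "admin:pa:ss:word"
os.environ["LOGIN_USER2"] = "alice:secret"
os.environ.pop("LOGIN_USER3", None)
creds = load_credentials()
```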
**docker/compose.base.yaml** (added):

```yaml
services:
  semantic-filter:
    build:
      context: ..
      dockerfile: docker/Dockerfile
    container_name: semantic_filter
    ports:
      - "8501:8501"
    restart: always
    volumes:
      - ..:/app
      - /app/.venv
      - ../streamlit_config/:/home/user/.streamlit
    env_file:
      - ../.env
```
**modules/cluster_analysis.py** (added, excerpt):

```python
import logging
import pandas as pd
import numpy as np
import os
import json
import re
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
from sentence_transformers import SentenceTransformer
from typing import List, Dict, Tuple, Optional
from langchain_openai import ChatOpenAI
from langchain.schema import HumanMessage
import plotly.express as px
import plotly.graph_objects as go


RESULTS_DIR = '/app/results'
SAVE_PATH_CLUSTERS = os.path.join(RESULTS_DIR, 'cluster_results.xlsx')
SAVE_PATH_ORIGINAL = os.path.join(RESULTS_DIR, 'data_with_clusters.xlsx')
EMBEDDING_MODEL_NAME = 'sentence-transformers/all-MiniLM-L6-v2'
LLM_MODEL_NAME = 'gpt-4o-mini'


PNRR_STOPWORDS = {
    'pnrr', 'piano', 'nazionale', 'ripresa', 'resilienza', 'progetto', 'progetti',
    'intervento', 'interventi', 'attività', 'realizzazione', 'sviluppo',
    'implementazione', 'potenziamento', 'miglioramento', 'sostegno',
    'euro', 'milioni', 'miliardi', 'finanziamento', 'investimento',
    'pubblico', 'pubblica', 'amministrazione', 'ente', 'comune', 'regione',
    'italia', 'italiano', 'italiana', 'nazionale'
}

ITALIAN_STOPWORDS = {
    # Articles
    'il', 'lo', 'la', 'i', 'gli', 'le', 'un', 'uno', 'una',
    # Simple prepositions
    'di', 'a', 'da', 'in', 'con', 'su', 'per', 'tra', 'fra',
    # Most common articulated prepositions
    'del', 'dello', 'della', 'dei', 'degli', 'delle',
    'al', 'allo', 'alla', 'ai', 'agli', 'alle',
    'dal', 'dallo', 'dalla', 'dai', 'dagli', 'dalle',
    'nel', 'nello', 'nella', 'nei', 'negli', 'nelle',
    'sul', 'sullo', 'sulla', 'sui', 'sugli', 'sulle',
    # Conjunctions
    'e', 'ed', 'o', 'od', 'ma', 'però', 'anche', 'ancora', 'quindi', 'dunque', 'mentre', 'quando', 'se',
    # Pronouns
    'che', 'chi', 'cui', 'quale', 'quali', 'questo', 'questa', 'questi', 'queste',
    'quello', 'quella', 'quelli', 'quelle', 'stesso', 'stessa', 'stessi', 'stesse',
    # Common adverbs
    'dove', 'come', 'perché', 'già', 'più', 'molto', 'poco', 'tanto', 'quanto', 'sempre', 'mai',
    'oggi', 'ieri', 'domani', 'prima', 'dopo', 'sopra', 'sotto', 'dentro', 'fuori',
    # Indefinite adjectives/pronouns
    'tutto', 'tutti', 'tutte', 'ogni', 'alcuni', 'alcune', 'altro', 'altri', 'altre',
    'nessuno', 'nessuna', 'niente', 'nulla', 'qualche', 'qualcosa', 'qualcuno',
    # Common auxiliary and modal verbs
    'essere', 'avere', 'fare', 'dire', 'andare', 'venire', 'volere', 'potere', 'dovere', 'sapere',
    'stare', 'dare', 'vedere', 'uscire', 'partire',
    # Common context words
    'contesto', 'attraverso', 'mediante', 'presso', 'verso', 'circa', 'oltre', 'secondo', 'durante'
}


def preprocess_text(text: str, remove_domain_stopwords: bool = True, custom_blacklist: Optional[List[str]] = None) -> str:
    """
    Preprocess text by removing stopwords and applying cleaning.

    Args:
        text: Input text
        remove_domain_stopwords: Whether to remove PNRR-specific stopwords
        custom_blacklist: Additional words to exclude (will be added to default stopwords)

    Returns:
        str: Cleaned text
    """
    if not isinstance(text, str):
        return ""

    # Convert to lowercase
    text = text.lower()

    # Remove special characters but keep spaces and accented characters
    text = re.sub(r'[^\w\sàèéìíîòóùú]', ' ', text)

    # Remove numbers that are standalone
    text = re.sub(r'\b\d+\b', ' ', text)

    # Remove extra whitespace
    text = ' '.join(text.split())

    if remove_domain_stopwords:
        # Split into words
        words = text.split()

        # Remove stopwords
        stopwords_to_remove = ITALIAN_STOPWORDS.union(PNRR_STOPWORDS)

        # Add custom blacklist if provided
        if custom_blacklist:
            custom_stopwords = {word.lower().strip()
                                for word in custom_blacklist if word.strip()}
            stopwords_to_remove = stopwords_to_remove.union(custom_stopwords)

        # Filter words: remove stopwords, very short words, and words that are only numbers/special chars
        filtered_words = []
        for word in words:
            if (word not in stopwords_to_remove and
                    len(word) > 2 and
                    not word.isdigit() and
                    re.search(r'[a-zA-Zàèéìíîòóùú]', word)):  # Must contain at least one letter
                filtered_words.append(word)

        # Rejoin
        text = ' '.join(filtered_words)

    return text


def combine_text_columns(df: pd.DataFrame, columns: List[str], preprocess: bool = True, custom_blacklist: Optional[List[str]] = None) -> pd.Series:
    """Combine multiple text columns into a single text representation.

    Args:
        df: DataFrame containing the data
        columns: List of column names to combine
        preprocess: Whether to apply text preprocessing (cleaning and stopword removal)
        custom_blacklist: Additional words to exclude from preprocessing

    Returns:
        pd.Series: Series containing the combined texts for each row
    """
    combined_texts = []
    for idx, row in df.iterrows():
        text_parts = []
        for col in columns:
            if col in df.columns and pd.notna(row[col]):
                text_part = str(row[col])
                if preprocess:
                    text_part = preprocess_text(
                        text_part, custom_blacklist=custom_blacklist)
                text_parts.append(text_part)

        combined_text = " | ".join(text_parts)
        # Additional cleaning for the combined text
        if preprocess:
            combined_text = ' '.join(
                combined_text.split())  # Remove extra spaces

        combined_texts.append(combined_text)
    return pd.Series(combined_texts)


def create_embeddings(texts: List[str], model_name: str = EMBEDDING_MODEL_NAME) -> np.ndarray:
    """Create vector embeddings for texts using sentence transformers.

    Args:
        texts: List of texts to process
        model_name: Name of the model to use for embeddings

    Returns:
        np.ndarray: Numpy array containing the vector embeddings
    """
    logging.info(f"Creating embeddings with model: {model_name}")
    model = SentenceTransformer(model_name)
    embeddings = model.encode(texts, show_progress_bar=True)
    return embeddings


def perform_clustering(embeddings: np.ndarray, n_clusters: Optional[int] = None, max_clusters: int = 20, min_clusters: int = 2) -> Tuple[np.ndarray, int]:
    """Perform K-means clustering on vector embeddings.

    Args:
        embeddings: Numpy array of embeddings
        n_clusters: Fixed number of clusters (if None, determined automatically)
        max_clusters: Maximum number of clusters for automatic selection
        min_clusters: Minimum number of clusters for automatic selection

    Returns:
        Tuple[np.ndarray, int]: Tuple containing cluster labels and final number of clusters
    """
    if n_clusters is None:
        # Use elbow method to find optimal number of clusters
        n_clusters = find_optimal_clusters(embeddings, max_clusters, min_clusters)

    logging.info(f"Performing clustering with {n_clusters} clusters")
    kmeans = KMeans(n_clusters=n_clusters, random_state=42, n_init=10)
    cluster_labels = kmeans.fit_predict(embeddings)

    return cluster_labels, n_clusters


def find_optimal_clusters(embeddings: np.ndarray, max_clusters: int = 20, min_clusters: int = 2) -> int:
    """Find optimal number of clusters using the elbow method.

    Args:
        embeddings: Numpy array of embeddings
        max_clusters: Maximum number of clusters to test
        min_clusters: Minimum number of clusters to test

    Returns:
        int: Optimal number of clusters determined
    """
    if len(embeddings) < max_clusters:
        max_clusters = len(embeddings) - 1

    # Ensure min_clusters is at least 2 and not greater than max_clusters
    min_clusters = max(2, min_clusters)
    if min_clusters > max_clusters:
        min_clusters = max_clusters

    if max_clusters < 2:
        return 2

    inertias = []
    K_range = range(min_clusters, min(max_clusters + 1, len(embeddings)))

    for k in K_range:
        kmeans = KMeans(n_clusters=k, random_state=42, n_init=10)
        kmeans.fit(embeddings)
        inertias.append(kmeans.inertia_)

    # Simple elbow detection
    if len(inertias) < 2:
        return min_clusters

    # Calculate the rate of change
    deltas = np.diff(inertias)
    delta_deltas = np.diff(deltas)

    # Find the point where the rate of change starts to flatten
    if len(delta_deltas) > 0:
        elbow_idx = np.argmax(delta_deltas) + min_clusters  # Start from min_clusters
        return max(min_clusters, min(elbow_idx, max_clusters))

    return min_clusters
```
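The elbow detection in `find_optimal_clusters` picks the k where the inertia curve's decrease flattens most sharply, via the largest second difference. A standalone sketch of just that arithmetic on a synthetic inertia curve (NumPy only, no clustering; the indexing mirrors the function above):

```python
import numpy as np

def detect_elbow(inertias, min_clusters=2):
    """Pick the k whose inertia drop flattens most (largest second difference).

    `inertias[0]` is assumed to correspond to k = min_clusters.
    """
    deltas = np.diff(inertias)       # first differences (negative, shrinking)
    delta_deltas = np.diff(deltas)   # second differences (curvature)
    if len(delta_deltas) == 0:
        return min_clusters
    return int(np.argmax(delta_deltas)) + min_clusters

# Synthetic inertia values for k = 2..8: steep drops, then a flat tail
inertias = [100.0, 60.0, 30.0, 25.0, 22.0, 20.0, 19.0]
k = detect_elbow(inertias)
```

Note the heuristic returns the k at the start of the sharpest bend, so it leans toward fewer clusters on curves with a long flat tail.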
```python
def generate_cluster_description(cluster_texts: List[str], cluster_id: int) -> Tuple[str, str]:
    """Generate title and description for a cluster using LLM.

    Args:
        cluster_texts: List of texts belonging to the cluster
        cluster_id: Numeric ID of the cluster

    Returns:
        Tuple[str, str]: Tuple containing title and description of the cluster
    """
    try:
        # Sample up to 10 texts for analysis to avoid token limits
        sample_texts = cluster_texts[:10] if len(cluster_texts) > 10 else cluster_texts

        # Create a concise sample for the LLM
        text_sample = "\n".join([f"- {text[:200]}" for text in sample_texts])

        llm = ChatOpenAI(model=LLM_MODEL_NAME, temperature=0.3)

        prompt = f"""
        Analizza i seguenti progetti PNRR e identifica il tema comune che li accomuna.
        Devi fornire un titolo breve (max 50 caratteri) e una descrizione concisa (max 150 caratteri) che catturi l'essenza di questi progetti.

        Progetti del cluster {cluster_id + 1}:
        {text_sample}

        Rispondi in formato JSON con le chiavi "titolo" e "descrizione".
        Il titolo deve essere specifico e descrittivo del tema comune.
        La descrizione deve spiegare brevemente cosa accomuna questi progetti.

        Esempio di risposta:
        {{
            "titolo": "Digitalizzazione Sanità",
            "descrizione": "Progetti di migrazione cloud e infrastrutture digitali per aziende sanitarie"
        }}
        """

        response = llm.invoke([HumanMessage(content=prompt)])
        response_content = response.content.strip()
        logging.info(f"LLM Response for cluster {cluster_id}: {response_content}")

        try:
            result = json.loads(response_content)
            title = result.get("titolo", f"Cluster {cluster_id + 1}")[:50]
            description = result.get("descrizione", "Cluster di progetti correlati")[:150]
        except json.JSONDecodeError:
            try:
                # Try to extract JSON from the response using regex
                json_match = re.search(r'\{[^}]*"titolo"[^}]*"descrizione"[^}]*\}', response_content, re.DOTALL)
                if json_match:
                    json_str = json_match.group(0)
                    result = json.loads(json_str)
                    title = result.get("titolo", f"Cluster {cluster_id + 1}")[:50]
                    description = result.get("descrizione", "Cluster di progetti correlati")[:150]
                else:
                    # If no valid JSON found, try to extract title and description manually
                    title_match = re.search(r'"titolo":\s*"([^"]+)"', response_content)
                    desc_match = re.search(r'"descrizione":\s*"([^"]+)"', response_content)

                    title = title_match.group(1)[:50] if title_match else f"Cluster {cluster_id + 1}"
```
|
| 297 |
+
description = desc_match.group(1)[:150] if desc_match else "Cluster di progetti correlati"
|
| 298 |
+
except (json.JSONDecodeError, AttributeError) as e:
|
| 299 |
+
# Final fallback
|
| 300 |
+
logging.warning(f"Failed to parse JSON for cluster {cluster_id}: {e}")
|
| 301 |
+
title = f"Cluster {cluster_id + 1}"
|
| 302 |
+
description = "Cluster di progetti correlati"
|
| 303 |
+
|
| 304 |
+
except Exception as e:
|
| 305 |
+
logging.warning(f"Error generating description for cluster {cluster_id}: {e}")
|
| 306 |
+
title = f"Cluster {cluster_id + 1}"
|
| 307 |
+
description = f"Cluster contenente {len(cluster_texts)} progetti correlati"
|
| 308 |
+
|
| 309 |
+
return title, description
|
| 310 |
+
|
| 311 |
+
|
| 312 |
+
def extract_keywords(cluster_texts: List[str], top_k: int = 5, custom_blacklist: Optional[List[str]] = None) -> List[str]:
|
| 313 |
+
"""Extract top keywords from cluster texts using TF-IDF with advanced filtering.
|
| 314 |
+
|
| 315 |
+
Args:
|
| 316 |
+
cluster_texts: List of cluster texts
|
| 317 |
+
top_k: Maximum number of keywords to extract
|
| 318 |
+
custom_blacklist: List of words to exclude from extraction
|
| 319 |
+
|
| 320 |
+
Returns:
|
| 321 |
+
List[str]: List of the most relevant keywords
|
| 322 |
+
"""
|
| 323 |
+
if not cluster_texts:
|
| 324 |
+
return []
|
| 325 |
+
|
| 326 |
+
try:
|
| 327 |
+
# Create custom stopwords list combining Italian, PNRR, and custom blacklist
|
| 328 |
+
custom_stopwords = ITALIAN_STOPWORDS.union(PNRR_STOPWORDS)
|
| 329 |
+
|
| 330 |
+
# Add custom blacklist
|
| 331 |
+
if custom_blacklist:
|
| 332 |
+
custom_stopwords_set = {word.lower().strip() for word in custom_blacklist if word.strip()}
|
| 333 |
+
custom_stopwords = custom_stopwords.union(custom_stopwords_set)
|
| 334 |
+
|
| 335 |
+
# Convert to list for TfidfVectorizer
|
| 336 |
+
stopwords_list = list(custom_stopwords)
|
| 337 |
+
|
| 338 |
+
# First pass: get more candidates
|
| 339 |
+
vectorizer = TfidfVectorizer(
|
| 340 |
+
max_features=200, # Increased to get more candidates
|
| 341 |
+
stop_words=stopwords_list,
|
| 342 |
+
ngram_range=(1, 3), # Include trigrams for better context
|
| 343 |
+
min_df=2, # Appear in at least 2 documents
|
| 344 |
+
token_pattern=r'\b[a-zA-ZÀ-ÿ]{3,}\b' # Only words with 3+ characters, including accented
|
| 345 |
+
)
|
| 346 |
+
|
| 347 |
+
tfidf_matrix = vectorizer.fit_transform(cluster_texts)
|
| 348 |
+
feature_names = vectorizer.get_feature_names_out()
|
| 349 |
+
|
| 350 |
+
# Get mean TF-IDF scores
|
| 351 |
+
mean_scores = np.mean(tfidf_matrix.toarray(), axis=0)
|
| 352 |
+
|
| 353 |
+
# Create candidates with scores
|
| 354 |
+
candidates = [(feature_names[i], mean_scores[i]) for i in range(len(feature_names))]
|
| 355 |
+
candidates.sort(key=lambda x: x[1], reverse=True)
|
| 356 |
+
|
| 357 |
+
# Advanced filtering to remove redundant and similar terms
|
| 358 |
+
filtered_keywords = []
|
| 359 |
+
used_words = set()
|
| 360 |
+
|
| 361 |
+
for keyword, score in candidates:
|
| 362 |
+
# Skip if we have enough keywords
|
| 363 |
+
if len(filtered_keywords) >= top_k:
|
| 364 |
+
break
|
| 365 |
+
|
| 366 |
+
# Clean the keyword
|
| 367 |
+
keyword_clean = keyword.lower().strip()
|
| 368 |
+
|
| 369 |
+
# Skip very short words or numbers
|
| 370 |
+
if len(keyword_clean) < 3 or keyword_clean.isdigit():
|
| 371 |
+
continue
|
| 372 |
+
|
| 373 |
+
# Skip if it's essentially a stopword we missed
|
| 374 |
+
if keyword_clean in custom_stopwords:
|
| 375 |
+
continue
|
| 376 |
+
|
| 377 |
+
# Check for redundancy with already selected keywords
|
| 378 |
+
is_redundant = False
|
| 379 |
+
|
| 380 |
+
# Split ngrams to check individual words
|
| 381 |
+
keyword_words = set(keyword_clean.split())
|
| 382 |
+
|
| 383 |
+
# Check if this ngram contains words already used as single keywords
|
| 384 |
+
if len(keyword_words) > 1:
|
| 385 |
+
# If it's a multi-word term, check if we already have the main components
|
| 386 |
+
overlap_with_used = keyword_words.intersection(used_words)
|
| 387 |
+
if len(overlap_with_used) > 0:
|
| 388 |
+
is_redundant = True
|
| 389 |
+
|
| 390 |
+
# Check similarity with existing keywords (basic containment check)
|
| 391 |
+
for existing_keyword in filtered_keywords:
|
| 392 |
+
existing_words = set(existing_keyword.lower().split())
|
| 393 |
+
|
| 394 |
+
# If current keyword is contained in existing or vice versa
|
| 395 |
+
if (keyword_words.issubset(existing_words) or
|
| 396 |
+
existing_words.issubset(keyword_words)):
|
| 397 |
+
is_redundant = True
|
| 398 |
+
break
|
| 399 |
+
|
| 400 |
+
# Check if they share too many words (for multi-word terms)
|
| 401 |
+
if (len(keyword_words) > 1 and len(existing_words) > 1):
|
| 402 |
+
shared_words = keyword_words.intersection(existing_words)
|
| 403 |
+
if len(shared_words) >= min(len(keyword_words), len(existing_words)) * 0.7:
|
| 404 |
+
is_redundant = True
|
| 405 |
+
break
|
| 406 |
+
|
| 407 |
+
if not is_redundant:
|
| 408 |
+
filtered_keywords.append(keyword)
|
| 409 |
+
# Add individual words to used_words set
|
| 410 |
+
used_words.update(keyword_words)
|
| 411 |
+
|
| 412 |
+
return filtered_keywords[:top_k]
|
| 413 |
+
|
| 414 |
+
except Exception as e:
|
| 415 |
+
logging.warning(f"Error extracting keywords: {e}")
|
| 416 |
+
return []
|
| 417 |
+
|
| 418 |
+
|
| 419 |
+
def analyze_clusters(
|
| 420 |
+
data_frame_path,
|
| 421 |
+
selected_columns: List[str],
|
| 422 |
+
n_clusters: Optional[int] = None,
|
| 423 |
+
max_clusters: int = 20,
|
| 424 |
+
min_clusters: int = 2,
|
| 425 |
+
preprocess_text_data: bool = True,
|
| 426 |
+
custom_blacklist: Optional[List[str]] = None
|
| 427 |
+
) -> Tuple[pd.DataFrame, pd.DataFrame, np.ndarray, np.ndarray]:
|
| 428 |
+
"""
|
| 429 |
+
Main function to perform cluster analysis on PNRR projects.
|
| 430 |
+
|
| 431 |
+
Args:
|
| 432 |
+
data_frame_path: Path to the Excel file
|
| 433 |
+
selected_columns: List of column names to use for clustering
|
| 434 |
+
n_clusters: Number of clusters (if None, will be determined automatically)
|
| 435 |
+
max_clusters: Maximum number of clusters for automatic selection
|
| 436 |
+
min_clusters: Minimum number of clusters for automatic selection
|
| 437 |
+
preprocess_text_data: Whether to preprocess text (remove stopwords, clean)
|
| 438 |
+
custom_blacklist: Additional words to exclude from analysis
|
| 439 |
+
|
| 440 |
+
Returns:
|
| 441 |
+
Tuple[pd.DataFrame, pd.DataFrame, np.ndarray, np.ndarray]: Tuple of (cluster_results_df, original_data_with_clusters_df, embeddings, cluster_labels)
|
| 442 |
+
"""
|
| 443 |
+
logging.info(f"Loading DataFrame from {data_frame_path}...")
|
| 444 |
+
df = pd.read_excel(data_frame_path)
|
| 445 |
+
logging.info(f"Loaded DataFrame with {len(df)} rows")
|
| 446 |
+
|
| 447 |
+
available_columns = [col for col in selected_columns if col in df.columns]
|
| 448 |
+
if not available_columns:
|
| 449 |
+
raise ValueError("None of the selected columns are available in the DataFrame")
|
| 450 |
+
|
| 451 |
+
logging.info(f"Using columns for clustering: {available_columns}")
|
| 452 |
+
if preprocess_text_data:
|
| 453 |
+
logging.info(
|
| 454 |
+
"Preprocessing text data (removing stopwords and cleaning)")
|
| 455 |
+
if custom_blacklist:
|
| 456 |
+
logging.info(
|
| 457 |
+
f"Using custom blacklist with {len(custom_blacklist)} additional words")
|
| 458 |
+
|
| 459 |
+
combined_texts = combine_text_columns(
|
| 460 |
+
df, available_columns, preprocess=preprocess_text_data, custom_blacklist=custom_blacklist)
|
| 461 |
+
non_empty_mask = combined_texts.str.strip() != ""
|
| 462 |
+
if non_empty_mask.sum() == 0:
|
| 463 |
+
raise ValueError("No non-empty text found in selected columns")
|
| 464 |
+
|
| 465 |
+
df_filtered = df[non_empty_mask].copy()
|
| 466 |
+
texts_filtered = combined_texts[non_empty_mask].tolist()
|
| 467 |
+
|
| 468 |
+
embeddings = create_embeddings(texts_filtered)
|
| 469 |
+
cluster_labels, final_n_clusters = perform_clustering(embeddings, n_clusters, max_clusters, min_clusters)
|
| 470 |
+
|
| 471 |
+
df_filtered['cluster_id'] = cluster_labels
|
| 472 |
+
|
| 473 |
+
# Generate cluster summaries
|
| 474 |
+
cluster_results = []
|
| 475 |
+
for cluster_id in range(final_n_clusters):
|
| 476 |
+
cluster_mask = cluster_labels == cluster_id
|
| 477 |
+
cluster_texts = [texts_filtered[i] for i in range(len(texts_filtered)) if cluster_mask[i]]
|
| 478 |
+
|
| 479 |
+
if not cluster_texts:
|
| 480 |
+
continue
|
| 481 |
+
|
| 482 |
+
title, description = generate_cluster_description(cluster_texts, cluster_id)
|
| 483 |
+
keywords = extract_keywords(cluster_texts, custom_blacklist=custom_blacklist)
|
| 484 |
+
|
| 485 |
+
cluster_results.append({
|
| 486 |
+
'cluster_id': cluster_id,
|
| 487 |
+
'titolo': title,
|
| 488 |
+
'descrizione': description,
|
| 489 |
+
'num_progetti': len(cluster_texts),
|
| 490 |
+
'keywords': ', '.join(keywords),
|
| 491 |
+
'progetti_campione': ' | '.join(cluster_texts[:3])
|
| 492 |
+
})
|
| 493 |
+
|
| 494 |
+
cluster_df = pd.DataFrame(cluster_results)
|
| 495 |
+
|
| 496 |
+
# Prepare final dataframe with cluster assignments
|
| 497 |
+
# Start with original dataframe and add cluster_id column
|
| 498 |
+
df_with_clusters = df.copy()
|
| 499 |
+
df_with_clusters['cluster_id'] = -1 # Default value for unassigned
|
| 500 |
+
df_with_clusters.loc[non_empty_mask, 'cluster_id'] = cluster_labels
|
| 501 |
+
|
| 502 |
+
logging.info(f"Created {final_n_clusters} clusters")
|
| 503 |
+
logging.info(f"Assigned {len(cluster_labels)} projects to clusters")
|
| 504 |
+
|
| 505 |
+
return cluster_df, df_with_clusters, embeddings, cluster_labels
|
| 506 |
+
|
| 507 |
+
|
| 508 |
+
def save_results(cluster_df: pd.DataFrame, data_with_clusters_df: pd.DataFrame) -> None:
|
| 509 |
+
"""Save clustering results to Excel files.
|
| 510 |
+
|
| 511 |
+
Args:
|
| 512 |
+
cluster_df: DataFrame with cluster results
|
| 513 |
+
data_with_clusters_df: Original DataFrame with assigned cluster IDs
|
| 514 |
+
|
| 515 |
+
Returns:
|
| 516 |
+
None
|
| 517 |
+
"""
|
| 518 |
+
# Ensure the results directory exists
|
| 519 |
+
os.makedirs(RESULTS_DIR, exist_ok=True)
|
| 520 |
+
|
| 521 |
+
logging.info(f"Saving cluster results to {SAVE_PATH_CLUSTERS}")
|
| 522 |
+
cluster_df.to_excel(SAVE_PATH_CLUSTERS, index=False)
|
| 523 |
+
|
| 524 |
+
logging.info(f"Saving data with clusters to {SAVE_PATH_ORIGINAL}")
|
| 525 |
+
data_with_clusters_df.to_excel(SAVE_PATH_ORIGINAL, index=False)
|
| 526 |
+
|
| 527 |
+
logging.info("Results saved successfully")
|
| 528 |
+
|
| 529 |
+
|
| 530 |
+
def get_cluster_statistics(cluster_df: pd.DataFrame, data_with_clusters_df: pd.DataFrame) -> Dict[str, float]:
|
| 531 |
+
"""Generate statistics about the clustering results.
|
| 532 |
+
|
| 533 |
+
Args:
|
| 534 |
+
cluster_df: DataFrame with cluster results
|
| 535 |
+
data_with_clusters_df: Original DataFrame with assigned cluster IDs
|
| 536 |
+
|
| 537 |
+
Returns:
|
| 538 |
+
Dict[str, float]: Dictionary containing clustering statistics
|
| 539 |
+
"""
|
| 540 |
+
total_projects = len(data_with_clusters_df)
|
| 541 |
+
assigned_projects = len(data_with_clusters_df[data_with_clusters_df['cluster_id'] >= 0])
|
| 542 |
+
unassigned_projects = total_projects - assigned_projects
|
| 543 |
+
|
| 544 |
+
stats = {
|
| 545 |
+
'total_projects': total_projects,
|
| 546 |
+
'assigned_projects': assigned_projects,
|
| 547 |
+
'unassigned_projects': unassigned_projects,
|
| 548 |
+
'num_clusters': len(cluster_df),
|
| 549 |
+
'avg_projects_per_cluster': assigned_projects / len(cluster_df) if len(cluster_df) > 0 else 0,
|
| 550 |
+
'largest_cluster_size': cluster_df['num_progetti'].max() if len(cluster_df) > 0 else 0,
|
| 551 |
+
'smallest_cluster_size': cluster_df['num_progetti'].min() if len(cluster_df) > 0 else 0
|
| 552 |
+
}
|
| 553 |
+
|
| 554 |
+
return stats
|
| 555 |
+
|
| 556 |
+
|
| 557 |
+
def create_cluster_pca_plot(embeddings: np.ndarray, cluster_labels: np.ndarray, cluster_df: pd.DataFrame) -> go.Figure:
|
| 558 |
+
"""
|
| 559 |
+
Create a 2D PCA plot of clusters using plotly express.
|
| 560 |
+
|
| 561 |
+
Args:
|
| 562 |
+
embeddings: Numpy array of embeddings
|
| 563 |
+
cluster_labels: Cluster labels for each point
|
| 564 |
+
cluster_df: DataFrame with cluster information (for titles and descriptions)
|
| 565 |
+
|
| 566 |
+
Returns:
|
| 567 |
+
plotly.graph_objects.Figure: Interactive plot figure
|
| 568 |
+
"""
|
| 569 |
+
try:
|
| 570 |
+
# Perform PCA to reduce to 2 dimensions
|
| 571 |
+
logging.info("Performing PCA reduction to 2D for visualization...")
|
| 572 |
+
pca = PCA(n_components=2, random_state=42)
|
| 573 |
+
embeddings_2d = pca.fit_transform(embeddings)
|
| 574 |
+
|
| 575 |
+
# Create a DataFrame for plotting
|
| 576 |
+
plot_df = pd.DataFrame({
|
| 577 |
+
'PC1': embeddings_2d[:, 0],
|
| 578 |
+
'PC2': embeddings_2d[:, 1],
|
| 579 |
+
'cluster_id': cluster_labels
|
| 580 |
+
})
|
| 581 |
+
|
| 582 |
+
# Create cluster titles mapping for hover information
|
| 583 |
+
cluster_titles = {}
|
| 584 |
+
cluster_colors = {}
|
| 585 |
+
for idx, row in cluster_df.iterrows():
|
| 586 |
+
cluster_id = row['cluster_id']
|
| 587 |
+
cluster_titles[cluster_id] = f"Cluster {cluster_id + 1}: {row['titolo']}"
|
| 588 |
+
|
| 589 |
+
# Add cluster titles to the plot DataFrame
|
| 590 |
+
plot_df['cluster_title'] = plot_df['cluster_id'].map(cluster_titles)
|
| 591 |
+
plot_df['cluster_description'] = plot_df['cluster_id'].map(
|
| 592 |
+
lambda x: cluster_df[cluster_df['cluster_id'] ==
|
| 593 |
+
x]['descrizione'].iloc[0] if x in cluster_df['cluster_id'].values else "Cluster sconosciuto"
|
| 594 |
+
)
|
| 595 |
+
plot_df['num_progetti'] = plot_df['cluster_id'].map(
|
| 596 |
+
lambda x: cluster_df[cluster_df['cluster_id'] ==
|
| 597 |
+
x]['num_progetti'].iloc[0] if x in cluster_df['cluster_id'].values else 0
|
| 598 |
+
)
|
| 599 |
+
|
| 600 |
+
# Create the scatter plot
|
| 601 |
+
fig = px.scatter(
|
| 602 |
+
plot_df,
|
| 603 |
+
x='PC1',
|
| 604 |
+
y='PC2',
|
| 605 |
+
color='cluster_id',
|
| 606 |
+
hover_data={
|
| 607 |
+
'cluster_title': True,
|
| 608 |
+
'cluster_description': True,
|
| 609 |
+
'num_progetti': True,
|
| 610 |
+
'PC1': ':.3f',
|
| 611 |
+
'PC2': ':.3f',
|
| 612 |
+
'cluster_id': False
|
| 613 |
+
},
|
| 614 |
+
title='Visualizzazione 2D dei Cluster (PCA)',
|
| 615 |
+
labels={
|
| 616 |
+
'PC1': f'Prima Componente Principale ({pca.explained_variance_ratio_[0]:.1%} varianza)',
|
| 617 |
+
'PC2': f'Seconda Componente Principale ({pca.explained_variance_ratio_[1]:.1%} varianza)',
|
| 618 |
+
'cluster_id': 'Cluster ID'
|
| 619 |
+
},
|
| 620 |
+
color_discrete_sequence=px.colors.qualitative.Set3
|
| 621 |
+
)
|
| 622 |
+
|
| 623 |
+
# Update layout for better presentation
|
| 624 |
+
fig.update_layout(
|
| 625 |
+
width=800,
|
| 626 |
+
height=600,
|
| 627 |
+
showlegend=True,
|
| 628 |
+
legend=dict(
|
| 629 |
+
orientation="v",
|
| 630 |
+
yanchor="top",
|
| 631 |
+
y=1,
|
| 632 |
+
xanchor="left",
|
| 633 |
+
x=1.02
|
| 634 |
+
),
|
| 635 |
+
margin=dict(r=150),
|
| 636 |
+
font=dict(size=12),
|
| 637 |
+
plot_bgcolor='rgba(0,0,0,0)'
|
| 638 |
+
)
|
| 639 |
+
|
| 640 |
+
# Update traces for better markers
|
| 641 |
+
fig.update_traces(
|
| 642 |
+
marker=dict(
|
| 643 |
+
size=8,
|
| 644 |
+
opacity=0.7,
|
| 645 |
+
line=dict(width=1, color='DarkSlateGrey')
|
| 646 |
+
)
|
| 647 |
+
)
|
| 648 |
+
|
| 649 |
+
# Add explanation text
|
| 650 |
+
explained_variance_total = pca.explained_variance_ratio_[
|
| 651 |
+
0] + pca.explained_variance_ratio_[1]
|
| 652 |
+
fig.add_annotation(
|
| 653 |
+
text=f"Varianza totale spiegata: {explained_variance_total:.1%}<br>Ogni punto rappresenta un progetto PNRR",
|
| 654 |
+
xref="paper", yref="paper",
|
| 655 |
+
x=0.02, y=0.98,
|
| 656 |
+
xanchor="left", yanchor="top",
|
| 657 |
+
showarrow=False,
|
| 658 |
+
font=dict(size=10, color="gray"),
|
| 659 |
+
bgcolor="rgba(255,255,255,0.8)",
|
| 660 |
+
bordercolor="gray",
|
| 661 |
+
borderwidth=1
|
| 662 |
+
)
|
| 663 |
+
|
| 664 |
+
logging.info(
|
| 665 |
+
f"Created PCA plot with {len(plot_df)} points and {len(cluster_df)} clusters")
|
| 666 |
+
logging.info(
|
| 667 |
+
f"Total explained variance: {explained_variance_total:.3f}")
|
| 668 |
+
|
| 669 |
+
return fig
|
| 670 |
+
|
| 671 |
+
except Exception as e:
|
| 672 |
+
logging.error(f"Error creating PCA plot: {e}")
|
| 673 |
+
# Return empty figure in case of error
|
| 674 |
+
fig = go.Figure()
|
| 675 |
+
fig.add_annotation(
|
| 676 |
+
text=f"Errore nella creazione del plot PCA: {str(e)}",
|
| 677 |
+
x=0.5, y=0.5,
|
| 678 |
+
xref="paper", yref="paper",
|
| 679 |
+
showarrow=False
|
| 680 |
+
)
|
| 681 |
+
return fig
|
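The automatic cluster-count selection above is a second-difference elbow heuristic: it looks for the k where the inertia curve's curvature peaks. A standalone sketch of the same selection rule, using only NumPy (the function name `pick_elbow` and the sample inertia values are illustrative, not part of the module):

```python
import numpy as np

def pick_elbow(inertias, min_clusters=2, max_clusters=20):
    """Pick the cluster count where the inertia curve flattens fastest.

    Mirrors the module's heuristic: the elbow is the index of the largest
    second difference of the inertia sequence, offset by min_clusters and
    clamped to [min_clusters, max_clusters].
    """
    if len(inertias) < 2:
        return min_clusters
    deltas = np.diff(inertias)       # first differences (how fast inertia drops)
    delta_deltas = np.diff(deltas)   # second differences (curvature of the curve)
    if len(delta_deltas) > 0:
        elbow_idx = int(np.argmax(delta_deltas)) + min_clusters
        return max(min_clusters, min(elbow_idx, max_clusters))
    return min_clusters

# Inertia drops sharply until k=4, then nearly flattens: the elbow is at 4.
inertias = [1000.0, 950.0, 900.0, 400.0, 390.0, 385.0]
print(pick_elbow(inertias, min_clusters=2, max_clusters=20))  # → 4
```

Note the heuristic is sensitive to noise in the inertia curve; metrics such as the silhouette score are a common alternative when the elbow is ambiguous.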
modules/cluster_page.py
ADDED
@@ -0,0 +1,350 @@
import os
import sys
import logging
import streamlit as st
import pandas as pd
from typing import Dict, Union, Any
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from modules import cluster_analysis

METADATA_PATH = 'modules/fixtures/Scheda metadatazione_Progetti_Lozalizzazioni_PNRR_Italiadomani_V2.xlsx'


def set_page_config() -> None:
    """Configure Streamlit page settings for cluster analysis.

    Returns:
        None
    """
    st.set_page_config(
        page_title="PNRR Cluster Analysis",
        page_icon=":chart_with_upwards_trend:",
        layout="wide"
    )


def load_metadata_columns() -> Dict[str, str]:
    """Load available columns from metadata file.

    Returns:
        Dict[str, str]: Dictionary mapping column names to their descriptions
    """
    try:
        metadata_paths = [
            '/home/giuseppe/IUAV - PNRR/semantic-filter/data/metadata.csv',
            'data/metadata.csv',
            '../data/metadata.csv'
        ]

        metadata_df = None
        for path in metadata_paths:
            if os.path.exists(path):
                metadata_df = pd.read_csv(path)
                break

        if metadata_df is None:
            return {}

        high_importance = metadata_df[
            (metadata_df['Ranking importanza variabili (da 1, bassa importanza, a 5, massima importanza)'].isin([4, 5])) &
            (metadata_df['Variabile dei file originali (Italiadomani/Regione Veneto)'].notna())
        ]

        columns_info = {}
        for _, row in high_importance.iterrows():
            var_name = row['Variabile dei file originali (Italiadomani/Regione Veneto)']
            description = row['Descrizione']
            if pd.notna(var_name) and pd.notna(description):
                columns_info[var_name] = description

        return columns_info
    except Exception as e:
        st.error(f"Errore nel caricamento dei metadati: {e}")
        return {}


def display_cluster_statistics(stats: Dict[str, Union[int, float]]) -> None:
    """Display clustering statistics in an organized format.

    Args:
        stats: Dictionary containing clustering statistics

    Returns:
        None
    """
    col1, col2, col3, col4 = st.columns(4)

    with col1:
        st.metric("Progetti Totali", stats['total_projects'])
    with col2:
        st.metric("Progetti Assegnati", stats['assigned_projects'])
    with col3:
        st.metric("Numero Cluster", stats['num_clusters'])
    with col4:
        st.metric("Progetti per Cluster (media)", f"{stats['avg_projects_per_cluster']:.1f}")


def main() -> None:
    """Main function for cluster analysis user interface.

    Handles file upload, parameter configuration, and analysis execution.

    Returns:
        None
    """
    st.title("🔍 Analisi Cluster Progetti PNRR")
    st.markdown("""
    Questa sezione permette di identificare automaticamente gruppi tematici di progetti PNRR
    basati sul contenuto delle colonne selezionate. L'algoritmo utilizza tecniche di machine learning
    per raggruppare progetti simili e genera automaticamente titoli e descrizioni per ogni cluster.
    """)

    st.header("📁 Carica il File Excel")
    uploaded_file = st.file_uploader(
        "Seleziona il file Excel contenente i progetti PNRR",
        type=["xlsx"],
        help="Carica un file Excel con i dati dei progetti PNRR"
    )

    if uploaded_file is not None:
        try:
            df = pd.read_excel(uploaded_file)
            st.success(f"✅ File caricato con successo! Trovate {len(df)} righe e {len(df.columns)} colonne.")

            columns_info = load_metadata_columns()

            st.header("🎯 Selezione Colonne per Clustering")
            st.markdown("""
            Seleziona le colonne da utilizzare per il clustering. Le colonne testuali con informazioni
            descrittive dei progetti sono generalmente le più efficaci per identificare temi ricorrenti.
            """)

            column_options = []
            for col in df.columns:
                if col in columns_info:
                    description = columns_info[col][:100] + "..." if len(columns_info[col]) > 100 else columns_info[col]
                    column_options.append((col, f"{col} - {description}"))
                else:
                    column_options.append((col, col))

            selected_column_tuples = st.multiselect(
                "Seleziona le colonne da utilizzare per il clustering:",
                column_options,
                format_func=lambda x: x[1],
                help="Seleziona almeno una colonna. Le colonne con testo descrittivo sono più efficaci."
            )

            selected_columns = [col[0] for col in selected_column_tuples]

            if selected_columns:
                st.subheader("🔍 Anteprima Colonne Selezionate")
                preview_df = df[selected_columns].head(3)
                st.dataframe(preview_df, use_container_width=True)

                st.header("⚙️ Parametri Clustering")
                col1, col2 = st.columns(2)

                with col1:
                    auto_clusters = st.checkbox(
                        "Determinazione automatica del numero di cluster",
                        value=True,
                        help="Se selezionato, l'algoritmo determinerà automaticamente il numero ottimale di cluster"
                    )

                with col2:
                    if not auto_clusters:
                        n_clusters = st.slider(
                            "Numero di cluster",
                            min_value=2,
                            max_value=min(500, len(df) // 5),
                            value=250,
                            help="Numero fisso di cluster da creare"
                        )
                    else:
                        col2_1, col2_2 = st.columns(2)
                        with col2_1:
                            min_clusters = st.number_input(
                                "Numero minimo di cluster",
                                min_value=2,
                                max_value=500,
                                value=5,
                                step=1,
                                help="Numero minimo di cluster per la determinazione automatica"
                            )
                        with col2_2:
                            max_clusters = st.number_input(
                                "Numero massimo di cluster",
                                min_value=min_clusters,
                                max_value=500,
                                value=250,
                                step=1,
                                help="Numero massimo di cluster per la determinazione automatica"
                            )

                st.header("🚫 Blacklist Parole Personalizzata")
                st.markdown("""
                Aggiungi parole che vuoi escludere completamente dall'analisi del clustering.
                Queste parole saranno rimosse dall'analisi per evitare che influenzino i risultati.
                """)

                col1_bl, col2_bl = st.columns([2, 1])
                with col1_bl:
                    custom_words_input = st.text_area(
                        "Parole da escludere (una per riga o separate da virgola):",
                        height=100,
                        placeholder="digitalizzazione\ninfrastruttura\nsanità\n\noppure: digitalizzazione, infrastruttura, sanità",
                        help="Inserisci parole che ritieni irrilevanti per il tuo contesto di analisi. "
                             "Puoi inserire una parola per riga oppure separare le parole con virgole."
                    )

                with col2_bl:
                    st.markdown("**Esempi di parole da escludere:**")
                    st.markdown("- Termini troppo generici")
                    st.markdown("- Nomi di enti frequenti")
                    st.markdown("- Parole tecniche comuni")
                    st.markdown("- Location ricorrenti")

                # Parse custom blacklist
                custom_blacklist = []
                if custom_words_input.strip():
                    # Try comma-separated first
                    if ',' in custom_words_input:
                        custom_blacklist = [word.strip() for word in custom_words_input.split(',')]
                    else:
                        # Otherwise, split by lines
                        custom_blacklist = [word.strip() for word in custom_words_input.split('\n')]

                    # Filter out empty strings
                    custom_blacklist = [word for word in custom_blacklist if word]

                    if custom_blacklist:
                        st.success(
                            f"✅ Saranno escluse {len(custom_blacklist)} parole personalizzate: "
                            f"{', '.join(custom_blacklist[:5])}{'...' if len(custom_blacklist) > 5 else ''}")

                if st.button("🚀 Avvia Analisi Cluster", type="primary"):
                    with st.spinner("Analisi in corso... Questo potrebbe richiedere alcuni minuti."):
                        try:
                            n_clusters_param = None if auto_clusters else n_clusters
                            max_clusters_param = max_clusters if auto_clusters else 20
                            min_clusters_param = min_clusters if auto_clusters else 2

                            cluster_df, data_with_clusters_df, embeddings, cluster_labels = cluster_analysis.analyze_clusters(
                                data_frame_path=uploaded_file,
                                selected_columns=selected_columns,
                                n_clusters=n_clusters_param,
                                max_clusters=max_clusters_param,
                                min_clusters=min_clusters_param,
                                custom_blacklist=custom_blacklist if custom_blacklist else None
                            )

                            cluster_analysis.save_results(cluster_df, data_with_clusters_df)
                            stats = cluster_analysis.get_cluster_statistics(cluster_df, data_with_clusters_df)
                            st.success("✅ Analisi completata con successo!")

                            st.header("📊 Statistiche Clustering")
                            display_cluster_statistics(stats)

                            st.header("🎯 Risultati Cluster")
                            st.markdown(f"Sono stati identificati **{len(cluster_df)}** cluster tematici:")

                            for idx, row in cluster_df.iterrows():
                                with st.expander(f"**Cluster {row['cluster_id'] + 1}**: {row['titolo']} ({row['num_progetti']} progetti)"):
                                    st.write(f"**Descrizione**: {row['descrizione']}")
                                    st.write(f"**Parole chiave**: {row['keywords']}")
                                    st.write("**Progetti di esempio**:")
                                    st.write(row['progetti_campione'])

                            st.header("📥 Download Risultati")
                            col1, col2 = st.columns(2)

                            with col1:
                                with open(cluster_analysis.SAVE_PATH_CLUSTERS, 'rb') as f:
                                    cluster_bytes = f.read()

                                st.download_button(
                                    label="📋 Scarica Sommario Cluster",
                                    data=cluster_bytes,
                                    file_name="cluster_results.xlsx",
                                    mime="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
                                    help="File Excel con titoli, descrizioni e statistiche dei cluster"
                                )

                            with col2:
                                with open(cluster_analysis.SAVE_PATH_ORIGINAL, 'rb') as f:
                                    data_bytes = f.read()

                                st.download_button(
                                    label="📊 Scarica Dati con Cluster ID",
                                    data=data_bytes,
                                    file_name="data_with_clusters.xlsx",
                                    mime="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
                                    help="File Excel originale con aggiunta colonna cluster_id per ogni progetto"
                                )

                            st.header("📊 Visualizzazione Cluster nello Spazio degli Embeddings")
                            st.markdown("""
                            Questo grafico mostra una rappresentazione bidimensionale dei cluster ottenuti tramite PCA (Principal Component Analysis).
                            Ogni punto rappresenta un progetto PNRR, colorato secondo il cluster di appartenenza.
                            """)

                            try:
                                # Create and display PCA plot
                                pca_fig = cluster_analysis.create_cluster_pca_plot(
                                    embeddings, cluster_labels, cluster_df)
                                st.plotly_chart(pca_fig, use_container_width=True)
                            except Exception as e:
                                st.error(f"❌ Errore nella creazione del plot PCA: {str(e)}")
                                logging.error(f"PCA plot error: {e}", exc_info=True)

                            st.header("👀 Anteprima Risultati")

                            cluster_counts = data_with_clusters_df['cluster_id'].value_counts().sort_index()
                            cluster_counts_df = pd.DataFrame({
                                'Cluster ID': cluster_counts.index,
                                'Numero Progetti': cluster_counts.values
                            })

                            st.subheader("Distribuzione Progetti per Cluster")
                            st.bar_chart(cluster_counts_df.set_index('Cluster ID'))
|
| 315 |
+
|
| 316 |
+
st.subheader("Dati di Esempio con Cluster ID")
|
| 317 |
+
sample_data = data_with_clusters_df[selected_columns + ['cluster_id']].head(10)
|
| 318 |
+
st.dataframe(sample_data, use_container_width=True)
|
| 319 |
+
|
| 320 |
+
except Exception as e:
|
| 321 |
+
st.error(f"❌ Errore durante l'analisi: {str(e)}")
|
| 322 |
+
logging.error(f"Clustering error: {e}", exc_info=True)
|
| 323 |
+
|
| 324 |
+
else:
|
| 325 |
+
st.warning("⚠️ Seleziona almeno una colonna per procedere con il clustering.")
|
| 326 |
+
|
| 327 |
+
except Exception as e:
|
| 328 |
+
st.error(f"❌ Errore nel caricamento del file: {str(e)}")
|
| 329 |
+
|
| 330 |
+
else:
|
| 331 |
+
st.info("👆 Carica un file Excel per iniziare l'analisi cluster.")
|
| 332 |
+
|
| 333 |
+
st.header("📋 Formato File Atteso")
|
| 334 |
+
st.markdown("""
|
| 335 |
+
Il file Excel dovrebbe contenere i dati dei progetti PNRR con colonne come:
|
| 336 |
+
- **Titolo Progetto**: Nome del progetto
|
| 337 |
+
- **Sintesi Progetto**: Descrizione dettagliata
|
| 338 |
+
- **Descrizione Missione**: Descrizione della missione PNRR
|
| 339 |
+
- **Descrizione Componente**: Descrizione della componente
|
| 340 |
+
- **Soggetto Attuatore**: Ente responsabile
|
| 341 |
+
- **Descrizione Comune**: Località del progetto
|
| 342 |
+
|
| 343 |
+
Più colonne testuali descrittive vengono selezionate, migliore sarà la qualità del clustering.
|
| 344 |
+
""")
|
| 345 |
+
|
| 346 |
+
|
| 347 |
+
if __name__ == "__main__":
|
| 348 |
+
logging.basicConfig(level=logging.INFO)
|
| 349 |
+
set_page_config()
|
| 350 |
+
main()
|
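The distribution chart above is a plain `value_counts()` over the `cluster_id` column. A minimal sketch of that aggregation, using a hypothetical six-row frame in place of the `data_with_clusters_df` returned by `analyze_clusters`:

```python
import pandas as pd

# Toy stand-in for data_with_clusters_df: one cluster_id per project row.
data_with_clusters_df = pd.DataFrame({"cluster_id": [0, 1, 0, 2, 1, 0]})

# Same aggregation the page charts: project count per cluster, in cluster order.
cluster_counts = data_with_clusters_df["cluster_id"].value_counts().sort_index()
cluster_counts_df = pd.DataFrame({
    "Cluster ID": cluster_counts.index,
    "Numero Progetti": cluster_counts.values,
})
print(cluster_counts_df["Numero Progetti"].tolist())  # [3, 2, 1]
```

`sort_index()` matters here: without it `value_counts()` orders clusters by frequency, and the bar chart's x-axis would no longer run 0, 1, 2, …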
modules/column_query_agent.py
ADDED
@@ -0,0 +1,83 @@

import json
import logging
import os
from typing import Dict, Callable, Any
import langchain_openai
from langchain_core import prompts


def agent(base_llm_model_name: str) -> Callable[[str, Dict[str, str]], Dict[str, str]]:
    """Create an agent for query analysis and column mapping.

    Args:
        base_llm_model_name: Name of the LLM model to use

    Returns:
        Callable: Function that accepts (query, columns_and_descriptions) and returns column-query mapping
    """
    config = {
        'model': base_llm_model_name,
        'temperature': 0,
        'max_tokens': 4000,
        'max_retries': 10,
        'seed': 123456
    }
    system_prompt = '''
    You are a smart assistant that receives:
    - a user search query containing many keywords,
    - a list of columns extracted from a dataset,
    - and for each column, its description explaining what it contains.

    Your task:
    - Analyze the query.
    - For each column, determine if part of the query is highly relevant to it.
    - Extract only the most relevant keywords or parts of the query that fit the topic and meaning of the column.
    - Output a list of (query fragment, column name) pairs.

    Rules:
    - The query fragment must make sense for that specific column.
    - If the column is not relevant to any part of the query, you can skip it.
    - Do not modify the meaning of the user's query, but you can split and adapt it into multiple parts.
    - Be concise but precise in fragment construction.
    - Include the most important 5-10 columns, maximum.
    - Do not change the names of the columns.

    Output format: a JSON object with the column names as keys and the query fragments as values.
    '''

    logging.info(f"Loading model {base_llm_model_name}...")
    model = langchain_openai.ChatOpenAI(
        api_key=os.getenv("OPENAI_API_KEY"),
        model=config['model'],
        temperature=config['temperature'],
        max_tokens=config['max_tokens'],
        max_retries=config['max_retries'],
        seed=config['seed'],
    )
    prompt = prompts.ChatPromptTemplate.from_messages([
        ('system', system_prompt),
        ('human', 'User Query: {query}, Columns and Descriptions: {columns}'),
    ])
    chain = prompt | model

    def invoke(query, columns_and_descriptions):
        formatted_columns = "\n".join(
            f"- {col}: {desc}" for col, desc in columns_and_descriptions.items()
        )
        return post_process(chain.invoke({'query': query, 'columns': formatted_columns}), columns_and_descriptions)

    return invoke


def post_process(response: Any, columns_and_descriptions: Dict[str, str]) -> Dict[str, str]:
    """Post-process LLM response to extract column-query mapping.

    Args:
        response: LLM response containing JSON
        columns_and_descriptions: Dictionary of available columns and descriptions

    Returns:
        Dict[str, str]: Dictionary mapping column names to relevant query fragments
    """
    # Strip Markdown code fences (e.g. ```json ... ```) before parsing.
    json_response = json.loads(response.content.strip('`').lstrip('json\n'))
    return {col: json_response[col] for col in columns_and_descriptions if col in json_response}
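The fence-stripping in `post_process` can be exercised without calling the API. The mock below is a hypothetical stand-in for what `chain.invoke` returns (chat models often wrap JSON in a ```` ```json ```` fence), and the function body mirrors the one above:

```python
import json
from types import SimpleNamespace

def post_process(response, columns_and_descriptions):
    # Drop Markdown fences, parse the JSON, keep only known columns.
    json_response = json.loads(response.content.strip('`').lstrip('json\n'))
    return {col: json_response[col] for col in columns_and_descriptions if col in json_response}

# Hypothetical fenced LLM output; "Unknown" is not a real dataset column.
mock = SimpleNamespace(content='```json\n{"Titolo Progetto": "digitalizzazione scuole", "Unknown": "x"}\n```')
columns = {"Titolo Progetto": "Nome del progetto", "Sintesi Progetto": "Descrizione dettagliata"}
print(post_process(mock, columns))  # {'Titolo Progetto': 'digitalizzazione scuole'}
```

Note the filter drops hallucinated column names, so only keys that exist in the metadata survive.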
modules/create.py
ADDED
@@ -0,0 +1,75 @@

import os
import uuid
import faiss
import shutil
import logging
import pandas as pd
from typing import Any
from langchain_core import documents
from langchain_community import embeddings
from langchain_community import vectorstores
from langchain_community.docstore import in_memory


DEFAULT_INDEX_QUERY = "hello world"


def build_faiss(
    data_frame: pd.DataFrame,
    index_path: str,
    embedder: Any
) -> vectorstores.FAISS:
    """Build a FAISS index from a DataFrame.

    Args:
        data_frame: DataFrame containing data to index
        index_path: Path where to save the FAISS index
        embedder: Embedder object to generate vectors

    Returns:
        vectorstores.FAISS: Built FAISS vectorstore object
    """
    embedded_documents = []
    for row_idx, row in data_frame.iterrows():
        for col_name, cell_val in row.items():
            embedded_documents.append(documents.Document(
                page_content=str(cell_val),
                metadata={"row": row_idx, "column": col_name},
            ))

    if os.path.exists(index_path):
        shutil.rmtree(index_path, ignore_errors=True)
        logging.debug(f"Deleted existing FAISS index at {index_path}")

    vectorstore = vectorstores.FAISS(
        embedding_function=embedder,
        index=faiss.IndexFlatIP(len(embedder.embed_query(DEFAULT_INDEX_QUERY))),
        docstore=in_memory.InMemoryDocstore(),
        index_to_docstore_id={},
    )

    uuids = [str(uuid.uuid4()) for _ in range(len(embedded_documents))]
    vectorstore.add_documents(documents=embedded_documents, ids=uuids)
    logging.debug(f"Added {len(embedded_documents)} documents to FAISS index")

    os.makedirs(index_path, exist_ok=True)
    vectorstore.save_local(index_path)
    logging.debug(f"FAISS index saved to ./{index_path}/")
    return vectorstore


def load_faiss_index(
    index_path: str,
    hf_model_name: str
) -> vectorstores.FAISS:
    """Load a previously saved FAISS index.

    Args:
        index_path: Path of the saved FAISS index
        hf_model_name: Name of the HuggingFace model for embeddings

    Returns:
        vectorstores.FAISS: Loaded FAISS vectorstore object
    """
    embedder = embeddings.HuggingFaceEmbeddings(model_name=hf_model_name)
    return vectorstores.FAISS.load_local(index_path, embedder, allow_dangerous_deserialization=True)
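`build_faiss` indexes one document per cell, not per row; the row/column metadata is what later lets `search.multi_column` filter hits by column and map them back to source rows. A sketch of that layout on a toy frame, using plain dicts in place of `langchain` `Document`s:

```python
import pandas as pd

# Toy DataFrame standing in for the reduced PNRR frame.
df = pd.DataFrame({"Titolo": ["Ciclovia urbana", "Scuola digitale"],
                   "Comune": ["Bologna", "Torino"]})

# One document per cell, tagged with its row index and column name,
# mirroring the nested iterrows()/items() loop in build_faiss.
docs = [{"page_content": str(val), "metadata": {"row": idx, "column": col}}
        for idx, row in df.iterrows()
        for col, val in row.items()]

print(len(docs))            # 4: two rows times two columns
print(docs[0]["metadata"])  # {'row': 0, 'column': 'Titolo'}
```

This is also why the index is rebuilt per chunk in `semantic_filter.apply`: the cell-level documents scale as rows × columns, so keeping the index small bounds memory.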
modules/fixtures/Scheda metadatazione_Progetti_Lozalizzazioni_PNRR_Italiadomani_V2.xlsx
ADDED
Binary file (21.3 kB)
modules/home.py
ADDED
@@ -0,0 +1,113 @@

import os
import sys
import torch
import dotenv
import logging
import streamlit as st
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from modules import semantic_filter

# Workaround: keep Streamlit's module watcher from tripping over torch.classes.
torch.classes.__path__ = []


CONFIDENCE = 0.8
CHUNK_SIZE = 1000
METADATA_PATH = 'modules/fixtures/Scheda metadatazione_Progetti_Lozalizzazioni_PNRR_Italiadomani_V2.xlsx'


def set_page_config() -> None:
    """Configure Streamlit page settings for semantic filter.

    Returns:
        None
    """
    st.set_page_config(
        page_title="Semantic Filter",
        page_icon=":desert_island:",
    )


def main() -> None:
    """Main function for semantic filter user interface.

    Handles file upload, parameter configuration, and search execution.

    Returns:
        None
    """
    st.title("🔍 Filtro Semantico Progetti PNRR")
    st.markdown("""
    Questa sezione permette di filtrare i progetti PNRR utilizzando ricerca semantica avanzata.
    Inserisci una query testuale e il sistema identificherà automaticamente i progetti più rilevanti.
    """)

    st.header("📁 Carica il File Excel")
    uploaded_file = st.file_uploader(
        "Seleziona il file Excel contenente i progetti PNRR", type=["xlsx"])

    st.header("⚙️ Parametri di Ricerca")
    col1, col2 = st.columns(2)

    with col1:
        confidence = st.slider(
            "Soglia di confidenza",
            min_value=0.0,
            max_value=1.0,
            value=CONFIDENCE,
            step=0.01,
            help="Valore minimo di similarità per considerare un progetto rilevante"
        )

    with col2:
        output_option = st.selectbox(
            "Opzione di output",
            [("Aggiungi una colonna al file", "add_column"),
             ("Crea un nuovo file", "new_file")],
            format_func=lambda x: x[0],
            index=0
        )

    st.header("💬 Query di Ricerca")
    user_query = st.text_area(
        "Inserisci la query di ricerca semantica:",
        height=150,
        max_chars=None,
        help="Descrivi il tipo di progetti che stai cercando in linguaggio naturale",
        placeholder="Esempio: progetti di digitalizzazione nelle scuole, infrastrutture sostenibili, riqualificazione urbana..."
    )

    if st.button("🚀 Avvia Ricerca Semantica", type="primary"):
        if uploaded_file is not None and user_query:
            with st.spinner("Ricerca in corso... Questo potrebbe richiedere alcuni minuti."):
                try:
                    semantic_filter.apply(
                        data_frame_path=uploaded_file,
                        metadata_path=METADATA_PATH,
                        user_query=user_query,
                        threshold=confidence,
                        chunk_size=CHUNK_SIZE,
                        output_option=output_option[1]
                    )

                    st.success("✅ Ricerca completata con successo!")

                    with open(semantic_filter.SAVE_PATH, 'rb') as f:
                        file_bytes = f.read()

                    st.download_button(
                        label="📥 Scarica Risultati",
                        data=file_bytes,
                        file_name=semantic_filter.SAVE_PATH.split('/')[-1],
                        mime="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
                    )
                except Exception as e:
                    st.error(f"❌ Errore durante la ricerca: {str(e)}")
        else:
            st.error("⚠️ Carica un file Excel e inserisci una query di ricerca.")


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    dotenv.load_dotenv()

    set_page_config()
    main()
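The output selector stores `(label, value)` pairs: `st.selectbox` displays `format_func(option)` but returns the whole selected tuple, so the machine-readable value is read back as `output_option[1]`. The pattern in isolation (labels are the Italian UI strings from the page above):

```python
# (label, value) pairs as passed to st.selectbox.
options = [("Aggiungi una colonna al file", "add_column"),
           ("Crea un nuovo file", "new_file")]
format_func = lambda x: x[0]

labels = [format_func(o) for o in options]  # what the user sees
selected = options[0]                       # what st.selectbox returns for index=0
print(labels[0])      # Aggiungi una colonna al file
print(selected[1])    # add_column -> forwarded as output_option
```

This keeps the human-facing label and the `semantic_filter.apply(output_option=...)` flag in one place instead of translating between them after selection.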
modules/search.py
ADDED
@@ -0,0 +1,46 @@

import pandas as pd
from typing import List, Tuple, Dict, Any
from langchain_community.vectorstores import faiss


def multi_column(db: faiss.FAISS, df: pd.DataFrame, qc_pairs: Dict[str, str], threshold: float) -> List[Tuple[int, float, Dict[str, Any]]]:
    """Perform semantic search across multiple columns and return aggregated results.

    Args:
        db: FAISS vector database for search
        df: Original DataFrame containing the data
        qc_pairs: Dictionary mapping columns to query fragments
        threshold: Minimum similarity threshold to include a result

    Returns:
        List[Tuple[int, float, Dict[str, Any]]]: List of tuples (row_id, avg_score, row_dict)
    """
    per_column_scores = []
    for column, query in qc_pairs.items():
        # Retrieve every indexed cell of this column, then keep only the
        # rows whose similarity clears the threshold.
        hits = db.similarity_search_with_score(
            query,
            k=db.index.ntotal,
            filter={'column': column},
            distance_strategy=faiss.DistanceStrategy.COSINE
        )
        score_map = {
            doc.metadata['row']: score
            for doc, score in hits
            if score >= threshold
        }
        per_column_scores.append(score_map)

    all_rows = set()
    for score_map in per_column_scores:
        all_rows.update(score_map.keys())

    results = []
    for rid in all_rows:
        scores = [score_map[rid] for score_map in per_column_scores if rid in score_map]
        if scores:
            avg_score = sum(scores) / len(scores)
            row_dict = df.loc[rid].to_dict()
            results.append((rid, avg_score, row_dict))

    results.sort(key=lambda x: x[1], reverse=True)
    return results
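The aggregation step in `multi_column` can be replayed on hand-made score maps (the numbers below are made up). Note that the average is taken only over the columns where a row cleared the threshold, so a row matching one fragment strongly can outrank a row matching two fragments moderately:

```python
# Per-column {row_id: similarity} maps, as built from the FAISS hits.
per_column_scores = [
    {0: 0.9, 1: 0.85},   # hits for the first query fragment
    {0: 0.7},            # hits for the second query fragment
]

# Union of all rows that matched at least one fragment.
all_rows = set()
for score_map in per_column_scores:
    all_rows.update(score_map.keys())

# Average each row's score over the fragments it actually matched.
results = []
for rid in all_rows:
    scores = [m[rid] for m in per_column_scores if rid in m]
    results.append((rid, sum(scores) / len(scores)))

results.sort(key=lambda x: x[1], reverse=True)
print(results)  # row 1 (0.85) ranks above row 0 ((0.9 + 0.7) / 2 = 0.8)
```

If matching more fragments should rank higher, dividing by `len(per_column_scores)` instead of `len(scores)` would penalize partial matches; the module's choice rewards per-fragment precision.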
modules/semantic_filter.py
ADDED
@@ -0,0 +1,120 @@

import logging
import pandas as pd
from typing import Any, List, Tuple, Dict, Union
from stqdm import stqdm
from modules import search
from modules import create
from modules import column_query_agent
from langchain_huggingface import embeddings


SAVE_PATH = 'semantic_filter_results.xlsx'
LLM_MODEL_NAME = 'gpt-4o-mini'
EMBEDDING_MODEL_NAME = 'sentence-transformers/all-MiniLM-L6-v2'


def apply(
    data_frame_path: Union[str, Any],
    metadata_path: str,
    user_query: str,
    threshold: float,
    chunk_size: int,
    output_option: str
) -> None:
    """Apply semantic filter to PNRR data.

    Args:
        data_frame_path: Path or object of Excel file containing the data
        metadata_path: Path to Excel file containing column metadata
        user_query: User's textual query for semantic search
        threshold: Minimum confidence threshold to consider a result relevant
        chunk_size: Chunk size for data processing
        output_option: Output option ('new_file' or 'add_column')

    Returns:
        None
    """
    query_agent = column_query_agent.agent(LLM_MODEL_NAME)
    embedder = embeddings.HuggingFaceEmbeddings(model_name=EMBEDDING_MODEL_NAME)

    logging.info(f"Loading DataFrame at {data_frame_path}...")
    df = pd.read_excel(data_frame_path)
    logging.info(f"Loaded DataFrame with {len(df)} rows")

    metadata_df = pd.read_excel(metadata_path)
    columns_and_descriptions = dict(zip(
        metadata_df['Variabile'],
        metadata_df['Descrizione']
    ))
    columns_and_descriptions = {k: v for k, v in columns_and_descriptions.items() if pd.notna(v) and k in df.columns}
    query_pairs = query_agent(user_query, columns_and_descriptions)
    relevant_columns = list(query_pairs.keys())

    all_results = []
    chunks = split_dataframe(df, chunk_size)
    for chunk in stqdm(chunks, desc='Processing chunks'):
        df_reduced = chunk[relevant_columns]
        db = create.build_faiss(
            df_reduced,
            index_path='faiss_index',
            embedder=embedder
        )
        results = search.multi_column(db, chunk, query_pairs, threshold)
        all_results.extend(results)

    all_results.sort(key=lambda x: x[1], reverse=True)
    if output_option == 'new_file':
        save_results_to_excel(all_results, SAVE_PATH)
    else:
        # Matching rows get their average similarity score; non-matching rows keep False.
        df['is_valid'] = False
        for row_id, score, _row_dict in all_results:
            df.at[row_id, 'is_valid'] = score
        df.to_excel(SAVE_PATH, index=False)

    logging.info(f"{len(all_results)} rows found")


def split_dataframe(df: pd.DataFrame, chunk_size: int) -> List[pd.DataFrame]:
    """Split a DataFrame into chunks of specified size.

    Args:
        df: DataFrame to split
        chunk_size: Maximum size of each chunk

    Returns:
        List[pd.DataFrame]: List of DataFrame chunks
    """
    chunks = []
    for i in range(0, len(df), chunk_size):
        chunks.append(df.iloc[i:i + chunk_size])
    return chunks


def save_results_to_excel(results: List[Tuple[int, float, Dict[str, Any]]], output_path: str) -> None:
    """Save semantic search results to Excel file.

    Args:
        results: List of tuples containing (row_id, score, row_dict)
        output_path: Path of output Excel file

    Returns:
        None
    """
    if not results:
        logging.warning("No results to save.")
        return

    data = []
    for row_id, score, row_dict in results:
        row = {
            'row_id': row_id,
            'score': score,
            **row_dict
        }
        data.append(row)

    df = pd.DataFrame(data)
    df = df.sort_values(by='row_id').reset_index(drop=True)
    df.to_excel(output_path, index=False)

    logging.info(f"Saved {len(results)} results to {output_path}")
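The chunking helper is plain `iloc` slicing: with `CHUNK_SIZE = 1000`, a 2500-row file yields chunks of 1000, 1000 and 500 rows, each indexed and searched independently. A quick check on a toy frame:

```python
import pandas as pd

def split_dataframe(df, chunk_size):
    # Same logic as semantic_filter.split_dataframe: contiguous iloc slices.
    return [df.iloc[i:i + chunk_size] for i in range(0, len(df), chunk_size)]

df = pd.DataFrame({"x": range(5)})
chunks = split_dataframe(df, 2)
print([len(c) for c in chunks])  # [2, 2, 1]
```

Because `iloc` slices keep the original index labels, the `row_id`s collected per chunk in `search.multi_column` still address the right rows of the full DataFrame when `df.at[row_id, 'is_valid']` is written.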
relevant-queries.md
ADDED
@@ -0,0 +1,77 @@

**1. Cambiamento climatico e adattamento**

Parole chiave

- cambiamento climatico;
- adattamento e mitigazione;
- resilienza climatica, resilienza territoriale;
- rischio idraulico, rischio idrogeologico;
- strategie e difesa.

Indicatori indiretti

- eventi estremi, alluvioni;
- infrastrutture verdi;
- gestione acque meteoriche, drenaggio urbano sostenibile;
- rigenerazione verde, forestazione urbana, alberature.

**2. Mobilità e logistica**

Parole chiave

- mobilità sostenibile, mobilità dolce;
- trasporto pubblico locale, TPL, autobus/mezzi elettrici;
- ciclabilità, ciclovie, piste ciclabili;
- intermodalità, hub logistico, stazioni intermodali;
- logistica urbana, city logistics, logistica green.

Indicatori indiretti

- riduzione traffico, decongestionamento;
- infrastrutture di trasporto;
- elettrificazione mezzi.

**3. Digitalizzazione e competitività**

Parole chiave

- digitalizzazione, transizione digitale;
- innovazione tecnologica, trasformazione digitale;
- banda larga, 5G, cloud computing;
- piattaforme digitali, servizi digitali, interoperabilità.

Indicatori indiretti

- digital twin, dati aperti, data governance;
- imprese innovative, startup, ecosistemi digitali;
- piattaforme per servizi, digitalizzazione PA, e-government.

**4. Rigenerazione urbana e territoriale**

Parole chiave

- rigenerazione urbana, rigenerazione territoriale;
- riqualificazione edilizia, riuso, riattivazione;
- spazi pubblici, periferie, edilizia sociale;
- partecipazione territoriale, co-progettazione, urbanistica tattica.

Indicatori

- qualità urbana, inclusione territoriale;
- contrasto al degrado, recupero funzionale;
- housing sociale, servizi di prossimità, welfare urbano.

**5. Sostenibilità energetica**

Parole chiave

- energia rinnovabile, fonti rinnovabili;
- efficienza energetica, risparmio energetico;
- comunità energetiche, autoconsumo collettivo;
- BIM, fotovoltaico, solare termico, pompe di calore.

Indicatori indiretti

- decarbonizzazione, transizione energetica;
- piani energetici comunali, audit energetici;
- reti intelligenti, smart grid, smart building.
requirements.txt
ADDED
@@ -0,0 +1,17 @@

pandas==2.3.0
openpyxl==3.1.5
faiss-cpu==1.11.0
python-dotenv==1.1.0
stqdm==0.0.5
langchain==0.3.24
langchain-community==0.3.22
langchain-huggingface==0.1.2
huggingface-hub==0.30.2
hf-xet==1.0.5
langchain-openai==0.3.14
sentence-transformers==4.1.0
streamlit==1.44.1
scikit-learn==1.5.1
numpy>=1.25.0,<3.0
protobuf==3.20.3
plotly==5.24.0
streamlit_config/config.toml
ADDED
@@ -0,0 +1,5 @@

[server]
# Max size, in megabytes, for files uploaded with the file_uploader.
#
# Default: 200
maxUploadSize = 400