---
title: PolyFusionAgent Demo
sdk: gradio
python_version: "3.11"
app_file: PolyAgent/gradio_interface.py
---

# PolyFusionAgent: A Multimodal Foundation Model and Autonomous AI Assistant for Polymer Property Prediction and Inverse Design

**PolyFusionAgent** is an interactive framework that couples a **multimodal polymer foundation model (PolyFusion)** with a **tool-augmented, literature-grounded design agent (PolyAgent)** for polymer property prediction, inverse design, and evidence-linked scientific reasoning.

> **PolyFusion** aligns complementary polymer views—**PSMILES sequence**, **2D topology**, **3D structural proxies**, and **chemical fingerprints**—into a shared latent space that transfers across chemistries and data regimes.
> **PolyAgent** closes the design loop by connecting **prediction + generation + retrieval + visualization** so recommendations are contextualized with explicit supporting precedent.

## Links

- **Live Space:** [kaurm43/PolyFusionAgent](https://huggingface.co/spaces/kaurm43/PolyFusionAgent)
- **Weights repo:** [kaurm43/polyfusionagent-weights](https://huggingface.co/kaurm43/polyfusionagent-weights)
- **Weights file browser:** [weights/tree/main](https://huggingface.co/kaurm43/polyfusionagent-weights/tree/main)

---

## Abstract

Polymers underpin technologies from energy storage to biomedicine, yet discovery remains constrained by an astronomically large design space and fragmented representations of polymer structure, properties, and prior knowledge. Although machine learning has advanced property prediction and candidate generation, most models remain disconnected from the physical and experimental context needed for actionable materials design. Here we introduce **PolyFusionAgent**, an interactive framework that couples a multimodal polymer foundation model (**PolyFusion**) with a tool-augmented, literature-grounded design agent (**PolyAgent**).
PolyFusion aligns complementary polymer views—sequence, topology, three-dimensional structural proxies, and chemical fingerprints—across millions of polymers to learn a shared latent space that transfers across chemistries and data regimes. Using this unified representation, PolyFusion improves prediction of key thermophysical properties and enables property-conditioned generation of chemically valid, structurally novel polymers that extend beyond the reference design space. PolyAgent closes the design loop by coupling prediction and inverse design to evidence retrieval from the polymer literature, so that hypotheses are proposed, evaluated, and contextualized with explicit supporting precedent in a single workflow. Together, **PolyFusionAgent** establishes a route toward interactive, evidence-linked polymer discovery that combines large-scale representation learning, multimodal chemical knowledge, and verifiable scientific reasoning.

---

## Repository Structure

```text
.
├── PolyAgent/
│   ├── gradio_interface.py       # Gradio UI (Console / Tools / Other LLMs)
│   ├── orchestrator.py           # Controller: planning + tool registry + execution
│   └── rag_pipeline.py           # Local KB + web retrieval + PDF ingestion utilities
├── PolyFusion/
│   ├── CL.py                     # Multimodal contrastive learning utilities
│   ├── DeBERTav2.py              # PSMILES encoder wrapper (HF Transformers)
│   ├── GINE.py                   # 2D graph encoder (PyTorch Geometric)
│   ├── SchNet.py                 # 3D geometry encoder (PyTorch Geometric SchNet)
│   └── Transformer.py            # Fingerprint transformer encoder
├── Downstream Tasks/
│   ├── Polymer_Generation.py     # Inverse design / generation utilities
│   └── Property_Prediction.py    # Property prediction utilities
├── Data_Modalities.py            # CSV→multimodal extraction (2D graph / 3D geometry / fingerprints) + wildcard handling
├── requirements.txt
└── README.md
```

## What PolyFusionAgent can do

### PolyFusion

#### 1) Multimodal extraction from PSMILES (`Data_Modalities.py`)

- Wildcard handling: attachment points `[*]` are mapped to a rare marker (e.g., `[At]`) for stable RDKit featurization, then converted back for modality-construction, display, and generation outputs
- Builds PSMILES sequence inputs for the language encoder
- Constructs RDKit-based 2D atom/bond graphs (node/edge features + connectivity)
- Generates ETKDG 3D conformer proxies with force-field relaxation fallback
- Computes Morgan (ECFP-style) fingerprints (fixed-length, radius-configurable)

#### 2) Multimodal foundation embedding (`PolyFusion/*`)

- Encoders per modality:
  - PSMILES Transformer (`PolyFusion/DeBERTav2.py`)
  - GINE for 2D graphs (`PolyFusion/GINE.py`)
  - SchNet for 3D geometry (`PolyFusion/SchNet.py`)
  - Fingerprint Transformer (`PolyFusion/Transformer.py`)
- Projects each modality into a shared, unit-normalized latent space
- Uses contrastive alignment in which a fused structural anchor (PSMILES + 2D graph + 3D geometry) is aligned with a fingerprint target (`PolyFusion/CL.py`)

### Downstream Tasks

#### 3) Forward property prediction (structure → properties) (`Downstream Tasks/Property_Prediction.py`)

- Lightweight regressors on top of PolyFusion embeddings for thermophysical property prediction
- Returns predictions in original units (standardization is handled internally)

#### 4) Inverse design / polymer generation (targets → candidates) (`Downstream Tasks/Polymer_Generation.py`)

- Property-conditioned candidate generation using PolyFusion embeddings as the conditioning interface for a sequence-to-sequence generator (SELFIES-based encoder–decoder)
- Produces candidate lists suitable for generate → filter → validate workflows

### PolyAgent

#### Goal

Convert open-ended polymer design prompts into grounded, constraint-consistent, evidence-linked outputs by coupling PolyFusion with tool-mediated verification and retrieval.
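The "tool-mediated" coupling described above follows a common typed tool-registry pattern. The sketch below is illustrative only — `ToolName`, `TOOL_REGISTRY`, and `run_subtask` are hypothetical names, not the repo's API (see `PolyAgent/orchestrator.py` for the actual controller):

```python
from enum import Enum
from typing import Callable

class ToolName(Enum):
    """Hypothetical sub-task types mirroring the four tool families PolyAgent calls."""
    PREDICT = "prediction"        # forward property prediction
    GENERATE = "generation"       # inverse design
    RETRIEVE = "retrieval"        # local RAG + web
    VISUALIZE = "visualization"

# Registry mapping each sub-task type to a callable; stubs stand in for real tools.
TOOL_REGISTRY: dict[ToolName, Callable[[str], str]] = {
    ToolName.PREDICT: lambda q: f"predicted properties for: {q}",
    ToolName.GENERATE: lambda q: f"candidate polymers for: {q}",
    ToolName.RETRIEVE: lambda q: f"literature evidence for: {q}",
    ToolName.VISUALIZE: lambda q: f"figure for: {q}",
}

def run_subtask(tool: ToolName, query: str) -> str:
    """Dispatch one typed sub-task to its registered tool."""
    return TOOL_REGISTRY[tool](query)
```

A controller built this way can decompose a prompt into typed sub-tasks, dispatch each, and assemble the results into a single evidence-linked response.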
#### What PolyAgent does (system-level)

- Decomposes a user request into typed sub-tasks (prediction, generation, retrieval, visualization)
- Calls tools for prediction, inverse design, retrieval (local RAG + web), and visualization
- Returns a final response with explicit evidence/citations and an experiment-ready validation plan

#### Main files

- `PolyAgent/orchestrator.py` — planning + tool routing (controller)
- `PolyAgent/rag_pipeline.py` — local retrieval utilities (PDF → chunks → embeddings → vector store)
- `PolyAgent/gradio_interface.py` — Gradio UI entrypoint

---

## Running on Hugging Face Spaces

This repository is configured as a Gradio Space via the YAML header at the top of this README.

Entry point: `app_file: PolyAgent/gradio_interface.py`

---

## Model weights and artifacts

The orchestrator downloads required artifacts (tokenizers, pretrained encoders, downstream heads, inverse-design models) from a Hugging Face model repo at runtime using `snapshot_download`.

### Default weights repo

By default, the Space expects:

```bash
POLYFUSION_WEIGHTS_REPO=kaurm43/polyfusionagent-weights
POLYFUSION_WEIGHTS_REPO_TYPE=model
```

- **Weights repo:** [kaurm43/polyfusionagent-weights](https://huggingface.co/kaurm43/polyfusionagent-weights)

### Override via environment variables (local or Space secrets)

```bash
POLYFUSION_WEIGHTS_REPO=your-org/your-weights-repo
POLYFUSION_WEIGHTS_REPO_TYPE=model
POLYFUSION_WEIGHTS_DIR=/path/to/cache
```

### Expected weights layout (inside the weights repo)

The orchestrator expects these folders/files:

- `tokenizer_spm_5m/**`
- `polyfusion_cl_5m/**`
- `downstream_heads_5m/**`
- `inverse_design_5m/**`
- `MANIFEST.txt`

If you are building your own weights repo, mirror this structure.
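The runtime download might be wired up roughly as follows. This is a sketch, not the orchestrator's actual logic: `resolve_weights_source` and `download_weights` are hypothetical names, though `snapshot_download` is the real `huggingface_hub` API and the environment-variable defaults match the ones listed above:

```python
import os

def resolve_weights_source() -> tuple[str, str]:
    """Read the weights repo from the environment, falling back to the Space defaults."""
    repo = os.environ.get("POLYFUSION_WEIGHTS_REPO", "kaurm43/polyfusionagent-weights")
    repo_type = os.environ.get("POLYFUSION_WEIGHTS_REPO_TYPE", "model")
    return repo, repo_type

def download_weights() -> str:
    """Fetch all artifacts (tokenizer_spm_5m/, polyfusion_cl_5m/, ...) into a local cache."""
    from huggingface_hub import snapshot_download  # real API; requires network at runtime
    repo, repo_type = resolve_weights_source()
    return snapshot_download(
        repo_id=repo,
        repo_type=repo_type,
        local_dir=os.environ.get("POLYFUSION_WEIGHTS_DIR"),  # None → default HF cache
    )
```

Keeping the repo id and type in environment variables is what lets a fork point the same code at its own weights repo without touching the source.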
---

## Local knowledge base (RAG)

### Chroma DB path

The orchestrator defaults to a folder path (relative or absolute):

```bash
CHROMA_DB_PATH=chroma_polymer_db_big
```

### Options

- Ship a Chroma DB folder in this repo (good for small KBs)
- Host the KB as a separate dataset/model repo and download it at runtime, as is done for the weights

---

## Local Quickstart (optional)

### 1) Create environment

```bash
python -m venv .venv
# Windows:
# .venv\Scripts\activate
# macOS/Linux:
source .venv/bin/activate
python -m pip install --upgrade pip
```

### 2) Install dependencies

```bash
pip install -r requirements.txt
```

### 3) Run the Gradio app

```bash
python PolyAgent/gradio_interface.py
```

---

## Configuration (optional)

If you fork this Space or run it locally and need these APIs, create your own keys and set them as environment variables (or add them in your own Space under Settings → Secrets).

```bash
# Hugging Face token:
# In a Hugging Face Space: Settings → Secrets → add key "HF_TOKEN" with your token value.
HF_TOKEN=hf_...

# OpenAI credentials:
# Create your own key, then set it as an environment variable or Space secret.
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4.1
```

## Reproducibility

This repo separates **(1) representation learning**, **(2) downstream tasks**, and **(3) the interactive agent/UI** so results can be reproduced end-to-end.
### What to run (in execution order)

- **Data extraction / featurization**
  - Code: `Data_Modalities.py`
- **PolyFusion pretraining (multimodal contrastive learning)**
  - Individual encoders:
    - `PolyFusion/DeBERTav2.py`
    - `PolyFusion/GINE.py`
    - `PolyFusion/SchNet.py`
    - `PolyFusion/Transformer.py`
  - Code: `PolyFusion/CL.py`
- **Downstream evaluation (property prediction)**
  - Code: `Downstream Tasks/Property_Prediction.py`
- **Inverse design (property-conditioned generation)**
  - Code: `Downstream Tasks/Polymer_Generation.py`
- **Agent + UI (PolyAgent + Gradio Space)**
  - Retrieval utilities: `PolyAgent/rag_pipeline.py`
  - Controller / tool routing: `PolyAgent/orchestrator.py`
  - Entry point: `PolyAgent/gradio_interface.py`

### Weights / artifacts

All pretrained checkpoints, tokenizers, and downstream heads are stored in the weights repository:

- https://huggingface.co/kaurm43/polyfusionagent-weights
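The retrieval step in the run order above (`PolyAgent/rag_pipeline.py`) follows a PDF → chunks → embeddings → vector store pattern. A minimal, dependency-free sketch of the chunking stage is shown below; the window size and overlap are illustrative assumptions, not the repo's actual values:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split extracted PDF text into overlapping character windows for embedding.

    Overlap keeps sentences that straddle a chunk boundary retrievable from
    either neighboring chunk.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Each chunk would then be embedded and written to the Chroma collection at `CHROMA_DB_PATH`, so the agent can retrieve literature evidence at query time.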