---
title: PolyFusionAgent Demo
sdk: gradio
python_version: '3.11'
app_file: PolyAgent/gradio_interface.py
---
# PolyFusionAgent: A Multimodal Foundation Model and Autonomous AI Assistant for Polymer Property Prediction and Inverse Design
PolyFusionAgent is an interactive framework that couples a multimodal polymer foundation model (PolyFusion) with a tool-augmented, literature-grounded design agent (PolyAgent) for polymer property prediction, inverse design, and evidence-linked scientific reasoning.
PolyFusion aligns complementary polymer views (PSMILES sequence, 2D topology, 3D structural proxies, and chemical fingerprints) into a shared latent space that transfers across chemistries and data regimes.
PolyAgent closes the design loop by connecting prediction, generation, retrieval, and visualization, so recommendations are contextualized with explicit supporting precedent.
## Links
- Live Space: kaurm43/PolyFusionAgent
- Weights repo: kaurm43/polyfusionagent-weights
- Weights file browser: weights/tree/main
## Abstract
Polymers underpin technologies from energy storage to biomedicine, yet discovery remains constrained by an astronomically large design space and fragmented representations of polymer structure, properties, and prior knowledge. Although machine learning has advanced property prediction and candidate generation, most models remain disconnected from the physical and experimental context needed for actionable materials design.
Here we introduce PolyFusionAgent, an interactive framework that couples a multimodal polymer foundation model (PolyFusion) with a tool-augmented, literature-grounded design agent (PolyAgent). PolyFusion aligns complementary polymer views (sequence, topology, three-dimensional structural proxies, and chemical fingerprints) across millions of polymers to learn a shared latent space that transfers across chemistries and data regimes. Using this unified representation, PolyFusion improves prediction of key thermophysical properties and enables property-conditioned generation of chemically valid, structurally novel polymers that extend beyond the reference design space.
PolyAgent closes the design loop by coupling prediction and inverse design to evidence retrieval from the polymer literature, so that hypotheses are proposed, evaluated, and contextualized with explicit supporting precedent in a single workflow. Together, PolyFusionAgent establishes a route toward interactive, evidence-linked polymer discovery that combines large-scale representation learning, multimodal chemical knowledge, and verifiable scientific reasoning.
## Repository Structure
```text
.
├── PolyAgent/
│   ├── gradio_interface.py    # Gradio UI (Console / Tools / Other LLMs)
│   ├── orchestrator.py        # Controller: planning + tool registry + execution
│   └── rag_pipeline.py        # Local KB + web retrieval + PDF ingestion utilities
├── PolyFusion/
│   ├── CL.py                  # Multimodal contrastive learning utilities
│   ├── DeBERTav2.py           # PSMILES encoder wrapper (HF Transformers)
│   ├── GINE.py                # 2D graph encoder (PyTorch Geometric)
│   ├── SchNet.py              # 3D geometry encoder (PyTorch Geometric SchNet)
│   └── Transformer.py         # Fingerprint transformer encoder
├── Downstream Tasks/
│   ├── Polymer_Generation.py  # Inverse design / generation utilities
│   └── Property_Prediction.py # Property prediction utilities
├── Data_Modalities.py         # CSV → multimodal extraction (2D graph / 3D geometry / fingerprints) + wildcard handling
├── requirements.txt
└── README.md
```
## What PolyFusionAgent can do

### PolyFusion
1) Multimodal extraction from PSMILES (`Data_Modalities.py`)
- Wildcard handling: attachment points `[*]` are mapped to a rare marker (e.g., `[At]`) for stable RDKit featurization, then converted back for modality construction, display, and generation outputs (sketched after this list)
- Builds PSMILES sequence inputs for the language encoder
- Constructs RDKit-based 2D atom/bond graphs (node/edge features + connectivity)
- Generates ETKDG 3D conformer proxies with force-field relaxation fallback
- Computes Morgan (ECFP-style) fingerprints (fixed length, configurable radius)
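
A minimal sketch of the wildcard, conformer, and fingerprint steps above. Helper names and defaults are illustrative assumptions; `Data_Modalities.py` defines the actual API.

```python
from rdkit import Chem
from rdkit.Chem import AllChem, rdFingerprintGenerator

WILDCARD, MARKER = "[*]", "[At]"  # attachment point <-> rare-element marker

def featurize(psmiles: str, radius: int = 2, n_bits: int = 2048):
    # Wildcard handling: swap [*] for a rare marker so RDKit parses stably
    mol = Chem.MolFromSmiles(psmiles.replace(WILDCARD, MARKER))
    # Morgan (ECFP-style) fingerprint: fixed length, configurable radius
    gen = rdFingerprintGenerator.GetMorganGenerator(radius=radius, fpSize=n_bits)
    fp = gen.GetFingerprint(mol)
    # ETKDG 3D conformer proxy; force-field relaxation as a best-effort step
    mol3d = Chem.AddHs(mol)
    if AllChem.EmbedMolecule(mol3d, AllChem.ETKDGv3()) == 0:  # 0 == success
        AllChem.UFFOptimizeMolecule(mol3d)
    # Convert the marker back for display/generation outputs
    display_smiles = Chem.MolToSmiles(mol).replace(MARKER, WILDCARD)
    return display_smiles, fp, mol3d

print(featurize("[*]CC[*]")[0])  # polyethylene repeat unit -> e.g. "[*]CC[*]"
```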
2) Multimodal foundation embedding (`PolyFusion/*`)
- Encoders per modality:
  - PSMILES Transformer (`PolyFusion/DeBERTav2.py`)
  - GINE for 2D graphs (`PolyFusion/GINE.py`)
  - SchNet for 3D geometry (`PolyFusion/SchNet.py`)
  - Fingerprint Transformer (`PolyFusion/Transformer.py`)
- Projects each modality into a shared, unit-normalized latent space
- Uses contrastive alignment in which a fused structural anchor (PSMILES + 2D graph + 3D geometry) is aligned with a fingerprint target (`PolyFusion/CL.py`); see the sketch below
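
Read as a loss, the alignment can be sketched as a symmetric InfoNCE objective between the fused structural anchor and the fingerprint target. The fusion rule and hyperparameters below are assumptions for illustration; `PolyFusion/CL.py` is authoritative.

```python
import torch
import torch.nn.functional as F

def alignment_loss(anchor: torch.Tensor, target: torch.Tensor,
                   temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE between structural anchors and fingerprint targets.

    anchor: fused structural embedding, shape (batch, dim), e.g. a combination
            of projected PSMILES / 2D-graph / 3D-geometry embeddings (assumed)
    target: projected fingerprint embedding, shape (batch, dim)
    """
    a = F.normalize(anchor, dim=-1)   # unit-normalized shared latent space
    t = F.normalize(target, dim=-1)
    logits = a @ t.T / temperature    # cosine similarities for all pairs
    labels = torch.arange(a.size(0), device=a.device)  # matches on the diagonal
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.T, labels))
```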
### Downstream Tasks
3) Forward property prediction (structure → properties) (`Downstream Tasks/Property_Prediction.py`)
- Lightweight regressors on top of PolyFusion embeddings for thermophysical property prediction
- Returns predictions in original units, with standardization handled internally (see the sketch after this list)
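
A hypothetical shape for such a head, with the standardization detail made explicit. Layer sizes and the exact scheme are assumptions; see `Downstream Tasks/Property_Prediction.py` for the actual implementation.

```python
import torch
import torch.nn as nn

class PropertyHead(nn.Module):
    """Lightweight regressor on a frozen PolyFusion embedding that returns
    predictions in original units (standardization handled internally)."""

    def __init__(self, embed_dim: int, target_mean: float, target_std: float):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(embed_dim, 256), nn.GELU(),
                                 nn.Linear(256, 1))
        # Store training-set statistics so callers never see standardized units
        self.register_buffer("mean", torch.tensor(target_mean))
        self.register_buffer("std", torch.tensor(target_std))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        y_std = self.mlp(z).squeeze(-1)      # prediction in standardized space
        return y_std * self.std + self.mean  # de-standardize to original units
```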
4) Inverse design / polymer generation (targets → candidates) (`Downstream Tasks/Polymer_Generation.py`)
- Property-conditioned candidate generation using PolyFusion embeddings as the conditioning interface for a sequence-to-sequence generator (SELFIES-based encoder-decoder)
- Produces candidate lists suitable for generate → filter → validate workflows (sketched below)
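
A hedged sketch of the filter step in that workflow; `generator.sample` is a placeholder name, and the actual SELFIES-based encoder-decoder lives in `Downstream Tasks/Polymer_Generation.py`.

```python
import selfies as sf
from rdkit import Chem

def filter_candidates(selfies_batch: list[str]) -> list[str]:
    """Keep chemically valid, canonicalized, de-duplicated candidates."""
    valid = set()
    for s in selfies_batch:
        smiles = sf.decoder(s)            # SELFIES -> SMILES
        mol = Chem.MolFromSmiles(smiles)  # None if chemically invalid
        if mol is not None:
            valid.add(Chem.MolToSmiles(mol))  # canonical form deduplicates
    return sorted(valid)

# candidates = filter_candidates(generator.sample(cond_embedding, n=128))
# ...followed by property-prediction validation of the surviving candidates.
```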
### PolyAgent
Goal
Convert open-ended polymer design prompts into grounded, constraint-consistent, evidence-linked outputs by coupling PolyFusion with tool-mediated verification and retrieval.
What PolyAgent does (system-level)
- Decomposes a user request into typed sub-tasks (prediction, generation, retrieval, visualization)
- Calls tools for prediction, inverse design, retrieval (local RAG + web), and visualization
- Returns a final response with explicit evidence/citations and an experiment-ready validation plan (a registry-style sketch follows this list)
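
Illustratively, this controller pattern can be sketched as a tool registry plus a plan executor. All names below are invented for illustration; `PolyAgent/orchestrator.py` is the real controller.

```python
from typing import Any, Callable

TOOLS: dict[str, Callable[..., Any]] = {}

def tool(name: str):
    """Register a callable under a tool name the planner can reference."""
    def register(fn: Callable[..., Any]) -> Callable[..., Any]:
        TOOLS[name] = fn
        return fn
    return register

@tool("predict")
def predict_properties(psmiles: str) -> dict:
    ...  # forward property prediction via PolyFusion embeddings

@tool("generate")
def generate_candidates(targets: dict) -> list[str]:
    ...  # property-conditioned inverse design

@tool("retrieve")
def retrieve_evidence(query: str) -> list[dict]:
    ...  # local RAG + web retrieval

def run_plan(plan: list[dict]) -> list[Any]:
    """Execute typed sub-tasks produced by the planner, in order."""
    return [TOOLS[step["tool"]](**step["args"]) for step in plan]
```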
Main files
- `PolyAgent/orchestrator.py`: planning + tool routing (controller)
- `PolyAgent/rag_pipeline.py`: local retrieval utilities (PDF → chunks → embeddings → vector store)
- `PolyAgent/gradio_interface.py`: Gradio UI entrypoint
## Running on Hugging Face Spaces
This repository is configured as a Gradio Space via the YAML header at the top of this README.
Entry point: `app_file: PolyAgent/gradio_interface.py`
## Model weights and artifacts
The orchestrator downloads required artifacts (tokenizers, pretrained encoders, downstream heads, inverse-design models) from a Hugging Face model repo at runtime using snapshot_download.
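
In outline, that download step looks like the following; `snapshot_download` is the real `huggingface_hub` API, while the environment-variable plumbing is an assumption based on the variables documented below.

```python
import os
from huggingface_hub import snapshot_download

weights_dir = snapshot_download(
    repo_id=os.getenv("POLYFUSION_WEIGHTS_REPO", "kaurm43/polyfusionagent-weights"),
    repo_type=os.getenv("POLYFUSION_WEIGHTS_REPO_TYPE", "model"),
    local_dir=os.getenv("POLYFUSION_WEIGHTS_DIR"),  # optional cache override
    token=os.getenv("HF_TOKEN"),                    # only needed for private repos
)
# weights_dir now contains tokenizer_spm_5m/, polyfusion_cl_5m/, etc.
```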
### Default weights repo

By default, the Space expects:

```
POLYFUSION_WEIGHTS_REPO=kaurm43/polyfusionagent-weights
POLYFUSION_WEIGHTS_REPO_TYPE=model
```
### Override via environment variables (local or Space secrets)

```
POLYFUSION_WEIGHTS_REPO=your-org/your-weights-repo
POLYFUSION_WEIGHTS_REPO_TYPE=model
POLYFUSION_WEIGHTS_DIR=/path/to/cache
```
### Expected weights layout (inside the weights repo)

The orchestrator expects these folders/files:
- `tokenizer_spm_5m/**`
- `polyfusion_cl_5m/**`
- `downstream_heads_5m/**`
- `inverse_design_5m/**`
- `MANIFEST.txt`
If you are building your own weights repo, mirror this structure.
## Local knowledge base (RAG)
### Chroma DB path

The orchestrator defaults to a folder path (relative or absolute):

```
CHROMA_DB_PATH=chroma_polymer_db_big
```
Options
- Ship a Chroma DB folder in this repo (good for small KBs)
- Host the KB as a separate dataset/model repo and download it the same way as the weights (see the sketch below)
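
Opening a local KB might look like this; `PersistentClient` is the real `chromadb` API, while the collection name and query are illustrative assumptions.

```python
import os
import chromadb

client = chromadb.PersistentClient(
    path=os.getenv("CHROMA_DB_PATH", "chroma_polymer_db_big"))
collection = client.get_or_create_collection("polymer_kb")  # assumed name
hits = collection.query(query_texts=["glass transition of polyimides"],
                        n_results=5)
```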
## Local Quickstart (optional)
1) Create environment

```bash
python -m venv .venv
# Windows:
# .venv\Scripts\activate
# macOS/Linux:
source .venv/bin/activate
python -m pip install --upgrade pip
```

2) Install dependencies

```bash
pip install -r requirements.txt
```

3) Run the Gradio app

```bash
python PolyAgent/gradio_interface.py
```
## Configuration (optional)
```bash
# If you fork or run this locally and need these APIs, create your own keys and
# set them as environment variables (or add them in your own Space Settings → Secrets).

# Hugging Face token:
# In a Hugging Face Space: Settings → Secrets → add key "HF_TOKEN" with your token value.
HF_TOKEN=hf_...

# OpenAI credentials:
# Create your own key, then set it as an environment variable or Space secret.
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4.1
```
## Reproducibility
This repo separates (1) representation learning, (2) downstream tasks, and (3) the interactive agent/UI so results can be reproduced end-to-end.
### What to run (in execution order)

1. Data extraction / featurization
   - Code: `Data_Modalities.py`
2. PolyFusion pretraining (multimodal contrastive learning)
   - Individual encoders: `PolyFusion/DeBERTav2.py`, `PolyFusion/GINE.py`, `PolyFusion/SchNet.py`, `PolyFusion/Transformer.py`
   - Code: `PolyFusion/CL.py`
3. Downstream evaluation (property prediction)
   - Code: `Downstream Tasks/Property_Prediction.py`
4. Inverse design (property-conditioned generation)
   - Code: `Downstream Tasks/Polymer_Generation.py`
5. Agent + UI (PolyAgent + Gradio Space)
   - Retrieval utilities: `PolyAgent/rag_pipeline.py`
   - Controller / tool routing: `PolyAgent/orchestrator.py`
   - Entry point: `PolyAgent/gradio_interface.py`
## Weights / artifacts

All pretrained checkpoints, tokenizers, and downstream heads are stored in the weights repository: kaurm43/polyfusionagent-weights