---
license: cc-by-nc-nd-4.0
---
<img src="pixcell_1024_banner.png" alt="pixcell_1024_banner" width="500"/>
# PixCell: A generative foundation model for digital histopathology images
[[📄 arXiv]](https://arxiv.org/abs/2506.05127) [[GitHub]](https://github.com/cvlab-stonybrook/PixCell) [[🔬 PixCell-1024]](https://huggingface.co/StonyBrook-CVLab/PixCell-1024) [[🔬 PixCell-256]](https://huggingface.co/StonyBrook-CVLab/PixCell-256) [[🔬 PixCell-256-Cell-ControlNet]](https://huggingface.co/StonyBrook-CVLab/PixCell-256-Cell-ControlNet) [[💾 Synthetic-TCGA-10M]](https://huggingface.co/datasets/StonyBrook-CVLab/Synthetic-TCGA-10M)
### Load PixCell-1024 model
```python
import torch
from diffusers import DiffusionPipeline
from diffusers import AutoencoderKL

device = torch.device('cuda')

# We do not host the weights of the SD3 VAE -- load it from StabilityAI
sd3_vae = AutoencoderKL.from_pretrained("stabilityai/stable-diffusion-3.5-large", subfolder="vae")

pipeline = DiffusionPipeline.from_pretrained(
    "StonyBrook-CVLab/PixCell-1024",
    vae=sd3_vae,
    custom_pipeline="StonyBrook-CVLab/PixCell-pipeline",
    trust_remote_code=True,
    torch_dtype=torch.float16,
)
pipeline.to(device);
```
### Load [[UNI-2h]](https://huggingface.co/MahmoodLab/UNI2-h) for conditioning
```python
import timm
from timm.data import resolve_data_config
from timm.data.transforms_factory import create_transform

timm_kwargs = {
    'img_size': 224,
    'patch_size': 14,
    'depth': 24,
    'num_heads': 24,
    'init_values': 1e-5,
    'embed_dim': 1536,
    'mlp_ratio': 2.66667 * 2,
    'num_classes': 0,
    'no_embed_class': True,
    'mlp_layer': timm.layers.SwiGLUPacked,
    'act_layer': torch.nn.SiLU,
    'reg_tokens': 8,
    'dynamic_img_size': True,
}
uni_model = timm.create_model("hf-hub:MahmoodLab/UNI2-h", pretrained=True, **timm_kwargs)
transform = create_transform(**resolve_data_config(uni_model.pretrained_cfg, model=uni_model))
uni_model.eval()
uni_model.to(device);
```
### Unconditional generation
```python
# Get the learned unconditional embedding for a batch of 1
uncond = pipeline.get_unconditional_embedding(1)

with torch.amp.autocast('cuda'):
    samples = pipeline(uni_embeds=uncond, negative_uni_embeds=None, guidance_scale=1.0).images
```
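With `guidance_scale=1.0` and no negative embeddings, classifier-free guidance is effectively disabled. For reference, CFG blends the unconditional and conditional noise predictions as `pred = uncond + s * (cond - uncond)`, so `s = 1.0` reduces to the plain conditional prediction. A toy pure-Python sketch (the function name is illustrative, not part of the PixCell API):

```python
def cfg_combine(uncond_pred, cond_pred, guidance_scale):
    """Classifier-free guidance: extrapolate from the unconditional
    prediction toward the conditional one, elementwise."""
    return [u + guidance_scale * (c - u) for u, c in zip(uncond_pred, cond_pred)]

# With scale 1.0 the result is exactly the conditional prediction.
print(cfg_combine([0.0, 2.0], [1.0, 4.0], 1.0))  # [1.0, 4.0]
# With scale 1.5 (used in the conditional example below) the
# conditioning signal is amplified.
print(cfg_combine([0.0, 2.0], [1.0, 4.0], 1.5))  # [1.5, 5.0]
```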
### Conditional generation
```python
# Load image
import numpy as np
import einops
from PIL import Image
from huggingface_hub import hf_hub_download
# This is an example image we provide
path = hf_hub_download(repo_id="StonyBrook-CVLab/PixCell-1024", filename="test_image.png")
image = Image.open(path).convert("RGB")
# Rearrange 1024x1024 image into 16 256x256 patches
uni_patches = np.array(image)
uni_patches = einops.rearrange(uni_patches, '(d1 h) (d2 w) c -> (d1 d2) h w c', d1=4, d2=4)
uni_input = torch.stack([transform(Image.fromarray(item)) for item in uni_patches])
# Extract UNI embeddings
with torch.inference_mode():
    uni_emb = uni_model(uni_input.to(device))
# Add a batch dimension: (16, D) -> (bs=1, 16, D)
uni_emb = uni_emb.unsqueeze(0)
print("Extracted UNI:", uni_emb.shape)
# Get unconditional embedding for classifier-free guidance
uncond = pipeline.get_unconditional_embedding(uni_emb.shape[0])
# Generate new samples
with torch.amp.autocast('cuda'):
    samples = pipeline(uni_embeds=uni_emb, negative_uni_embeds=uncond, guidance_scale=1.5).images
```
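The `einops.rearrange` pattern above tiles the 1024×1024 image into a 4×4 grid of 256×256 patches in row-major order (patch index `d1 * 4 + d2`), one patch per UNI embedding. A minimal pure-Python sketch of the same indexing, demonstrated on a toy 4×4 "image" split into four 2×2 patches (the helper name is illustrative):

```python
def split_into_patches(image, grid):
    """Split a square image (a list of rows) into a row-major list of
    patches -- same ordering as
    einops.rearrange('(d1 h) (d2 w) -> (d1 d2) h w', d1=grid, d2=grid)."""
    side = len(image)
    p = side // grid  # patch side length
    patches = []
    for d1 in range(grid):        # patch-grid row
        for d2 in range(grid):    # patch-grid column
            patch = [row[d2 * p:(d2 + 1) * p]
                     for row in image[d1 * p:(d1 + 1) * p]]
            patches.append(patch)
    return patches

# Toy 4x4 "image" with a distinct value per pixel.
img = [[r * 4 + c for c in range(4)] for r in range(4)]
patches = split_into_patches(img, grid=2)
print(patches[0])  # top-left patch: [[0, 1], [4, 5]]
print(patches[3])  # bottom-right patch: [[10, 11], [14, 15]]
```

With `grid=4` and 256×256 patches this reproduces the ordering fed to UNI above, so embedding `i` corresponds to grid cell `(i // 4, i % 4)` of the 1024×1024 image.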
### License & Usage
**License**: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
**Notice**: This model is a derivative work conditioned on embeddings from the [[UNI-2h]](https://huggingface.co/MahmoodLab/UNI2-h) foundation model. As such, it is subject to the original terms of the UNI2 license.
- Academic & Research Use Only: You may use these weights for non-commercial research purposes.
- No Commercial Use: You may not use this model for any commercial purpose, including product development or commercial services.