---
license: apache-2.0
pipeline_tag: text-to-3d
---

# Memorization in 3D Shape Generation

This repository contains the official implementation of the paper *Memorization in 3D Shape Generation: An Empirical Study*.

Project Page | GitHub Repository

## Description

Generative models are increasingly used in 3D vision to synthesize novel shapes, yet it remains unclear to what extent their outputs rely on memorized training shapes. This work introduces an evaluation framework to quantify memorization (denoted $Z_U$) in 3D generative models and studies how different data and modeling design choices influence it. Through controlled experiments with a latent vector-set (Vecset) diffusion model, the authors analyze these factors and propose strategies that reduce memorization without degrading generation quality.
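The paper defines the $Z_U$ score precisely; as a purely illustrative sketch (not the paper's implementation), one common way to probe memorization is to measure how close each generated shape is to its nearest training shape under Chamfer distance. All function names and the point-cloud representation below are hypothetical:

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between two point clouds of shape (N, 3) and (M, 3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise point distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def nearest_train_distance(generated: np.ndarray, train_set: list) -> float:
    """Distance from one generated shape to its closest training shape.
    Unusually small values suggest the sample may be a (near-)copy of training data."""
    return min(chamfer_distance(generated, t) for t in train_set)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = [rng.standard_normal((128, 3)) for _ in range(5)]
    memorized = train[0] + rng.normal(0.0, 1e-3, train[0].shape)  # near-copy of a training shape
    novel = rng.standard_normal((128, 3))                          # unrelated shape
    # A near-copy sits much closer to the training set than a novel sample.
    print(nearest_train_distance(memorized, train), nearest_train_distance(novel, train))
```

A full metric such as $Z_U$ would aggregate such per-sample statistics over the whole generated set and normalize against expected inter-shape distances; see the paper for the actual definition.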

## Usage

For environment setup and instructions on data curation, please refer to the official GitHub repository.

### Inference

#### Text-conditional generation

```bash
python inference.py \
    --config configs/Baseline.yaml \
    --ckpt PATH/TO/CHECKPOINT \
    --out_dir outputs_text/ \
    --num_samples 4 \
    --text "a chair"
```

#### Class-conditional generation

```bash
python inference.py \
    --config configs/Conditioning/LVIS-16-Category.yaml \
    --ckpt PATH/TO/CHECKPOINT \
    --out_dir outputs_lvis/ \
    --num_samples 4 \
    --class_id 0
```

#### Image-conditional generation

```bash
python inference.py \
    --config configs/Conditioning/Image.yaml \
    --ckpt PATH/TO/CHECKPOINT \
    --out_dir outputs_image/ \
    --num_samples 1 \
    --image_path data_sprite.png \
    --image_views 12 \
    --image_pick random
```

## Citation

```bibtex
@article{pu2025memorization,
  title={Memorization in 3D Shape Generation: An Empirical Study},
  author={Pu, Shu and Zeng, Boya and Zhou, Kaichen and Wang, Mengyu and Liu, Zhuang},
  journal={arXiv preprint arXiv:2512.23628},
  year={2025}
}
```