---
license: apache-2.0
pipeline_tag: text-to-3d
---

# Memorization in 3D Shape Generation

This repository contains the official implementation of the paper [Memorization in 3D Shape Generation: An Empirical Study](https://huggingface.co/papers/2512.23628).

[**Project Page**](https://urrealhero.github.io/3DGenMemorizationWeb/) | [**GitHub Repository**](https://github.com/zlab-princeton/3d_mem)

## Description

Generative models are increasingly used in 3D vision to synthesize novel shapes, yet it remains unclear whether their generations rely on memorized training shapes. This work introduces an evaluation framework that quantifies memorization ($Z_U$) in 3D generative models and studies how different data and modeling design choices influence memorization. Through controlled experiments with a latent vector-set (Vecset) diffusion model, the authors provide analysis and strategies for reducing memorization without degrading generation quality.

## Usage

For environment setup and data curation instructions, please refer to the [official GitHub repository](https://github.com/zlab-princeton/3d_mem).
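The exact definition of $Z_U$ is given in the paper and the official repository. As a rough, purely illustrative sketch of the general idea behind nearest-neighbor memorization checks (not the paper's implementation), the snippet below flags a generated shape as near-copied when its Chamfer distance to the closest training shape is small relative to the training set's own typical nearest-neighbor distance. All function names and the scoring rule here are hypothetical assumptions.

```python
import numpy as np

def chamfer(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between two (N, 3) point clouds."""
    # Pairwise Euclidean distances between all points of a and b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def memorization_score(generated: np.ndarray, train_set: list) -> float:
    """Illustrative nearest-neighbor ratio: distance from a generated shape
    to its closest training shape, normalized by the average nearest-neighbor
    distance within the training set. Values much smaller than 1 suggest the
    sample is close to a training shape. (A hypothetical stand-in, not the
    paper's Z_U definition.)"""
    d_gen = min(chamfer(generated, t) for t in train_set)
    d_train = np.mean([
        min(chamfer(t, u) for j, u in enumerate(train_set) if j != i)
        for i, t in enumerate(train_set)
    ])
    return d_gen / d_train
```

An exact copy of a training shape scores 0 under this ratio, while a genuinely novel shape scores closer to (or above) 1; the paper's actual metric and its calibration should be taken from the repository.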
### Inference

**Text-conditional generation**

```bash
python inference.py \
    --config configs/Baseline.yaml \
    --ckpt PATH/TO/CHECKPOINT \
    --out_dir outputs_text/ \
    --num_samples 4 \
    --text "a chair"
```

**Class-conditional generation**

```bash
python inference.py \
    --config configs/Conditioning/LVIS-16-Category.yaml \
    --ckpt PATH/TO/CHECKPOINT \
    --out_dir outputs_lvis/ \
    --num_samples 4 \
    --class_id 0
```

**Image-conditional generation**

```bash
python inference.py \
    --config configs/Conditioning/Image.yaml \
    --ckpt PATH/TO/CHECKPOINT \
    --out_dir outputs_image \
    --num_samples 1 \
    --image_path data_sprite.png \
    --image_views 12 \
    --image_pick random
```

## Citation

```bibtex
@article{pu2025memorization,
  title={Memorization in 3D Shape Generation: An Empirical Study},
  author={Pu, Shu and Zeng, Boya and Zhou, Kaichen and Wang, Mengyu and Liu, Zhuang},
  journal={arXiv preprint arXiv:2512.23628},
  year={2025}
}
```