---
task_categories:
- image-text-to-text
license: other
dataset_info:
  features:
  - name: filename
    dtype: string
  - name: size
    dtype: int64
  - name: url
    dtype: string
  - name: license
    dtype: string
  - name: title
    dtype: string
  - name: created
    dtype: string
  - name: updated
    dtype: string
  - name: doi
    dtype: string
  - name: checksum
    dtype: string
  splits:
  - name: pptx
    num_bytes: 3925161
    num_examples: 10448
  download_size: 2028492
  dataset_size: 3925161
configs:
- config_name: default
  data_files:
  - split: pptx
    path: data/pptx-*
---

# PPTAgent/Zenodo10K

This is the dataset used in [PPTAgent: Generating and Evaluating Presentations Beyond Text-to-Slides](https://arxiv.org/abs/2501.03936), and it is also used for evaluation in [VLM-SlideEval: Evaluating VLMs on Structured Comprehension and Perturbation Sensitivity in PPT](https://huggingface.co/papers/2510.22045). It was crawled from [Zenodo](http://zenodo.org). To the best of our knowledge, it is the **largest presentation dataset** currently available, comprising over 10,000 **PowerPoint (.pptx)** files, all distributed under clear and compliant licenses.

For more information regarding the `PPTAgent` project, please visit its [GitHub repo](https://github.com/icip-cas/PPTAgent).

## VLM-SlideEval Abstract

Vision-language models (VLMs) are increasingly used to evaluate multimodal content, including presentation slides, yet their slide-specific understanding remains underexplored, despite their growing role as critics in agentic, model-forward pipelines. We introduce VLM-SlideEval, an evaluation framework that probes VLMs along three axes: (1) element-level extraction from slide images aligned to ground truth; (2) robustness to controlled perturbations in geometry, style, and text; and (3) higher-level comprehension, such as recovering a deck's narrative order from shuffled slides. Using publicly available decks from Zenodo (this dataset), we standardize ground-truth element metadata from PowerPoint XML and live renderings into a unified, verifiable schema. Empirically, VLMs underperform on pixel-accurate extraction and show non-trivial agreement, fidelity, and consistency under controlled perturbations, while performing better on single-slide content understanding; however, they do not reliably capture narrative structure across slides. These results highlight the limits of current VLMs for slide evaluation and motivate calibrated, critic-in-the-loop evaluators that drive iterative refinement and selection in agentic pipelines.
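## Loading the Metadata

The metadata table (one row per crawled file) can be read with the 🤗 `datasets` library. Below is a minimal sketch; the repository ID `PPTAgent/Zenodo10K` is assumed from this card's title and may differ from the actual Hub ID:

```python
from datasets import load_dataset

# Repository ID assumed from the card title; adjust if the Hub ID differs.
ds = load_dataset("PPTAgent/Zenodo10K", split="pptx")

# Each record carries: filename, size, url, license, title,
# created, updated, doi, checksum.
print(ds[0])
```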
## Sample Usage

The snippet below shows how the crawled files are laid out on disk: each file is stored under its license and creation year, with its MD5 digest prepended to the filename. Names that exceed filesystem limits are truncated to 240 characters. (A hedged download sketch follows the citations at the end of this card.)

```python
# `task` is one record from the dataset; files are stored under
# zenodo-pptx/pptx/<license>/<creation-year>/<checksum>-<filename>.
dirname = f"zenodo-pptx/pptx/{task['license']}/{task['created'][:4]}/"
# Drop the "md5:" prefix from the checksum and prepend the digest to the filename.
basename = f"{task['checksum'][4:]}-{task['filename']}"
filepath = dirname + basename
try:
    # Probe whether the filesystem accepts this filename (some crawled names are very long).
    open("/tmp/" + basename, "wb").close()
except OSError:
    # Fall back to a name truncated to 240 characters, keeping the .pptx extension.
    filepath = dirname + basename[:240] + ".pptx"
```

## Citation

If you find this project helpful, please use the following to cite it:

```bibtex
@misc{zheng2025pptagentgeneratingevaluatingpresentations,
  title={PPTAgent: Generating and Evaluating Presentations Beyond Text-to-Slides},
  author={Hao Zheng and Xinyan Guan and Hao Kong and Jia Zheng and Hongyu Lin and Yaojie Lu and Ben He and Xianpei Han and Le Sun},
  year={2025},
  eprint={2501.03936},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2501.03936},
}
```

If you use this dataset for VLM evaluation, please also cite:

```bibtex
@article{xu2025vlm,
  title={VLM-SlideEval: Evaluating VLMs on Structured Comprehension and Perturbation Sensitivity in PPT},
  author={Xu, Charles and Li, Qiyang and Luo, Jianlan and Levine, Sergey},
  journal={arXiv preprint arXiv:2510.22045},
  year={2025}
}
```
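## Downloading a File

As referenced in the Sample Usage section above, here is a minimal sketch for fetching a single file from its Zenodo `url` into the path built there. The use of `requests` and the error handling are illustrative assumptions, not part of the original pipeline:

```python
import os

import requests

# `task`, `dirname`, and `filepath` come from the Sample Usage snippet above.
os.makedirs(dirname, exist_ok=True)
response = requests.get(task["url"], timeout=60)
response.raise_for_status()  # fail loudly on HTTP errors
with open(filepath, "wb") as f:
    f.write(response.content)
```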