hysts-bot committed (verified) · Commit 7c8792f · Parent(s): 2962d49

Upload data.json with huggingface_hub

Files changed (1): data.json (+61 −0)
data.json CHANGED
@@ -109720,5 +109720,66 @@
   ],
   "github": "",
   "abstract": "We introduce CHOrD, a novel framework for scalable synthesis of 3D indoor scenes, designed to create house-scale, collision-free, and hierarchically structured indoor digital twins. In contrast to existing methods that directly synthesize the scene layout as a scene graph or object list, CHOrD incorporates a 2D image-based intermediate layout representation, enabling effective prevention of collision artifacts by successfully capturing them as out-of-distribution (OOD) scenarios during generation. Furthermore, unlike existing methods, CHOrD is capable of generating scene layouts that adhere to complex floor plans with multi-modal controls, enabling the creation of coherent, house-wide layouts robust to both geometric and semantic variations in room structures. Additionally, we propose a novel dataset with expanded coverage of household items and room configurations, as well as significantly improved data quality. CHOrD demonstrates state-of-the-art performance on both the 3D-FRONT and our proposed datasets, delivering photorealistic, spatially coherent indoor scene synthesis adaptable to arbitrary floor plan variations."
+  },
+  {
+    "date": "2025-03-18",
+    "arxiv_id": "2503.13399",
+    "title": "MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research",
+    "authors": [
+      "James Burgess",
+      "Jeffrey J Nirschl",
+      "Laura Bravo-S\u00e1nchez",
+      "Alejandro Lozano",
+      "Sanket Rajan Gupte",
+      "Jesus G. Galaz-Montoya",
+      "Yuhui Zhang",
+      "Yuchang Su",
+      "Disha Bhowmik",
+      "Zachary Coman",
+      "Sarina M. Hasan",
+      "Alexandra Johannesson",
+      "William D. Leineweber",
+      "Malvika G Nair",
+      "Ridhi Yarlagadda",
+      "Connor Zuraski",
+      "Wah Chiu",
+      "Sarah Cohen",
+      "Jan N. Hansen",
+      "Manuel D Leonetti",
+      "Chad Liu",
+      "Emma Lundberg",
+      "Serena Yeung-Levy"
+    ],
+    "github": "",
+    "abstract": "Scientific research demands sophisticated reasoning over multimodal data, a challenge especially prevalent in biology. Despite recent advances in multimodal large language models (MLLMs) for AI-assisted research, existing multimodal reasoning benchmarks only target up to college-level difficulty, while research-level benchmarks emphasize lower-level perception, falling short of the complex multimodal reasoning needed for scientific discovery. To bridge this gap, we introduce MicroVQA, a visual-question answering (VQA) benchmark designed to assess three reasoning capabilities vital in research workflows: expert image understanding, hypothesis generation, and experiment proposal. MicroVQA consists of 1,042 multiple-choice questions (MCQs) curated by biology experts across diverse microscopy modalities, ensuring VQA samples represent real scientific practice. In constructing the benchmark, we find that standard MCQ generation methods induce language shortcuts, motivating a new two-stage pipeline: an optimized LLM prompt structures question-answer pairs into MCQs; then, an agent-based `RefineBot' updates them to remove shortcuts. Benchmarking on state-of-the-art MLLMs reveal a peak performance of 53\\%; models with smaller LLMs only slightly underperform top models, suggesting that language-based reasoning is less challenging than multimodal reasoning; and tuning with scientific articles enhances performance. Expert analysis of chain-of-thought responses shows that perception errors are the most frequent, followed by knowledge errors and then overgeneralization errors. These insights highlight the challenges in multimodal scientific reasoning, showing MicroVQA is a valuable resource advancing AI-driven biomedical research. MicroVQA is available at https://huggingface.co/datasets/jmhb/microvqa, and project page at https://jmhb0.github.io/microvqa."
+  },
+  {
+    "date": "2025-03-18",
+    "arxiv_id": "2503.12590",
+    "title": "Personalize Anything for Free with Diffusion Transformer",
+    "authors": [
+      "Haoran Feng",
+      "Zehuan Huang",
+      "Lin Li",
+      "Hairong Lv",
+      "Lu Sheng"
+    ],
+    "github": "",
+    "abstract": "Personalized image generation aims to produce images of user-specified concepts while enabling flexible editing. Recent training-free approaches, while exhibit higher computational efficiency than training-based methods, struggle with identity preservation, applicability, and compatibility with diffusion transformers (DiTs). In this paper, we uncover the untapped potential of DiT, where simply replacing denoising tokens with those of a reference subject achieves zero-shot subject reconstruction. This simple yet effective feature injection technique unlocks diverse scenarios, from personalization to image editing. Building upon this observation, we propose Personalize Anything, a training-free framework that achieves personalized image generation in DiT through: 1) timestep-adaptive token replacement that enforces subject consistency via early-stage injection and enhances flexibility through late-stage regularization, and 2) patch perturbation strategies to boost structural diversity. Our method seamlessly supports layout-guided generation, multi-subject personalization, and mask-controlled editing. Evaluations demonstrate state-of-the-art performance in identity preservation and versatility. Our work establishes new insights into DiTs while delivering a practical paradigm for efficient personalization."
+  },
+  {
+    "date": "2025-03-18",
+    "arxiv_id": "2503.11495",
+    "title": "V-STaR: Benchmarking Video-LLMs on Video Spatio-Temporal Reasoning",
+    "authors": [
+      "Zixu Cheng",
+      "Jian Hu",
+      "Ziquan Liu",
+      "Chenyang Si",
+      "Wei Li",
+      "Shaogang Gong"
+    ],
+    "github": "",
+    "abstract": "Human processes video reasoning in a sequential spatio-temporal reasoning logic, we first identify the relevant frames (\"when\") and then analyse the spatial relationships (\"where\") between key objects, and finally leverage these relationships to draw inferences (\"what\"). However, can Video Large Language Models (Video-LLMs) also \"reason through a sequential spatio-temporal logic\" in videos? Existing Video-LLM benchmarks primarily focus on assessing object presence, neglecting relational reasoning. Consequently, it is difficult to measure whether a model truly comprehends object interactions (actions/events) in videos or merely relies on pre-trained \"memory\" of co-occurrences as biases in generating answers. In this work, we introduce a Video Spatio-Temporal Reasoning (V-STaR) benchmark to address these shortcomings. The key idea is to decompose video understanding into a Reverse Spatio-Temporal Reasoning (RSTR) task that simultaneously evaluates what objects are present, when events occur, and where they are located while capturing the underlying Chain-of-thought (CoT) logic. To support this evaluation, we construct a dataset to elicit the spatial-temporal reasoning process of Video-LLMs. It contains coarse-to-fine CoT questions generated by a semi-automated GPT-4-powered pipeline, embedding explicit reasoning chains to mimic human cognition. Experiments from 14 Video-LLMs on our V-STaR reveal significant gaps between current Video-LLMs and the needs for robust and consistent spatio-temporal reasoning."
   }
 ]
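The commit message says the file was uploaded with `huggingface_hub`. A minimal sketch of how a commit like this is typically produced: append new records to the local `data.json` (the schema matches the records in the diff), then push it with `HfApi.upload_file`. The repo id below is hypothetical, and the record is truncated for brevity.

```python
import json

# One record in data.json, following the schema visible in the diff
# (authors truncated, abstract omitted for brevity).
entry = {
    "date": "2025-03-18",
    "arxiv_id": "2503.13399",
    "title": "MicroVQA: A Multimodal Reasoning Benchmark for "
             "Microscopy-Based Scientific Research",
    "authors": ["James Burgess", "Jeffrey J Nirschl"],
    "github": "",
    "abstract": "",
}

def append_entries(path: str, new_entries: list) -> None:
    """Append records to the local data.json (the +61-line change above)."""
    with open(path) as f:
        data = json.load(f)
    data.extend(new_entries)
    with open(path, "w") as f:
        json.dump(data, f, indent=2)

def upload_data_json(path: str, repo_id: str) -> None:
    """Push the updated file to a dataset repo (requires a write token)."""
    from huggingface_hub import HfApi  # deferred: pip install huggingface_hub
    HfApi().upload_file(
        path_or_fileobj=path,
        path_in_repo="data.json",
        repo_id=repo_id,        # hypothetical, e.g. "hysts-bot/paper-feed"
        repo_type="dataset",
        commit_message="Upload data.json with huggingface_hub",
    )
```

`upload_file` creates exactly one commit per call, which matches the single-file, single-commit change shown here.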