Preview of the first 100 rows (ids 0-99 of 0-427). Column schema: `id` (int64, 0-427), `paper_ID` (string, 12 chars), `paper_title` (string, 26-144 chars), `publication_date` (date, 2024-04-27 to 2025-08-05), `update_date` (date, 2024-04-29 to 2025-08-05), `image` (image, width 1.14k-1.7k px; not rendered in this text preview).

| id | paper_ID | paper_title | publication_date | update_date | image |
|---|---|---|---|---|---|
| 0 | 2508.03404v1 | Visual Document Understanding and Question Answering: A Multi-Agent Collaboration Framework with Test-Time Scaling | 2025-08-05 | 2025-08-05 | |
| 1 | 2508.02917v1 | Following Route Instructions using Large Vision-Language Models: A Comparison between Low-level and Panoramic Action Spaces | 2025-08-04 | 2025-08-04 | |
| 2 | 2508.01943v1 | ROVER: Recursive Reasoning Over Videos with Vision-Language Models for Embodied Tasks | 2025-08-03 | 2025-08-03 | |
| 3 | 2508.01408v1 | Artificial Intelligence and Misinformation in Art: Can Vision Language Models Judge the Hand or the Machine Behind the Canvas? | 2025-08-02 | 2025-08-02 | |
| 4 | 2508.01057v1 | REACT: A Real-Time Edge-AI Based V2X Framework for Accident Avoidance in Autonomous Driving System | 2025-08-01 | 2025-08-01 | |
| 5 | 2507.23734v1 | RAGNet: Large-scale Reasoning-based Affordance Segmentation Benchmark towards General Grasping | 2025-07-31 | 2025-07-31 | |
| 6 | 2507.23134v1 | Details Matter for Indoor Open-vocabulary 3D Instance Segmentation | 2025-07-30 | 2025-07-30 | |
| 7 | 2507.22958v1 | CHECK-MAT: Checking Hand-Written Mathematical Answers for the Russian Unified State Exam | 2025-07-29 | 2025-07-29 | |
| 8 | 2507.21335v1 | Analyzing the Sensitivity of Vision Language Models in Visual Question Answering | 2025-07-28 | 2025-07-28 | |
| 9 | 2507.20409v1 | Cognitive Chain-of-Thought: Structured Multimodal Reasoning about Social Situations | 2025-07-27 | 2025-07-27 | |
| 10 | 2507.20077v1 | The Devil is in the EOS: Sequence Training for Detailed Image Captioning | 2025-07-26 | 2025-07-26 | |
| 11 | 2507.19679v1 | Efficient Learning for Product Attributes with Compact Multimodal Models | 2025-07-25 | 2025-07-25 | |
| 12 | 2507.18743v1 | SAR-TEXT: A Large-Scale SAR Image-Text Dataset Built with SAR-Narrator and Progressive Transfer Learning | 2025-07-24 | 2025-07-24 | |
| 13 | 2507.17722v1 | BetterCheck: Towards Safeguarding VLMs for Automotive Perception Systems | 2025-07-23 | 2025-07-23 | |
| 14 | 2507.17050v1 | Toward Scalable Video Narration: A Training-free Approach Using Multimodal Large Language Models | 2025-07-22 | 2025-07-22 | |
| 15 | 2507.16856v1 | SIA: Enhancing Safety via Intent Awareness for Vision-Language Models | 2025-07-21 | 2025-07-21 | |
| 16 | 2507.15025v1 | Survey of GenAI for Automotive Software Development: From Requirements to Executable Code | 2025-07-20 | 2025-07-20 | |
| 17 | 2507.15882v2 | Document Haystack: A Long Context Multimodal Image/Document Understanding Vision LLM Benchmark | 2025-07-18 | 2025-08-04 | |
| 18 | 2507.13568v2 | LoRA-Loop: Closing the Synthetic Replay Cycle for Continual VLM Learning | 2025-07-17 | 2025-07-29 | |
| 19 | 2507.12644v1 | VLMgineer: Vision Language Models as Robotic Toolsmiths | 2025-07-16 | 2025-07-16 | |
| 20 | 2507.11730v1 | Seeing the Signs: A Survey of Edge-Deployable OCR Models for Billboard Visibility Analysis | 2025-07-15 | 2025-07-15 | |
| 21 | 2507.10548v1 | EmbRACE-3K: Embodied Reasoning and Action in Complex Environments | 2025-07-14 | 2025-07-14 | |
| 22 | 2507.09615v1 | Towards Fine-Grained Adaptation of CLIP via a Self-Trained Alignment Score | 2025-07-13 | 2025-07-13 | |
| 23 | 2507.09209v1 | Uncertainty-Driven Expert Control: Enhancing the Reliability of Medical Vision-Language Models | 2025-07-12 | 2025-07-12 | |
| 24 | 2507.08982v1 | VIP: Visual Information Protection through Adversarial Attacks on Vision-Language Models | 2025-07-11 | 2025-07-11 | |
| 25 | 2507.07939v2 | SAGE: A Visual Language Model for Anomaly Detection via Fact Enhancement and Entropy-aware Alignment | 2025-07-10 | 2025-07-22 | |
| 26 | 2507.07317v2 | ADIEE: Automatic Dataset Creation and Scorer for Instruction-Guided Image Editing Evaluation | 2025-07-09 | 2025-07-28 | |
| 27 | 2507.08030v1 | A Systematic Analysis of Declining Medical Safety Messaging in Generative AI Models | 2025-07-08 | 2025-07-08 | |
| 28 | 2507.05515v2 | LEGO Co-builder: Exploring Fine-Grained Vision-Language Modeling for Multimodal LEGO Assembly Assistants | 2025-07-07 | 2025-07-23 | |
| 29 | 2507.04524v1 | VLM-TDP: VLM-guided Trajectory-conditioned Diffusion Policy for Robust Long-Horizon Manipulation | 2025-07-06 | 2025-07-06 | |
| 30 | 2507.04107v2 | VICI: VLM-Instructed Cross-view Image-localisation | 2025-07-05 | 2025-07-22 | |
| 31 | 2507.13361v1 | VLMs have Tunnel Vision: Evaluating Nonlocal Visual Reasoning in Leading VLMs | 2025-07-04 | 2025-07-04 | |
| 32 | 2507.03123v1 | Towards a Psychoanalytic Perspective on VLM Behaviour: A First-step Interpretation with Intriguing Observations | 2025-07-03 | 2025-07-03 | |
| 33 | 2507.02190v1 | cVLA: Towards Efficient Camera-Space VLAs | 2025-07-02 | 2025-07-02 | |
| 34 | 2507.02001v1 | Temporal Chain of Thought: Long-Video Understanding by Thinking in Frames | 2025-07-01 | 2025-07-01 | |
| 35 | 2507.00209v3 | SurgiSR4K: A High-Resolution Endoscopic Video Dataset for Robotic-Assisted Minimally Invasive Procedures | 2025-06-30 | 2025-07-29 | |
| 36 | 2506.23329v1 | IR3D-Bench: Evaluating Vision-Language Model Scene Understanding as Agentic Inverse Rendering | 2025-06-29 | 2025-06-29 | |
| 37 | 2507.02948v3 | DriveMRP: Enhancing Vision-Language Models with Synthetic Motion Data for Motion Risk Prediction | 2025-06-28 | 2025-07-13 | |
| 38 | 2506.22636v1 | ReCo: Reminder Composition Mitigates Hallucinations in Vision-Language Models | 2025-06-27 | 2025-06-27 | |
| 39 | 2506.21656v1 | Fine-Grained Preference Optimization Improves Spatial Reasoning in VLMs | 2025-06-26 | 2025-06-26 | |
| 40 | 2506.20832v1 | Leveraging Vision-Language Models to Select Trustworthy Super-Resolution Samples Generated by Diffusion Models | 2025-06-25 | 2025-06-25 | |
| 41 | 2507.02909v1 | Beyond Token Pruning: Operation Pruning in Vision-Language Models | 2025-06-24 | 2025-06-24 | |
| 42 | 2506.19079v1 | Reading Smiles: Proxy Bias in Foundation Models for Facial Emotion Recognition | 2025-06-23 | 2025-06-23 | |
| 43 | 2506.18140v1 | See-in-Pairs: Reference Image-Guided Comparative Vision-Language Models for Medical Diagnosis | 2025-06-22 | 2025-06-22 | |
| 44 | 2506.17811v2 | RoboMonkey: Scaling Test-Time Sampling and Verification for Vision-Language-Action Models | 2025-06-21 | 2025-07-07 | |
| 45 | 2506.17417v1 | Aha Moment Revisited: Are VLMs Truly Capable of Self Verification in Inference-time Scaling? | 2025-06-20 | 2025-06-20 | |
| 46 | 2506.16652v1 | CodeDiffuser: Attention-Enhanced Diffusion Policy via VLM-Generated Code for Instruction Ambiguity | 2025-06-19 | 2025-06-19 | |
| 47 | 2506.15871v1 | Visual symbolic mechanisms: Emergent symbol processing in vision language models | 2025-06-18 | 2025-06-18 | |
| 48 | 2506.14907v1 | PeRL: Permutation-Enhanced Reinforcement Learning for Interleaved Vision-Language Reasoning | 2025-06-17 | 2025-06-17 | |
| 49 | 2506.14035v1 | SimpleDoc: Multi-Modal Document Understanding with Dual-Cue Page Retrieval and Iterative Refinement | 2025-06-16 | 2025-06-16 | |
| 50 | 2506.12849v1 | CAPO: Reinforcing Consistent Reasoning in Medical Decision-Making | 2025-06-15 | 2025-06-15 | |
| 51 | 2506.12409v1 | Branch, or Layer? Zeroth-Order Optimization for Continual Learning of Vision-Language Models | 2025-06-14 | 2025-06-14 | |
| 52 | 2506.11976v2 | How Visual Representations Map to Language Feature Space in Multimodal LLMs | 2025-06-13 | 2025-06-22 | |
| 53 | 2506.11234v1 | Poutine: Vision-Language-Trajectory Pre-Training and Reinforcement Learning Post-Training Enable Robust End-to-End Autonomous Driving | 2025-06-12 | 2025-06-12 | |
| 54 | 2506.10202v1 | Q2E: Query-to-Event Decomposition for Zero-Shot Multilingual Text-to-Video Retrieval | 2025-06-11 | 2025-06-11 | |
| 55 | 2506.17267v1 | CF-VLM: CounterFactual Vision-Language Fine-tuning | 2025-06-10 | 2025-06-10 | |
| 56 | 2506.08227v1 | A Good CREPE needs more than just Sugar: Investigating Biases in Compositional Vision-Language Benchmarks | 2025-06-09 | 2025-06-09 | |
| 57 | 2506.07214v1 | Backdoor Attack on Vision Language Models with Stealthy Semantic Manipulation | 2025-06-08 | 2025-06-08 | |
| 58 | 2506.06884v1 | FREE: Fast and Robust Vision Language Models with Early Exits | 2025-06-07 | 2025-06-07 | |
| 59 | 2506.06506v1 | Biases Propagate in Encoder-based Vision-Language Models: A Systematic Analysis From Intrinsic Measures to Zero-shot Retrieval Outcomes | 2025-06-06 | 2025-06-06 | |
| 60 | 2506.05523v1 | MORSE-500: A Programmatically Controllable Video Benchmark to Stress-Test Multimodal Reasoning | 2025-06-05 | 2025-06-05 | |
| 61 | 2506.04308v1 | RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics | 2025-06-04 | 2025-06-04 | |
| 62 | 2506.03371v1 | Toward Reliable VLM: A Fine-Grained Benchmark and Framework for Exposure, Bias, and Inference in Korean Street Views | 2025-06-03 | 2025-06-03 | |
| 63 | 2506.01955v1 | Dual-Process Image Generation | 2025-06-02 | 2025-06-02 | |
| 64 | 2506.01203v1 | Self-Supervised Multi-View Representation Learning using Vision-Language Model for 3D/4D Facial Expression Recognition | 2025-06-01 | 2025-06-01 | |
| 65 | 2506.00613v1 | Evaluating Robot Policies in a World Model | 2025-05-31 | 2025-05-31 | |
| 66 | 2506.00238v1 | ZeShot-VQA: Zero-Shot Visual Question Answering Framework with Answer Mapping for Natural Disaster Damage Assessment | 2025-05-30 | 2025-05-30 | |
| 67 | 2505.23977v1 | VisualSphinx: Large-Scale Synthetic Vision Logic Puzzles for RL | 2025-05-29 | 2025-05-29 | |
| 68 | 2505.22946v1 | NegVQA: Can Vision Language Models Understand Negation? | 2025-05-28 | 2025-05-28 | |
| 69 | 2505.21771v1 | MMTBENCH: A Unified Benchmark for Complex Multimodal Table Reasoning | 2025-05-27 | 2025-05-27 | |
| 70 | 2505.20444v1 | HoPE: Hybrid of Position Embedding for Length Generalization in Vision-Language Models | 2025-05-26 | 2025-05-26 | |
| 71 | 2505.19358v1 | RoofNet: A Global Multimodal Dataset for Roof Material Classification | 2025-05-25 | 2025-05-25 | |
| 72 | 2505.18792v1 | On the Dual-Use Dilemma in Physical Reasoning and Force | 2025-05-24 | 2025-05-24 | |
| 73 | 2505.20326v1 | Cultural Awareness in Vision-Language Models: A Cross-Country Exploration | 2025-05-23 | 2025-05-23 | |
| 74 | 2505.16854v2 | Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models | 2025-05-22 | 2025-05-23 | |
| 75 | 2505.15966v2 | Pixel Reasoner: Incentivizing Pixel-Space Reasoning with Curiosity-Driven Reinforcement Learning | 2025-05-21 | 2025-05-26 | |
| 76 | 2506.11031v2 | Task-aligned prompting improves zero-shot detection of AI-generated images by Vision-Language Models | 2025-05-20 | 2025-06-16 | |
| 77 | 2505.13777v1 | Sat2Sound: A Unified Framework for Zero-Shot Soundscape Mapping | 2025-05-19 | 2025-05-19 | |
| 78 | 2505.12493v2 | UIShift: Enhancing VLM-based GUI Agents through Self-supervised Reinforcement Learning | 2025-05-18 | 2025-06-21 | |
| 79 | 2505.12045v1 | FIGhost: Fluorescent Ink-based Stealthy and Flexible Backdoor Attacks on Physical Traffic Sign Recognition | 2025-05-17 | 2025-05-17 | |
| 80 | 2505.11758v1 | Generalizable Vision-Language Few-Shot Adaptation with Predictive Prompts and Negative Learning | 2025-05-16 | 2025-05-16 | |
| 81 | 2505.10664v1 | CLIP Embeddings for AI-Generated Image Detection: A Few-Shot Study with Lightweight Classifier | 2025-05-15 | 2025-05-15 | |
| 82 | 2505.09498v1 | Flash-VL 2B: Optimizing Vision-Language Model Performance for Ultra-Low Latency and High Throughput | 2025-05-14 | 2025-05-14 | |
| 83 | 2505.08910v2 | Behind Maya: Building a Multilingual Vision Language Model | 2025-05-13 | 2025-05-15 | |
| 84 | 2505.08818v1 | Position: Restructuring of Categories and Implementation of Guidelines Essential for VLM Adoption in Healthcare | 2025-05-12 | 2025-05-12 | |
| 85 | 2505.07062v1 | Seed1.5-VL Technical Report | 2025-05-11 | 2025-05-11 | |
| 86 | 2505.08803v1 | Multi-modal Synthetic Data Training and Model Collapse: Insights from VLMs and Diffusion Models | 2025-05-10 | 2025-05-10 | |
| 87 | 2505.06413v1 | Natural Reflection Backdoor Attack on Vision Language Model for Autonomous Driving | 2025-05-09 | 2025-05-09 | |
| 88 | 2505.05635v1 | VR-RAG: Open-vocabulary Species Recognition with RAG-Assisted Large Multi-Modal Models | 2025-05-08 | 2025-05-08 | |
| 89 | 2505.04787v2 | Replay to Remember (R2R): An Efficient Uncertainty-driven Unsupervised Continual Learning Framework Using Generative Replay | 2025-05-07 | 2025-05-09 | |
| 90 | 2505.03703v1 | Fill the Gap: Quantifying and Reducing the Modality Gap in Image-Text Representation Learning | 2025-05-06 | 2025-05-06 | |
| 91 | 2505.02569v1 | HapticVLM: VLM-Driven Texture Recognition Aimed at Intelligent Haptic Interaction | 2025-05-05 | 2025-05-05 | |
| 92 | 2505.02056v1 | Handling Imbalanced Pseudolabels for Vision-Language Models with Concept Alignment and Confusion-Aware Calibrated Margin | 2025-05-04 | 2025-05-04 | |
| 93 | 2505.01881v3 | PhysNav-DG: A Novel Adaptive Framework for Robust VLM-Sensor Fusion in Navigation Applications | 2025-05-03 | 2025-06-13 | |
| 94 | 2505.01578v1 | Grounding Task Assistance with Multimodal Cues from a Single Demonstration | 2025-05-02 | 2025-05-02 | |
| 95 | 2505.00693v3 | Robotic Visual Instruction | 2025-05-01 | 2025-07-27 | |
| 96 | 2505.00150v1 | Detecting and Mitigating Hateful Content in Multimodal Memes with Vision-Language Models | 2025-04-30 | 2025-04-30 | |
| 97 | 2504.21186v2 | GLIP-OOD: Zero-Shot Graph OOD Detection with Graph Foundation Model | 2025-04-29 | 2025-05-17 | |
| 98 | 2504.20294v1 | mrCAD: Multimodal Refinement of Computer-aided Designs | 2025-04-28 | 2025-04-28 | |
| 99 | 2504.18856v1 | Multi-Resolution Pathology-Language Pre-training Model with Text-Guided Visual Representation | 2025-04-26 | 2025-04-26 | |
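A minimal sketch of working with these records programmatically: the preview rows can be loaded into a pandas DataFrame and filtered by `publication_date`. The three sample records below are copied from the table above; column names follow the dataset schema (`paper_ID`, `paper_title`, `publication_date`), and the date cutoff is an arbitrary illustrative choice.

```python
import pandas as pd

# Three sample rows copied from the preview table above.
rows = [
    ("2508.03404v1",
     "Visual Document Understanding and Question Answering: "
     "A Multi-Agent Collaboration Framework with Test-Time Scaling",
     "2025-08-05"),
    ("2507.15882v2",
     "Document Haystack: A Long Context Multimodal "
     "Image/Document Understanding Vision LLM Benchmark",
     "2025-07-18"),
    ("2505.07062v1", "Seed1.5-VL Technical Report", "2025-05-11"),
]
df = pd.DataFrame(rows, columns=["paper_ID", "paper_title", "publication_date"])

# Parse the date strings so they can be compared chronologically.
df["publication_date"] = pd.to_datetime(df["publication_date"])

# Select papers published on or after an arbitrary cutoff.
recent = df[df["publication_date"] >= "2025-07-01"]
print(recent["paper_ID"].tolist())  # ['2508.03404v1', '2507.15882v2']
```

The same filter extends directly to the full 428-row dataset once it is loaded into a DataFrame.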
End of preview.