text
string | source
string |
|---|---|
arXiv:2505.21966v1 [cs.HC] 28 May 2025MapStory: LLM-Powered Text-Driven Map Animation Prototyping with Human-in-the-Loop Editing Aditya Gunturu University of Calgary Canada aditya.gunturu@ucalgary.caBen Pearman University of Calgary Canada ben.pearman@ucalgary.caKeiichi Ihara University of Tsukuba Japan kihara@iplab.cs.tsukuba.ac.jp Morteza Faraji University of Calgary Canada morteza.faraji@ucalgary.caBryan Wang Adobe United States bryanw@adobe.comRubaiat Habib Kazi Adobe United States rhabib@adobe.com Ryo Suzuki University of Colorado Boulder United States ryo.suzuki@colorado.edu Figure 1: MapStory is an LLM-powered animation prototyping tool that allows users to create map-centric animations. Users can write and edit a script (a) that automatically generates an editable scene breakdowns with detailed, step-by-step animation instructions (b). The resulting animation sequence is displayed on an editable timeline alongside a properties panel (c). A researcher module processes each query to render the geospatial data on the canvas, which the user can follow up and change (d). We also demonstrate many examples of map animations created with our tool (e). ABSTRACT We introduce MapStory, an LLM-powered animation authoring tool that generates editable map animation sequences directly from nat- ural language text. Given a user-written script, MapStory leverages an agentic architecture to automatically produce a scene break- down, which decomposes the script into key animation building blocks such as camera movements, visual highlights, and animated elements. Our system includes a researcher component that accu- rately queries geospatial information by leveraging an LLM with web search, enabling the automatic extraction of relevant regions, paths, and coordinates while allowing users to edit and query for changes or additional information to refine the results. Addition- ally, users can fine-tune parameters of these blocks through an interactive timeline editor. We detail the system’s design and archi- tecture, informed by formative interviews with professional ani- mators and an analysis of 200 existing map animation videos. Our evaluation, which includes expert interviews (N=5), and a usability study (N=12), demonstrates that MapStory enables users to create map animations with ease, facilitates faster iteration, encourages creative exploration, and lowers barriers to creating map-centric stories. CCS CONCEPTS •Human-centered computing →User interface design . KEYWORDS text-to-animation, map-based storytelling, LLM-based authoring tools, AI-assisted animation, human-AI collaboration 1 INTRODUCTION Maps have long been a powerful medium for storytelling [ 65]. Since ancient and medieval times, people have used maps to tell stories [ 58,59], such as the Catalan Atlas, which depicted medieval trade routes [ 1], or Charles Minard’s visualization of Napoleon’s campaign, considered one of the earliest examples of map-based data storytelling [ 2]. Today, people bring this form of narrative to life through animated storytelling, which we call map animation . This map animation is becoming increasingly popular, as it can effectively communicate historical trajectories [ 20], migration pat- terns [ 35], or geopolitical dynamics [ 28], often attracting millions of viewers as engaging educational content. However, creating map animations remains significantly chal- lenging and time-consuming. Producing even a short animation clip requires substantial efforts and expertise with animation tools. 
In addition, creators must conduct extensive research to ensure factual accuracy throughout the process. As a result, the creation of map animations remains largely inaccessible to non-experts, and even professional animators often spend days or weeks
|
https://arxiv.org/abs/2505.21966v1
|
producing a single video, as we observed in our formative study. In this paper, we ask: what if anyone could immediately create these map-based animations simply by writing a script ? And going beyond that to make it editable at every stage. For example, imagine typing a sentence like “Very few people live in the mainland of the US. The main reason for this is the Rocky Mountains, which act as a natural barrier to the rains coming from the Pacific” (Figure 1a) and instantly generating an animated sequence that visualizes thisexplanation and is editable at the script level as well as granular animation level (Figure 1). To explore this vision, we present Map- Story, a system that generates editable map-based animations from natural language scripts following a human-in-the loop approach throughout the process. Unlike general-purpose text-to-video mod- els like Sora [ 10] or Veo2 [ 3], MapStory is designed to preserve creative control and enable fine-grained editing through a scene breakdown , aconversational researcher , and map-centric animation building blocks such as camera motion, location highlights, and animated elements presented in an editable timeline interface. Our focus is not to replace traditional workflows and tools like After Effects for animation authoring. Instead, we provide a text-driven tool that enables animators to prototype map animations from text and refine them through rapid iteration. MapStory is powered by an agentic architecture consisting of two LLM-based agents: 1) a scene breakdown agent , which parses the script into key animation building blocks, and 2) a researcher agent , which grounds the script in factual geospatial data by iden- tifying real-world regions, paths, and coordinates. For instance, in the previous Rocky Mountains example (Figure 1a), the scene breakdown agent identifies zooming actions for the US, highlight blocks for the Rockies and Midwest regions, and animated elements like moving clouds (Figure 1b). Furthermore, the research agent automatically fetches polygons to highlight these regions without requiring manual specification; the user can then provide additional modification queries (Figure 1d). These animations can be easily modified or fine-tuned through an interactive timeline interface (Figure 1c). We implement this based on OpenAI’s o1model for scene breakdown and Perplexity’s sonar-pro model for research with web-search capability. We leverage the geocoding ability of LLMs [40, 46] to generate real-world Cartesian coordinates. Our design was informed by two formative studies. First, we iden- tified animation building blocks commonly used in map animations (Figure 1e and 4) through an analysis of 200 publicly available map animation videos on YouTube. These building blocks fall into three categories: 1) camera movements , such as zoom ,translate , and orbit ; 2)visual highlights , such as area,path, and point highlights ; and 3) animated elements , such as routes ,spatial transitions , and auxiliary motion . Second, we interviewed professional animators to understand their workflows and challenges, informing our three key design goals: 1) script-driven authoring, 2) research integration, and 3) modular building blocks approach for creative controllability. These findings guided the design and development of the MapStory interface and system architecture. To evaluate our system, we
|
https://arxiv.org/abs/2505.21966v1
|
conducted three studies: 1) a us- ability study (N=12) to measure the tool’s expressiveness, creative exploration, and accessibility for novice users; 2) an expert study with professional map animators (N=5) to gather feedback on the tool’s workflows, and potential for real-world applicability; and 3) a technical evaluation on 20 randomly sampled prompts to assess the generalizability and factual correctness of automatically gener- ated animations. Our findings suggest that MapStory enables faster iteration, encourages creative exploration, and lowers barriers to creating map-centric animations for storytelling. Participants ap- preciated the effective balance between automation and control, and experts recognized its potential as a creative support tool with 2 Figure 2: Popular online animated map videos explain knowledge to a general audience in a highly engaging manner, influencing millions of viewers. a focus on rapid prototyping. On the other hand, we also identi- fied several limitations, including challenges with AI hallucination affecting factual accuracy and needing verification of produced visualizations. Finally, our contributions are as follows: (1)MapStory, a text-driven animation prototyping tool pow- ered by an LLM-based agent architecture—composed of scene breakdown and researcher agents—that enables auto- mated yet human-in-the-loop creation of map animations. (2)Insights from two formative studies: analyzing 200 map animation videos and interviewing expert animators, that guided our identification of essential animation building blocks and script-driven design principles. (3)Results from three evaluations: technical, usability, and expert feedback, that assess the effectiveness of our tool and highlight opportunities and challenges in AI-assisted map animation authoring. 2 RELATED WORK 2.1 Map-Based Visualization and Storytelling Map-based storytelling systems have leveraged interactive visual- izations, especially geographic maps, to contextualize narratives. For example, NewsViews [ 16] and GeoExplainer [ 31] automatically generate annotated thematic maps by mining text for locations and linking them to relevant data. Lundblad et al. [ 38] present a web-based geovisual analytics toolkit that integrates dynamic vi- sual analysis with interactive storytelling. These systems lower the barrier to creating map-based storytelling for understanding sta- tistical data and conducting spatial analyses. Narrative Maps [ 27] introduced a “route map” metaphor, depicting events in a story as landmarks connected by routes on a conceptual map. Prior work has extensively explored map-based visualizations for geospatial data. For example, data visualizations rendered on maps or 3D globes to convey global trends [ 49,50]. Other work integrates narratives with physical space: Location-Aware Adaptation [ 32], Story-Driven [ 7], Believable Environments [ 19], and How Space is Told [ 52] present an approach to generate location-based stories, automatically as- signing story events to contextually suitable locations. Additionally, GeoCamera [ 33] introduced an authoring tool that supports users in designing camera movements for storytelling with geographic visu- alizations. MapStory builds on these threads and extends prior map visualization and narrative authoring systems by leveraging mapanimation generation through script, allowing authors to easily produce and revise rich map-based stories. 
2.2 Animation Authoring Tools Creating animations traditionally requires significant time, effort, and technical skill, so a number of HCI systems have explored ways to lower this barrier. Early approaches introduced sketch-based animation tools like K-Sketch [ 15], Draco [ 25], and Kitty
|
https://arxiv.org/abs/2505.21966v1
|
[ 24]. For example, Draco [ 25] let illustrators bring static drawings to life by sketching motion paths and applying kinetic textures to create rich path animations and particle effects. These sketch-based interfaces greatly lowered the barrier through more natural interactions. Aug- mented Physics [ 18] introduced a CV-based pipeline to convert static physics diagrams into animated physics simulations, pro- viding a AI-assisted, selection oriented tool for creating animated physics visuals. Another line of work provides higher-level building blocks and templates to simplify animation authoring. Motion Am- plifiers [ 26] introduced a set of reusable animation primitives based on the principles of traditional animation. Similarly, Ma et al. [ 39] extended this idea to 3D animations with a layered authoring inter- face. This layered design balanced ease-of-use with expressiveness, showing how modular animation blocks and multi-level interfaces can support both novices and professionals. Other systems have ex- plored performance-based and data-driven templates: for example, Moscovich et al. [ 42] enabled recording motions via hand gestures. More recently, researchers have explored flowchart-based author- ing [ 12,70]. For instance, FlowMatic [ 70] introduced an authoring tool that uses flowcharts to create reactive behaviors of virtual objects. 2.3 LLM-Powered Authoring Tools The emergence of generative AI and large language models (LLMs) has further inspired AI-assisted authoring tools. LLMs have shown promise in supporting creative work like story generation [ 14]. In particular, recent works have explored text-to-video generation, demonstrated through numerous commercially-available tools, in- cluding Sora [ 44], Veo2 [ 17], Runway [ 48], and Pika [ 45]. These approaches basically generate videos from prompts, but they usu- ally lack iteration and fine-tuning. In contrast, the HCI community has explored more interactive and user-in-the-loop approaches. Several systems introduce visual authoring techniques, such as graph-based manipulation [ 6,23,68], drag-and-drop interfaces [ 8,41], and multi-modal prompt refine- ment [ 63], to allow users to edit and steer the generation process. 3 Several notable systems have applied these approaches to animation generation. One example is Katika [ 22], an end-to-end system for creating explainer-style motion graphics videos from a natural lan- guage script. Alternatively, Spellburst [5] uses an LLM to produce p5.js code snippets to generate creative web animations. Similarly, Keyframer [ 57] explored using GPT-3 to synthesize CSS keyframe animations from design descriptions. LogoMotion [ 37] is a recent system that helps novices animate static logo graphics through visually-grounded code synthesis. Inspired by these prior works, our system also leverages LLMs not to directly output the video content, but instead generate animation components by leveraging LLM-agent architecture for map-based animation generation. 2.4 Natural Language for Creative Tasks Natural language has been extensively explored as a medium for authoring, editing, and organizing content across various creative domains. Prior work has shown its effectiveness in video editing and storytelling with text input. For instance, CrossPower [ 66] leverages script-like natural language to organize visuals in videos, while DataParticles [ 11] supports language-oriented authoring of unit visualizations. 
Natural language is not limited to text input; recent systems explore speech as a modality for real-time creative control. Take- Toons [ 54]
|
https://arxiv.org/abs/2505.21966v1
|
leverages structured scripts and the actor’s facial poses to translate talking animations to a virtual character in real-time. DrawTalking [ 47], on the other hand, enables users to add sim- ple motion to sketched objects via speech, and RealityTalk [ 34] displays relevant graphics based on the user’s speech for creating augmented presentations. Recently, many works have focused on editing and organizing video content via natural language using LLMs. Works such as LAVE [ 60] and ExpressEdit [ 55] have explored story level editing video through free-form text. Other systems, such as ChunkyEdit [ 30], QuickCut [ 56], and B-Script [ 21] have explored natural language prompting as a way to make edits, such as trimming or organizing video clips using video and organizing transcripts. Natural language has also been used for map-based content creation. For instance, Embark [ 53] has examined parsing natural language text outlines into plans with routes rendered on maps, effectively turning notes with a structured schema into dynamic documents. Meanwhile, tools like Eviza [ 51] support natural lan- guage analytical questions and render the resulting visualization situated on a map. Finally, CrossTalk [ 67] leverages conversations in online meetings to organize, recommend, and navigate places on shared maps. Script-driven editing tools for video have proven to be promising in allowing creators to have high-level control over their editing, organization, and overall storytelling [ 29,30,56]. In- spired by these ideas, our system supports creating parameterized animations from the script, which are organizable, controllable, and editable at a high level for conveying map-centric stories. Recent work in HCI has seen the emergence of LLM-driven editing tools such as LAVE [ 60] and MoGraphGPT [ 69], which re- spectively leverage linguistic augmentation for video editing and modular LLMs for interactive scene creation. In contrast to these approaches, our architecture is novel in its explicit focus on map- centric storytelling: it employs a dedicated planner-researcher agentthat translates free-form user instructions into GeoJSON operations, including querying, addition, reduction, and generation, while be- ing promptable at every step of the architecture. The user can request changes to the scene breakdown as well as the researcher agent. This modular design not only decouples complex geographic reasoning from traditional video or scene editing pipelines but also provides enhanced transparency and user control by allowing real-time refinement of map animations. 3 FORMATIVE STUDY To better understand current practices and challenges in creating map animations, we conducted a formative study. The goal of this study was to gather insights that would inform the interface design and workflow of MapStory, grounded in the real-world experiences, needs, and challenges faced by professionals in this domain. 3.1 Method We recruited three professional map video animators (3 male, ages 21-28). Each participant had at least three years of experience (3- 6 years) producing map animation videos, which they regularly upload to YouTube. One participant does client work. We contacted them via emails, Youtube, and Instagram DMs. We conducted semi- structured interviews of approximately 1 hour over Zoom, and compensated each with 15 USD. During the interviews, we explored the experts’ typical
|
https://arxiv.org/abs/2505.21966v1
|
work- flows, the tools they use, and the challenges or needs they encounter when producing map animations. Although we acknowledge the relatively small number of experts in our study, we intentionally focused on specialized map animators rather than general video creators, thus recruitment was challenging. To complement our interviews and mitigate this limitation, we also analyzed a well- known tutorial on creating map animations, titled “How I Make My Maps”1. We also analyzed popular creator’s Patreon tutorials2. Our findings, drawn from both the expert interviews and the tutorial analysis, are described in the following sections. 3.2 Insights and Findings Current Tools, Practices, and Workflows .In our study, all ex- perts use Adobe After Effects for video editing. For map-specific graphics, they use an After Effects plugin called GeoLayers [ 4], which offers features such as importing real-world map data, cus- tomizing map styles, and integrating location-based metadata. In case of area highlights, they use GeoJSON3polygons to render the regions on the maps. Despite these capabilities, the experts noted that complex animations still demand considerable manual script- ing and keyframing. As a result, they reported that creating a single map animation typically takes from a few days to weeks, with some projects extending beyond. Moreover, the experts emphasized that After Effects usage is only one component of their overall process. Based on their comments and the tutorial, a typical map animation project generally involves four stages: 1) script phase , where the idea and narrative are determined; 2) research phase , for gathering 1https://www.youtube.com/watch?v=GsojLuJpe_0 2https://www.patreon.com/posts/behind-scenes-51811597 3https://geojson.org/ 4 factual information and extracting data; 3) animation prototyp- ing phase , where the script is translated into a rough animation; and 4) iteration phase , during which the animation is refined for publication. It All Starts with a Script .Participants emphasized that the ani- mation process always begins with a script that outlines the core narrative, pacing, and visual milestones (Figure 3). Before working in After Effects, they typically create a written plan specifying what content must appear on the map and when, in sync with the video’s narration. As P1 explained, “I generally start with an idea [...] then I’ll start writing a script, and it usually takes longer than you’d think” . The script also captures instructions for other visual elements, such as layering different geographic datasets, adding text callouts, or highlighting resource-rich regions. P1 added, “Sometimes, I’ll put directions like ‘zoom in here’ or ‘highlight this region’ in my script” . Once the script is complete, they conduct a “script breakdown” by tagging specific parts of the text with directional notes—such as zooming in or toggling boundaries—to convert the written plan into explicit animation steps. As P2 described, “When I started, I did some markup on my script like ‘move the camera here’ or ‘highlight that re- gion next’ so I know exactly how to animate it in After Effects” . Some participants also described using large language models to outline their animation sequences well before launching any software. As P2 explained, “I give ChatGPT the script and ask it to generate a
|
https://arxiv.org/abs/2505.21966v1
|
plan [...] It’s a huge time saver for planning my camera moves before I open After Effects. ” Figure 3: Workflow of a map animator. The animator first breaks down their script into manageable chunks and de- scribes the animation they plan to make for this specific item. The animation guides also include research informa- tion like population counts, medals won by a specific country or region boundary maps. Decoupled Research and Animation Process .After writing the script, creators often go through a dedicated research phase to identify accurate routes, verify historical or geographic facts, and gather supporting data. However, this research occurs outside the animation tools, so animators must manually translate their find- ings into keyframes, such as plotting accurate paths or highlighting specific regions. As P2 noted, “I do my fact-checking in Google or ChatGPT, but then I have to transfer everything by hand into After Effects. If some information like routes, highlights, or stats changes last minute, I need to rework the whole scene. ” This lack of real-time integration between research, planning, and animation leads to frequent misalignments, increased chances for errors, and sloweriteration cycles. Ultimately, these fragmented workflows hinder the creative process. P1 echoed similar challenges, noting how “I do a lot of the research with Google, but then I basically integrate it all (borders, stats, routes, facts etc.) out by hand. If I discover something’s slightly off, I have to go back and fix every keyframe. ” He also pointed out that “I’ll sometimes use ChatGPT to refine my script, but there’s no direct way to update the animation when I find new facts—I still have to do everything manually. ” This repeated back-and-forth between gathering accurate data and implementing it in the animation often results in “days” of extra work whenever details change. Need for Iteration and Creative Controllability .When proto- typing their animations, experts emphasized the importance of an iterative process. As P1 explained, “Being able to quickly tweak an animation and immediately see how it aligns with the narrative is crucial” . Therefore, rather than relying on a single, monolithic time- line, creators prefer to assemble individual building blocks, where each block is responsible for a particular sequence or visual effect. For instance, one block might highlight a geographic boundary, while another animates a specific route, which are related to the tags in the script. Since these modules are self-contained, changes in one block do not disrupt the rest of the animation. P2 added, “We can’t just fix it on the fly in the same place, so it takes forever if the story shifts suddenly.” . Similarly, P3 said “When clients request changes last moment, it can be very frustrating. I try to clarify re- quirements upfront, but clients sometimes change their requests in the middle of the process, ” , underscoring how sudden updates force rework, which takes a lot of time. Overall, professional creators pre- fer granular-level adjustments and creative controllability, rather than generating the entire video in a single pass. 3.3 Design Implications Based on these findings, we made the
|
https://arxiv.org/abs/2505.21966v1
|
following design decisions. D1: Script-Driven Authoring .Our system should directly trans- late scripts to support a script-driven workflow. Moreover, changes to textual instructions should seamlessly reflect in the animated map scenes, enabling rapid iteration. D2: Research Integration to Extract Map Data .Our system should integrate research into the animation tool, allowing users to gather, verify, and organize geospatial data and story elements in a single environment. These elements should be directly embedded into the scene to minimize manual plotting. D3: Generating Editable Animation Building Blocks .Our system should generate animations not as one-off videos but as editable animation modules. These modules should allow creators to tweak individual parameters and elements without disrupting other segments, thereby supporting fine-tuned visual adjustments and rapid iteration. 4 COMMON MAP ANIMATION TECHNIQUES To develop our map animation building blocks, we began by identi- fying the most common techniques used in existing map animation videos. To this end, we conducted a formative analysis of map- based animations, aiming to pinpoint widely adopted methods. The 5 Figure 4: Common map augmentation and animation techniques based on content analysis of 200 map-centric videos findings from this study helped shape the system design of our animation building blocks. 4.1 Method We started from collecting popular map animation videos through YouTube search. Two authors (A1 and A3) began with keyword search, such as “map animation” ,“animated maps” , and more context- specific terms like “history of [geographic location]” . Since there is no single dominant keyword to discover these map animation videos, we used YouTube’s recommendation system based on the initial subset of relevant videos to discover additional related videos. We included only videos that employed animation techniques using a map as the primary visual canvas. Static map images or videos without any animated transitions or effects were excluded. Through this process, we curated a total of 200 map animation videos. Recognizing that each video might feature multiple animation techniques across various scenes, A1 and A3 extracted screenshots from each distinct animation scene, producing a total of 1200 screen- shots. These screenshots were then categorized with thematic cod- ing. A1 initiated the coding by organizing the screenshots on a Miro board. After this initial categorization, all authors reviewed the emerging themes and iteratively refined them until reaching a consensus. 4.2 Results of Map Animation Techniques This analysis resulted in the identification of three primary cat- egories of map animation techniques, which we implemented as core components in our system (Figure 4). A complete list of the analyzed videos, along with their corresponding visual overlays, is provided in the supplementary materials.Highlights .Highlights are a primary method of emphasizing spe- cific geographic areas of interest on a map. They can be imple- mented as areas, lines, or points: •Area highlights are used to visually emphasize regions or zones by shading, coloring, or bounding the chosen area. This approach effectively focuses the viewer’s attention on a specific region of the map—such as a country, state, or historical territory—by distinguishing it from its surroundings. •Line highlights outline or trace boundaries and routes. For example, line highlights
|
https://arxiv.org/abs/2505.21966v1
|
are often used to represent journeys like troop movements and migration paths. Also, it can show borders, national boundaries, or coordinate references to provide contextual grounding or to show expansion over time. •Point highlights are small markers, symbols, or icons placed at particular coordinates. These highlights typically indicate important landmarks, cities, or data points, making it clear where the viewer should focus. Camera Controls .Camera movement plays a critical role in map- based animation, guiding the viewer’s focus and creating dynamic transitions between scenes. •Zooming is the most common camera motion, used to shift the level of detail by moving in and out of specific regions. It helps create narrative pacing and emphasizes points of interest. •Translation involves shifting or panning the camera’s view- point across the map. This technique can guide the audience from one geographic region to another, showcasing transitions between key locations or highlighting spatial relationships. •Orbiting refers to rotating the camera around a focal point or region on the map. By orbiting, animators can present multiple sides or angles of a location, adding a three-dimensional feel and 6 helping viewers grasp the scope or scale of the geography in question. These camera movements are frequently orchestrated using keyframes, which define the camera’s position and orientation at specific mo- ments in time, resulting in smooth and deliberate visual transitions. Animated Elements .Animated elements bring dynamic motion to the map, helping convey movement, changes in territory, or shifting data. •Route motion animates objects, such as ships, planes, and ar- rows, along paths to illustrate travel, trade routes, invasions, or other directional flows. This technique is especially effective for showing temporal progression or cause-and-effect across space. •Spatial transitions involve the enlargement or reduction of highlighted regions or shapes over time. This technique is typi- cally used to showcase how areas grow or shrink—such as ex- panding boundaries in historical conquests, shifting population zones, or changes in environmental conditions. •Auxiliary motion includes any additional, supportive move- ment layered onto the map for emphasis or context. Examples include arrows sweeping across a region, clusters of icons like armies or clouds moving in a swarm-like fashion, or scanning beams that pass over key areas. This motion is often employed to guide the viewer’s attention and reinforce important narrative points or data trends. 5 MAPSTORY This section presents MapStory, an interactive tool that generates map-based animations from natural language input. 5.1 Overview Built on large language models, MapStory provides the author- ing interface that translates text into modular, editable map ani- mations. MapStory takes a text script as input and outputs a se- quence of structured animation components, offering flexibility and fine-grained control. MapStory decomposes the input into discrete animation steps by identifying high-level actions, such as zoom, highlight, and route animation, which are represented as editable JSON and later automatically converted into corresponding ani- mations. These modules are visualized on a map canvas and can be interactively refined using a timeline editor and a properties panel. Users can edit the animation modules on a timeline, adjust parameters, and preview changes in real time. To
|
https://arxiv.org/abs/2505.21966v1
|
support accu- rate and fact-grounded animations, users can ask follow up queries and modifications to the system’s initial results. MapStory assists animators in resolving vague or ambiguous location references, re- trieving precise geospatial data like points, regions, or paths, based on the LLM-powered web search, that is directly integrated into the animation modules. 5.2 System Walkthrough We now walk through an example of how a user can create an animated scene using MapStory, as illustrated in Figure 5. Step 1: The User Types a Script into MapStory .The user begins by writing, in natural language, a description of the animationscene they wish to create. In Figure 5-1, the user provides the script: “Let’s follow the journey of the ceremonial mace starting its delivery sail from London, going through the Atlantic Ocean, entering North America through the St. Lawrence Seaway, and finally reaching Toronto. ” The input can be as vague or as detailed as the user prefers. MapStory is designed to interpret a wide range of user intentions from open-ended descriptions. Step 2: The System Breaks Down the Script into Animation Sequences .From the initial text input, the system’s scene break- down module generates a step-by-step breakdown of the scene into modular animation guides. In this example, the system auto- matically generates the following sequence: “a zoom to London” , “a route animation across the Atlantic” ,“a zoom to Toronto” , and “a highlight on Toronto” . Each step corresponds to a specific animation block, which can be edited, reordered, added, or removed by the user (Figure 5-2). The user can further refine each module by writ- ing contextual or descriptive text, which the system uses to better understand the animation intention. Step 3: The User Interactively Retrieves Geospatial Map Data . Next, the system helps retrieve and process relevant geospatial data via the researcher module. This includes fetching GeoJSON coordinates for locations, areas, or routes mentioned in the script. The system first automatically fetches the relevant information for each block in the scene breakdown. However, the user can follow up with additional queries or modifications. For example, when the user asks, “Where is the ceremonial mace housed in the province of Ontario in Toronto?” , the system identifies the Ontario Legislative Building as the destination and extracts its location for use in the animation (Figure 5-2). The system can also pull in additional metadata, such as historical context, the year a building was constructed, or statistics like the number of medals won by a country, allowing users to focus on their story. Step 4: The User Modifies Parameters, Adjusts Styles, and Controls Timing of Each Animation Module .With the anima- tion structure and map data in place, users can fine-tune the look and feel of each element using the properties panel. For example, they can add images, label annotations, or change the style to en- hance the animation. Each module’s visual properties, such as color, transparency, labels, and styling, can be customized. For instance, a user might highlight a country in a vivid color, adjust the opacity of a route, or
|
https://arxiv.org/abs/2505.21966v1
|
change the map style to match the narrative tone. Finally, users can interactively control the timing of each an- imation module by specifying when it appears and how long it plays, through the timeline sequencer. This allows for fine-grained control over pacing and sequencing in real time. By iteratively refining timing and transitions, users can craft a cohesive and com- pelling animated story that aligns with their intended narrative. The final result is a text-driven animation that seamlessly integrates geospatial data, visual styles, and temporal coordination. 6 IMPLEMENTATION Our system is implemented using React.js and TypeScript, com- posed of three main components: 1) a base map canvas built with Mapbox, 2) a custom timeline editor and animation sequencer, and 7 Figure 5: MapStory System Walkthrough. 3) a text-driven scene breakdown and researcher interfaces pow- ered by our large language model architecture. We use OpenAI’s o1as the primary multimodal language model to process user in- put and assist with animation generation. For geospatial queries, the system uses Perplexity’s sonar-pro model to retrieve and inte- grate geospatial data, which has access to the web. We additionally query GeoJSON data from OpenStreetMap via the Nominatim API4, enabling location lookup and metadata retrieval based on natural language queries. For map animation rendering, we use the Mapbox API within the React.js framework. The system can also dynami- cally change the map’s appearance by selecting from a variety of styles supported by Mapbox5. Figure 6: Map Stylization 6.1 Animation Sequencer The animation sequencer, which is visually represented as a time- line is implemented in React.js in the browser environment using 4https://nominatim.openstreetmap.org/ui/search.html 5https://docs.mapbox.com/api/maps/styles/Mapbox API as a base canvas. We use Fabric.js to render overlays on top of this canvas. The engine comprises of a time interval which runs every second when the user clicks the play button. Each an- imation block in the system is represented as a structured JSON with a block_name ,start_time ,end_time , and block_args , where the block arguments are specific to each block type. These blocks are typically created by the LLM agent but can be manually added by the user as well. When the animation starts, the renderer constantly checks if the time interval matches the start time of any blocks. If it does, a block specific function is called to start the animation sequence of those blocks. These block specific functions animate and visualize items on the Mapbox canvas with fabric.js overlay. The block arguments with start and end times are passed into the block function, which then performs the animation action rendered on screen. 6.2 Supported Animation Building Blocks In this section, we describe how our system implements the core animation building blocks. Each block The choice of supported animations is informed by our taxonomy analysis, and each cate- gory corresponds to the descriptions provided in Section 4. These modules serve as the fundamental components for constructing map-based animations in our system. Highlights .The Highlight module supports three types of visual emphasis: Area,Line, and Point . 8 •Area : Renders a GeoJSON polygon on the map canvas at a spec- ified
|
https://arxiv.org/abs/2505.21966v1
|
time interval. The polygon data is fetched from the Nomi- natim API or generated by the LLM. •Line : Uses an array of latitude-longitude coordinates provided by the LLM to render a polyline on the map. •Point : Plots a map marker at the specified coordinates. Users can freely attach text or images to highlights of any type, of- fering a flexible way to annotate or enhance the visual presentation of geographic features. Figure 7: Our system supports three types of highlights 1) area, 2) point, and 3) line. Camera Controls .Camera movements are handled by animating the viewpoint in Mapbox: •Zoom : Employs the built-in FlyTo function to smoothly zoom into a specified location. •Translate : Interpolates the camera’s target position from start to end coordinates, shifting the focal point without changing the zoom level. •Orbit : Moves the camera along a circular path using incremental angular steps every frame, creating a revolving view of a given focal point. Figure 8: Camera controls: 1) zoom, 2) orbit, and 3) transla- tion. Animated Elements .To create dynamic motion atop the map canvas, our system employs a custom animator built using browser intervals and Fabric.js6. By defining sprite positions over time in latitude–longitude format, we can display animation frames that smoothly transition across the map. •Animated Routes : Drawn via the Mapbox drawing API, routes are specified as an array of latitudes and longitudes, generated by the LLM (or modified by the animator). Once the path is rendered, user-uploaded sprites can be animated along that route. These sprites remain fully editable, and their position, style, or any other attribute can be adjusted. 6https://fabricjs.com/•Spatial Transitions : Implemented as polygon shape transfor- mations or translations. For example, to show a transition be- tween the polygons for North Dakota and a merged North–South Dakota shape, we interpolate these two polygons over a speci- fied time interval with Flubber7, creating a morphing animation. Simple translations move the polygon from one location to an- other. •Auxiliary Motion : Provides secondary movements such as loop- ing animations across a specified coordinate range on the map. Users can upload sprites, which the system duplicates into a cluster. The LLM determines the motion range and cluster count, enabling versatile and visually engaging secondary animations. Figure 9: Animated elements: 1) routes, 2) auxiliary motion, and 3) spatial transitions. 6.3 LLM Architecture MapStory’s agent architecture follows a human-in-the-loop ap- proach to creating, verifying, and querying information for map animations. We chose to separate our architecture into a agent workflow consisting of multiple calls as LLMs perform better when broken down into modular task due to token limitations, conti- nuity issues or clarity [ 61]. We chose the two main tasks of cre- ating map animations as found in our formative explorations, an animation guide planner and a research assistant. Our architec- ture features two primary LLM agents: a scene breakdown agent and a map animation researcher agent. We use the o1model for scene breakdown tasks, while Perplexity’s sonar-pro model serves as the research agent, helping maintain accurate outputs. The scene breakdown agent is
|
https://arxiv.org/abs/2505.21966v1
|
responsible for converting user input into a structured JSON-based scene breakdown. The research agent then examines the relevant modules from that breakdown, performs chain-of-thought reasoning [ 64] to validate their parameters, and finally invokes the appropriate function call with the confirmed arguments. Below, we detail both agent roles. Scene Breakdown Agent The scene breakdown agent is designed to create step-by-step ed- itable animation guides (D1), which follow an animator’s traditional approach of creating animation guides from their scripts. We use theo1model, which is a reasoning model. The model is encouraged to think module-by-module. Given an initial list of modules, We break down parts of the prompt in the following descriptions. Query Reasoning .Given an initial script, the scene breakdown agent is instructed to first perform query reasoning to deconstruct the initial free-form user query into actionable animation guides in plain text. The scene breakdown agent follows a planning approach 7https://www.npmjs.com/package/flubber/v/0.1.0 9 Figure 10: MapStory LLM Architecture where a task is divided into smaller manageable sub-tasks [ 62]. The prompt includes descriptions of each animation block supported by our system within three categories: highlight ,camera control , and animated element blocks. The prompt also includes heuristic con- siderations, like always preceding highlight or animated element blocks with camera blocks to focus on the visualization. The agent is then instructed to choose the most appropriate blocks from the given list of supported blocks. Once a set of appro- priate blocks is determined, we use OpenAI’s function calling API [43] so the LLM can choose the appropriate functions in our system. The agent design also considers timing as an instruction. The agent is instructed to make sure no time overlaps occur and each block is placed in succession. Camera control blocks can overlap with highlight or animated element blocks. Initial Parameter Tuning .The agent is instructed to produce both a short and a long description for each block. The short description is one sentence describing the module and its main parameter. For thelong description , the agent sets initial parameters for the anima- tion block, such as the latitude and longitude of the location, the initial query for the Nominatim API, or an initial list of coordinates for a route. This provides additional and specific context for the researcher to perform reasoning, querying, and verification. Map Animation Researcher Agent Once the scene breakdown agent has drafted an initial set of an- imation guides and set initial parameters, the researcher agent parses through each item with its unique user-specified and system- generated context to find or generate appropriate GeoJSON and map-specific coordinates. The researcher is designed to reason about the user’s intent and extract the relevant information from the web or other sources (D2) . We use Perplexity’s sonar-pro with web-search and reasoning capability to query, search, and fetch relevant GeoJSON data to satisfy the user’s specific request. A dedicated research agent is assigned to each animation module. The module specific researcher agent validates initial parameters from the scene breakdown agent and each research agent’s system prompt is tailored to a specific animation block. Additionally, each
|
https://arxiv.org/abs/2505.21966v1
|
research agent can only access one tool: a function call adhering to the OpenAI Function Calling Protocol [ 43]. This modular structure not only streamlines the system’s architecture but also minimizescontext length, given the inverse relationship between context size and instruction-following accuracy [36]. We break down the agent prompt into the following parts: Query Reasoning .Given a scene breakdown object with user query, long and short description, initial parameters, and overall scene breakdown, the researcher module is first instructed to reason about user intent, disambiguating queries into map-compatible or queryable terms. For example, when asked to highlight “ Andhra Pradesh before 2014 ,” the researcher is instructed to first verify if this is a Nominatim-compatible query. In this case, it is not, as the user is asking for region boundaries in the past. GeoJSON Retrival .Once query reasoning is complete, the agent moves on to the GeoJSON retrieval step. We use the OpenStreetMap Nominatim API to query for various types of GeoJSONs, like poly- gons, multipolygons, lines, or points. The arguments taken by the API are a compatible query string and optional arguments like fil- ters for a specific country. For example, the “Avon River” can be found in both the UK and Canada. We can construct a query to dis- ambiguate between the two by replacing the bounding coordinates (Example query with the UK as filter ). The researcher is instructed to choose an appropriate GeoJSON retrival action . 6.3.1 GeoJSON Actions .We define actions as a supplementary query to a specialized agent which performs a modify action on the fetched GeoJSON for a given query. Our system supports the following modifications on fetched GeoJSONs: •Query : If a query is determined to simply be Nominatim-compatible, the agent constructs a compatible query and fetches the GeoJ- SON. For example, querying “ Andhra Pradesh ” results in a poly- gon of the state in India in the present day. •Addition : The addition action combines multiple boundaries into a single boundary. The agent first returns a string array of all the Nominatim-compatible queries that satisfy a request. For example, for the request “Andhra Pradesh in 2014,” the Addition agent returns boundaries of all the regions comprising the state in 2014, namely, “Telangana, India” + “Andhra Pradesh, India” . •Reduction : The reduction action modifies a GeoJSON to remove any points not relevant to the query. For example, given a Geo- JSON for the Rocky Mountain area and asked to “remove the 10 Canadian part, ” the agent removes all the points that are not part of the US. •Generation : For simple requests that are not queryable, such as representative flight or sea routes (which the agent does not have a database to query), the agent generates an estimated GeoJSON with key waypoints. Parameter Selection .The agent is then instructed to select rele- vant module parameters. For example, if a highlight module requires text to display the number of gold medals won by a country or if a route color needs to be changed to contrast with the map style. Finally, the
|
https://arxiv.org/abs/2505.21966v1
|
response (which relies on the function calling API) is parsed into a compatible JSON output with the animation func- tions available in the system. This is then rendered as an editable timeline. Human-in-the-Loop .Once the agent generates an initial scene breakdown, the user can edit, rearrange, or delete any part and regenerate a scene breakdown. For example, the user can change the order the blocks appear in, add or delete blocks or edit short and long descriptions of individual map animation blocks to give additional context. Similarly, by accessing the researcher tab when a specific item in the timeline or scene breakdown is selected, the user can interface with the researcher through a chat interface and ask for specific questions or modifications ( D3). 7 TECHNICAL EVALUATION To evaluate our system’s performance, we conducted a technical evaluation. 7.1 Method Dataset .We used 20 map-based storytelling prompts generated by GPT 4.5 to evaluate our system. These prompts were generated using 5 user-generated prompts as input, with additional context about our system such as “In 1492, Christopher Columbus set sail from Palos, Spain, and after weeks at sea, he arrived in the Bahamas, forever altering world history. ” . We then tested these 20 prompts with four different LLM models: 1) GPT-4.5, 2) o1, 3) GPT-4o, and 4) GPT-3.5 to compare the performance. We did not include a baseline because we simply tested our system with text-to-video generation with Sora but with a simple test of 5 prompts, we immediately realized that this approach failed completely, (e.g. generate completely wrong video, generate completely inaccurate information, etc). Therefore, we evaluated our system across different LLM models. Procedure .We entered each prompt into the Scene Breakdown agent and recorded both the generated modules and the time it took to generate them. After that, we called the Researcher agent to process the modules, with its execution time also measured. After the Researcher completed processing, we added the modules to the timeline and evaluate them. We then graded each module on its accuracy, with criteria varying depending on the module type. Camera control modules must have 1) the correct location, and 2) the correct zoom level. Highlight modules must have accurate boundaries, points and lines. Animated element modules must be within 1 km of the real routes, as well as being physically plausible (e.g. train routes musn’t go through bodies of water). We evaluatedaccuracy based on our judgment as no standardized or objective ground truth was available for these types of outputs. Figure 11: Results of accuracy across four LLM models. 7.2 Results Figure 12: Results of processing time (in seconds) across four LLM models. Figure 11 presents the accuracy results across the four evaluated models. As expected, we observed a clear improvement in system performance with each successive model. For context, GPT-3.5 is the oldest model, followed by GPT-4o, then o1, then GPT-4.5. These results suggest that the capabilities of the underlying LLM play a substantial role in determining the overall performance of our system. As more sophisticated models become available, we expect corresponding improvements in our
|
https://arxiv.org/abs/2505.21966v1
|
system’s capabilities. Figure 12 shows the average processing time (in seconds) for each model used in our system. As expected, GPT-4.5 took the longest to produce an output, followed by o1, 4o, and 3.5 respectively. Interestingly, GPT-4.5 took less time to produce an output for the scene breakdown, this is presumably due to GPT-4.5 not being a reasoning model. As the efficacy of the models increases so does the time. Failure cases can be attributed to the wrong or inaccurate geocoding by the LLM or complex and historical region queries 11 not available for the LLM to fetch from the databases (even with addition, reduction functions). 8 USER EVALUATION To understand how MapStory supports novice users in creating map animations, we conducted a usability study with twelve participants. We aimed to evaluate the tool’s expressiveness, learnability, support for creative exploration, and factual correctness. 8.1 Method Participants .We recruited 12 participants (5 male, 7 female; ages 21–30, mean=25) via university mailing lists and snowballing sam- pling. All participants had minimal or no prior experience with animation tools. Tasks .Participants were asked to complete two animation tasks: •Task A (Guided) : Create a short animation based on a pre- written script about the migration of monarch butterflies from Canada to Mexico. •Task B (Open-ended) : Create an animation based on a self- authored narrative (e.g., historical event, personal story, or travel journey). Procedure .Each session lasted approximately 45 minutes and was conducted in person. Participants received a compensation of $15 USD. Before starting the tasks, participants were given a brief tu- torial explaining the study’s purpose and the functionality of our system. They were encouraged to ask questions during the tutorial and Task A to ensure a clear understanding. However, to minimize external influence on their creativity, participants were not allowed to ask questions during Task B. For each task, participants were allowed to work on it until they were satisfied and were encouraged to think aloud during the study. To evaluate system accuracy, the experimenter recorded the number of animation modules created during each task. Afterward, participants were asked which mod- ules they believed accurately reflected their prompts, allowing us to compute a subjective accuracy rate. After completing each task, par- ticipants filled out a questionnaire covering three aspects: 1) system usability, based on the System Usability Scale (SUS) [ 9], 2) creativity support, based on the Creativity Support Index (CSI) [ 13], and 3) usefulness for each system feature, based on our 7-point Likert scale questionnaire. After both tasks, we conducted a semi-structured interviews to gain in-depth feedback on their experience. 8.2 Results Accuracy, Usability, and Usefulness .Overall, the system gener- ated a total of 114 animation modules during our user evaluations. Of those, participants rated 91 of them to be accurate (79.82%). Task Awas found to be 89.47% accurate (34 / 38) and the failure cases were the result of users not being satisfied with the start and end position, which they then prompted the system again to change (After the user changes, the Task A success rate was 100%). Task
|
https://arxiv.org/abs/2505.21966v1
|
B was found to be 75% accurate (57 / 76). Failures were almost always a result of the route or the highlight modules having imprecise coordinates, albeit they were typically within 1km of the correct result. For example, when users asks to zoom into “a random seven eleven in japan“ the system returned a highlight around 1km away Figure 13: Results of the questionnaire for each system fea- ture. The Likert scale ranges from 1 (strongly disagree) to 7 (strongly agree). Outliers (represented by circles) were de- fined as values that lie beyond 1.5 times the inter-quartile range (IQR) from the quartiles. from a seven eleven. Another interesting find was for compound geojson queries like “the roman empire“ , the system returned the correct regions but some regions were fully highlighted as returned by the present day boundaries, whereas they should be represented with partial borders. In terms of usability, the average System Us- ability Scale (SUS) score was 83.7 (SD=6.9), indicating high usability. As shown in Figure 13, system features were rated positively in terms of both accuracy and usability. This indicates that participants found the system features to be reliable and easy to use. In terms of perceived usefulness, the average scores were 5.75 (SD=1.29) for the scripting phase, 5.33 (SD=1.15) for the research phase, and 5.83 (SD=0.83) for the creation phase, demonstrating high perceived usefulness across all phases (Figure 14). Figure 14: Results of the perceived usefulness of each work phase. The Likert scale ranges from 1 (not useful at all) to 7 (extremely useful). Outliers (represented by circles) were defined as values that lie beyond 1.5 times the inter-quartile range (IQR) from the quartiles. Creativity support by providing scaffolding .Figure 15 presents the results of the Creativity Support Index (CSI). As shown, partic- ipants generally felt that MapStory supported their creativity in addition to enabling them to create map animations, a task which they could not see themselves doing before. This perception was supported by interview data, which suggested that the system low- ered initial barriers to entry by providing them with an initial “guide“ or a starting point. "In the beginning, it’s probably hard for me be- cause there’s no starting point, and it’s hard to imagine. Everything is like drawing a picture without a draft. But once you have the draft, 12 Figure 15: Results of the Creativity Support Index (CSI). The Likert scale ranges from 1 (strongly disagree) to 7 (strongly agree). Each bar in the chart represents the number of partic- ipants who selected each corresponding response value for the CSI items. everything seemed pretty clear, you know, like the immediate next step. " (P9) Additionally, by decomposing the animation process into manageable steps, the system enabled more efficient prototyping and iteration, which participants found particularly beneficial. "A prototyping tool for different shots would be really interesting, and very quick to do actually. " (P6) These findings suggest that MapStory not only makes animation creation more accessible by reducing the cognitive load of starting from scratch but also provides structural support for sustained creative
|
https://arxiv.org/abs/2505.21966v1
|
exploration and refinement. Emerging Use Cases .Participants envisioned novel and often informal use scenarios that went beyond our initial expectations, particularly highlighting casual and presentation-oriented applica- tions outside educational or learning video creation. Notably, 8 out of 12 participants proposed using the tool to plan and share travel itineraries in a visually engaging way. "If I’m going on a trip or a vacation, I want to share with others, in a fun way, the route I’ll be taking and where we’re going. It sounds a lot more exciting than just giving them an Excel sheet with all the details" (P10). "If you were a travel agent, you could use it to visually present your proposed plan to clients" (P6). Also, participants proposed personal documentation scenarios. For instance, one participant mentioned using the tool to visually record their visits to restaurants around the city. "I can also use that to document all the restaurants that I’ve been to around the city" (P10). Beyond video creation, the tool was seen as a potential resource for real-time classroom presentations. "This would be a great educational tool. Like, if I were a teacher, I could demonstrate it to the whole class" (P9). These unexpected use cases highlight the flexibility of the tool and its potential to support a wide range of applications beyond our original scope. 9 EXPERT INTERVIEWS To assess MapStory’s value and feasibility in professional workflows and gain insights into potential use, we conducted expert interviews with six professional map animators (N=6), each with 3–15 years of experience using tools like Adobe After Effects and Davinci Resolve. 9.1 Method We recruited five professional map video creators (5 male, ages 22- 38). Two of these experts had also taken part in our earlier formativestudy. Each participant had at least two years of experience (2-10 years) producing map animation videos. Three out of the five ex- perts regularly upload videos to YouTube, while the remaining two primarily do client work. Collectively, their Youtube channels have an average of 120k subscribers (5k-300k as of March 2025). We con- tacted them via Youtube or Instagram, conducted semi-structured interviews of approximately 60 minutes over Zoom, and compen- sated each with $50 USD. Experts were given a demo of MapStory and asked to use the system to prototype animations based on a familiar script. We followed up with questions to understand how MapStory could potentially integrate into their workflows. 9.2 Insights and Findings Experts found script-driven animation intuitive and were satisfied with animation controllability, though several noted a desire for even more fine grained control available in professional software. We condense insights from these interviews below. Scene breakdown as the spine of story .Participants found that the scene breakdown not only provides animation guides but also acts as a medium of story exploration. Animators felt that they could stay in “script mode” , using short text edits to restructure the whole animation instead of dragging keyframes or rewriting code, while visualizing how the story is presented in real -time. E1 explained that the scene breakdown provides an initial guided path,
|
https://arxiv.org/abs/2505.21966v1
|
which “is effectively the basics of a story” and keeps narrative flow intact, and he can directly control and change the narrative: “The auto -generated ‘scene breakdown’ feels like way-points in your story that you can tweak in plain language . ”E5 finds the scene breakdown to be a medium of updating story beats first for edit- ing the visuals. He explains that he traditionally wrote scripts in “Then/but/therefore” chunks and then assigns timecodes to make an- imations in After Effects. But now that he can directly make/write those beats, he can spend his creative time adjusting nuance in- stead of laying every keyframe. He says, “[MapStory] generates those things, the clips, the timeline items, and then you can manipulate it later to fine-tune what you’re doing; that’s the most beneficial. ” Animation by means of conversation .Experts described the chat-based Researcher as a dialogue that lets them refine map an- imations and data with text instead of keyframes. E1 explained that when a border is not up to their linking they can “just ask for information and not just animation,“ saving them from “man- ually rearranging keyframes or remaking the scene by hand“ . E5 echoed the sentiment, saying “AI can take out the grunt work, but then people can tweak the results to a really high degree afterwards“ . E3 quantified the gain: being able to chat -edit a mock -up “could save me like 40% of the time”, and, when modern polygons include unwanted islands, “being able to iteratively change it for my lik- ing is the best part”. Across interviews, conversational animate instructions—whether querying historical borders, trimming over- seas territories, or requesting fresh coordinates or information, were praised as a faster, more flexible alternative to the traditional cycle of swapping between Wikipedia, Geo editing tools like QGIS and After Effects. Iterative and editable animation authoring .Creators repeat- edly framed MapStory as a way to rough -cut first, polish later , using 13 fast textual edits to explore ideas and postpone heavy key -framing until they are sure of the direction. E1 calls the system a quick tool that still gives high quality output—good enough to visualize and use for final polish—and estimates it can shave off about two days on a typical five -day timeline because the early prototyping phase no longer stalls on full -resolution renders. The creators still pre- ferred to polish the final video to their liking with advanced editing tools that offer precise control. E5 values the fact that MapStory cangenerate the timeline items and then manipulate them , which he says is the most beneficial part of his workflow: MapStory lets him iterate on narrative story beats before committing to detailed animation “The ability to manipulate is paramount“ . E1 notes that polygon edits are now more approachable and iterative instead of time consuming of GeoJSON changes: It does these computations by itself . . . searches for everything and compiles it for you. Even though the system wasn’t perfect, he still appreciated ability to keep prompting or manually make changes “I really
|
https://arxiv.org/abs/2505.21966v1
|
like how you can ask for information and not just animation“ . E2 contrasts this with his current After Effects practice where, once you make it, you can’t change it. . . But [with MapStory] I can change it, but it will be time-consuming . In MapStory he simply deletes a label and replaces a location highlight and the highlight and camera jumps from India to France, enabling many quick versions: In 20 or 10 minutes I can do this kind of base animation. . . then later move it into After Effects. Speed also opens space for parallel ideation. E4 keeps several prototypes open at once: You can do like three videos at the same time . . . it handles all animation by itself so you don’t have to worry about it. Because the Researcher answers questions and returns edits, E2 expects to brainstorm more adventurous storylines, noting that the tool itself becomes a live fact -checking partner. But he also mentions that this would only benefit him during time crunches, and that he would want more precise control (like After Effects) for other projects “It can’t replace After Effects. . . your product’s job is mapping, idea tool for a client demo.“ . Because each animation block is stored as editable prose, creators immediately thought about portability: E3 asked, Could you save the specific timeline as a file and then import it later? , treating the breakdown like an edit-decision list that can hop between tools, while E2 imagined the same promptable timeline living inside Adobe’s flagship: If we integrate this with After Effects, then I would love [to use it] . . . it gives me reassessing thing. Text-driven prototyping lowers the cost of experimentation, lets animators iterate early and often, and reserves advanced tools for the final layer of visual details. Limitations and caution .Several experts flagged the risk that an LLM -powered “Researcher” might invent or distort facts. E4 warned that prompts can return “wrong information or . . . hallu- cinations . . . I will double -check“ before publishing anything. Yet most participants stressed that fact -checking is not new creators have always had to validate sources. E5 summed it up bluntly: “You can’t trust Wikipedia . . . you’ve gotta do a lot of research to make sure things are historically accurate, a lot of stuff out there is just wrong. “ . Likewise, E3 treats AI outputs like any other draft, noting that he needs a “general idea” so he can spot errors and avoid “ac- cidentally including the AI mistake“ in his maps. Participants did appreciate the controllability our system provides. Moreover, theexperts explained that they would not find themselves using Map- Story as their final production tool but more for initial exploration, drafting, or pre-production for client demo or storyboarding. They prefer having complete creative control over their animations with advanced software which support keyframing like After Effects or Davinci Resolve. 10 FUTURE WORK Modular extensibility of AI -assisted, text -driven animation prototyping .Our formative explorations have yielded a corpus
|
https://arxiv.org/abs/2505.21966v1
|
of map -centric animation blocks; however, our modular system can be extended with new, custom blocks tailored to animators’ niche tastes. Participants envision creating their own stylistic mod- ules—such as filters, transitions, or bespoke map -animation blocks, while some imagine importing a hand -designed basemap and letting the engine treat it as another selectable style, “Imagine you can im- port . . . an image, and then it would do the same thing“ (E3). Because each block is merely metadata plus a renderer hook, contributors can publish plug -ins (e.g., “smear -frame transition,” “parchment basemap”) to a shared registry. Experts also envision a community forming around trading map -specific animation blocks, basemaps, transitions, and filters, similar to in-game items. Geocoding Ability of LLMs .Our system is constrained by how accurately LLMs can map textual descriptions onto geographic co- ordinates and generate valid GeoJSON (albeit geocoding accuracy improvement was never our goal). Although our functions give animators interactive, iterative control, reliable geocoding remains challenging. For example, when visualizing the Roman Empire we can highlight its key regions, but we must rely on present -day boundaries; portions of the historical empire correspond to only parts of modern countries—a level of detail our system cannot yet represent (users can, however, draw a rough outline manually). Despite this limitation, the text -driven animation workflow itself remains unchanged. We also acknowledge that our evaluation was restricted by API rate limits, which limited the number of prompts we could test. Future work should therefore conduct a more ex- tensive assessment focused on geocoding to explore the limits and capabilities of LLMs for any map -based visualization, not just ani- mations. Our agent design is also limited by a design features two agents. we chose this design based on our formative evaluations and time constraints of animation authoring. However, future ar- chitectures could explore agents interacting over multiple rounds to keep refining the geocoded result of the previous agent until a verification agent is satisfied. Beyond Maps: Broader implications of text or conversation driven animation workflows .The interviews suggest that Map- Story’s text -driven “scene breakdown + Researcher” model could serve as a control layer for broader animations, not just maps. Once the timeline itself is conversational, the Researcher can query far more than geographic data; E5 already envisions layering non -map treatments— “You can also add transitions or filters to the maps to give it more [board -game] kind of a thing. “ —and the same mechanism could contextually search a studio’s library of transitions, LUTs, or particle systems, insert the chosen effect blocks into the timeline (additionally add transitions or filters based on user’s intention), 14 and expose their parameters for manual tweaking at every stage. This can create a reusable, domain -agnostic interaction that can accelerate storyboarding and live client iteration for any genre of animation. 11 CONCLUSION This paper introduced MapStory, a text-driven map animation pro- totyping tool that enables users to create map-based animations through natural language editable at every stage. We contributed to the design space and LLM architecture of our system. Formative
|
https://arxiv.org/abs/2505.21966v1
|
evaluations detailed workflow of map animation creation with a design space of map-animation blocks. MapStory introduces mod- ular animation blocks, integrated geospatial querying researcher, and tight coupling between script and animation through a step-by- step scene breakdown. These features were implemented through novel architecture that leverages two LLM agents, one for scene breakdown and another one for research agent. Through techni- cal evaluations, usability studies, and expert interviews, we found that MapStory supports fast iteration, encourages creative explo- ration, and aligns well with existing professional workflows. On the other hand, our findings also revealed several limitations, in- cluding occasional inaccuracies from LLM-generated outputs and needing verification and finer grained control. Future work should address these limitations to enable more accessible, expressive, and verifiable map animations. REFERENCES [1] [n.d.]. Catalan Atlas. https://en.wikipedia.org/wiki/Catalan_Atlas [2] [n.d.]. Charles Joseph Minard. https://en.wikipedia.org/wiki/Charles_Joseph_ Minard#The_map_of_Napoleon’s_Russian_campaign [3] [n.d.]. Veo2. https://deepmind.google/technologies/veo/veo-2/ [4] AEScripts. 2025. GeoLayers: The Ultimate Map and Data Visualization Plugin for After Effects. https://aescripts.com/geolayers/ Accessed: 2025-03-12. [5] Tyler Angert, Miroslav Suzara, Jenny Han, Christopher Pondoc, and Hariharan Subramonyam. 2023. Spellburst: A node-based interface for exploratory creative coding with natural language prompts. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology . 1–22. [6] Ian Arawjo, Priyan Vaithilingam, Martin Wattenberg, and Elena Glassman. 2023. ChainForge: An open-source visual programming environment for prompt en- gineering. In Adjunct proceedings of the 36th annual ACM symposium on user interface software and technology . 1–3. [7] Jan Henry Belz, Lina Madlin Weilke, Anton Winter, Philipp Hallgarten, Enrico Rukzio, and Tobias Grosse-Puppendahl. 2024. Story-Driven: Exploring the Im- pact of Providing Real-time Context Information on Automated Storytelling. In Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology . 1–15. [8]Stephen Brade, Bryan Wang, Mauricio Sousa, Sageev Oore, and Tovi Gross- man. 2023. Promptify: Text-to-image generation through interactive prompt exploration with large language models. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology . 1–14. [9] John Brooke et al .1996. SUS-A quick and dirty usability scale. Usability evaluation in industry 189, 194 (1996), 4–7. [10] Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, et al .2024. Video generation models as world simulators. OpenAI Blog 1 (2024), 8. [11] Yining Cao, Jane L E, Zhutian Chen, and Haijun Xia. 2023. DataParticles: Block- based and language-oriented authoring of animated unit visualizations. In Pro- ceedings of the 2023 CHI Conference on Human Factors in Computing Systems . 1–15. [12] Mengyu Chen, Marko Peljhan, and Misha Sra. 2021. Entanglevr: A visual pro- gramming interface for virtual reality interactive scene generation. In Proceedings of the 27th ACM symposium on virtual reality software and technology . 1–6. [13] Erin Cherry and Celine Latulipe. 2014. Quantifying the creativity support of digital tools through the creativity support index. 
ACM Transactions on Computer- Human Interaction (TOCHI) 21, 4 (2014), 1–25. [14] John Joon Young Chung, Wooseok Kim, Kang Min Yoo, Hwaran Lee, Eytan Adar, and Minsuk Chang. 2022. TaleBrush: Sketching stories with generative pretrained language models. In Proceedings of
|
https://arxiv.org/abs/2505.21966v1
|
the 2022 CHI Conference on Human Factors in Computing Systems . 1–19.[15] Richard C Davis, Brien Colwell, and James A Landay. 2008. K-sketch: a’kinetic’sketch pad for novice animators. In Proceedings of the SIGCHI Con- ference on Human Factors in Computing Systems . 413–422. [16] Tong Gao, Jessica R Hullman, Eytan Adar, Brent Hecht, and Nicholas Diakopoulos. 2014. Newsviews: an automated pipeline for creating custom geovisualizations for news. In Proceedings of the SIGCHI conference on human factors in computing systems . 3005–3014. [17] Google. 2025. Veo 2 - Google DeepMind. https://deepmind.google/technologies/ veo/veo-2/ [Online; accessed 2025-03-31]. [18] Aditya Gunturu, Yi Wen, Nandi Zhang, Jarin Thundathil, Rubaiat Habib Kazi, and Ryo Suzuki. 2024. Augmented Physics: Creating Interactive and Embedded Physics Simulations from Static Textbook Diagrams. In Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology . 1–12. [19] Anton Gustafsson, John Bichard, Liselott Brunnberg, Oskar Juhlin, and Marco Combetto. 2006. Believable environments: generating interactive storytelling in vast location-based pervasive games. In Proceedings of the 2006 ACM SIGCHI international conference on Advances in computer entertainment technology . 24–es. [20] Geo History. 2021. History of the Major Trade Routes - Summary on a Map. https://www.youtube.com/watch?v=Q7xp1-VvtZ0 Accessed: 2025-01-06. [21] Bernd Huber, Hijung Valentina Shin, Bryan Russell, Oliver Wang, and Gautham J Mysore. 2019. B-script: Transcript-based b-roll video editing with recommenda- tions. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems . 1–11. [22] Amir Jahanlou and Parmit K Chilana. 2022. Katika: An end-to-end system for authoring amateur explainer motion graphics videos. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems . 1–14. [23] Peiling Jiang, Jude Rayan, Steven P Dow, and Haijun Xia. 2023. Graphologue: Exploring large language model responses with interactive diagrams. In Proceed- ings of the 36th annual ACM symposium on user interface software and technology . 1–20. [24] Rubaiat Habib Kazi, Fanny Chevalier, Tovi Grossman, and George Fitzmaurice. 2014. Kitty: sketching dynamic and interactive illustrations. In Proceedings of the 27th annual ACM symposium on User interface software and technology . 395–405. [25] Rubaiat Habib Kazi, Fanny Chevalier, Tovi Grossman, Shengdong Zhao, and George Fitzmaurice. 2014. Draco: Bringing life to illustrations. In CHI’14 Extended Abstracts on Human Factors in Computing Systems . 579–582. [26] Rubaiat Habib Kazi, Tovi Grossman, Nobuyuki Umetani, and George Fitzmaurice. 2016. Motion amplifiers: sketching dynamic illustrations using the principles of 2D animation. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems . 4599–4609. [27] Brian Felipe Keith Norambuena and Tanushree Mitra. 2021. Narrative maps: An algorithmic approach to represent and extract information narratives. Proceedings of the ACM on Human-Computer Interaction 4, CSCW3 (2021), 1–33. [28] Knowledgia. 2022. How did The Silk Road Actually Work? https://www.youtube. com/watch?v=J-pfeFbssMw Accessed: 2025-01-06. [29] Mackenzie Leake, Abe Davis, Anh Truong, and Maneesh Agrawala. 2017. Com- putational video editing for dialogue-driven scenes. ACM Trans. Graph. 36, 4 (2017), 130–1. [30] Mackenzie Leake and Wilmot Li. 2024. ChunkyEdit: Text-first video interview editing via chunking. 
In Proceedings of the CHI Conference on Human Factors in Computing Systems . 1–16. [31] Fan Lei, Yuxin Ma,
|
https://arxiv.org/abs/2505.21966v1
|
A Stewart Fotheringham, Elizabeth A Mack, Ziqi Li, Mehak Sachdeva, Sarah Bardin, and Ross Maciejewski. 2023. GeoExplainer: A visual analytics framework for spatial modeling contextualization and report gener- ation. IEEE Transactions on Visualization and Computer Graphics 30, 1 (2023), 1391–1401. [32] Wanwan Li, Changyang Li, Minyoung Kim, Haikun Huang, and Lap-Fai Yu. 2023. Location-aware adaptation of augmented reality narratives. In Proceedings of the 2023 CHI conference on human factors in computing systems . 1–15. [33] Wenchao Li, Zhan Wang, Yun Wang, Di Weng, Liwenhan Xie, Siming Chen, Haidong Zhang, and Huamin Qu. 2023. GeoCamera: Telling stories in geographic visualizations with camera movements. In Proceedings of the 2023 CHI conference on human factors in computing systems . 1–15. [34] Jian Liao, Adnan Karim, Shivesh Singh Jadon, Rubaiat Habib Kazi, and Ryo Suzuki. 2022. Realitytalk: Real-time speech-driven augmented presentation for ar live storytelling. In Proceedings of the 35th annual ACM symposium on user interface software and technology . 1–12. [35] Real life Core. 2019. Why 50% of Canadians Live South of This Line. https: //www.youtube.com/watch?v=DFJAgb7dn78 Accessed: 2025-01-06. [36] Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2023. Lost in the Middle: How Language Models Use Long Contexts. arXiv:2307.03172 [cs.CL] https://arxiv.org/abs/2307.03172 [37] Vivian Liu, Rubaiat Habib Kazi, Li-Yi Wei, Matthew Fisher, Timothy Langlois, Seth Walker, and Lydia Chilton. 2024. LogoMotion: Visually Grounded Code Gen- eration for Content-Aware Animation. arXiv preprint arXiv:2405.07065 (2024). [38] Patrik Lundblad and Mikael Jern. 2013. Geovisual analytics and storytelling using HTML5. In 2013 17th International Conference on Information Visualisation . 15 IEEE, 263–271. [39] Jiaju Ma, Li-Yi Wei, and Rubaiat Habib Kazi. 2022. A layered authoring tool for stylized 3d animations. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems . 1–14. [40] Rohin Manvi, Samar Khanna, Gengchen Mai, Marshall Burke, David Lobell, and Stefano Ermon. 2023. Geollm: Extracting geospatial knowledge from large language models. arXiv preprint arXiv:2310.06213 (2023). [41] Damien Masson, Sylvain Malacria, Géry Casiez, and Daniel Vogel. 2024. Direct- gpt: A direct manipulation interface to interact with large language models. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems . 1–16. [42] Tomer Moscovich and John F Hughes. 2001. Animation sketching: An approach to accessible animation. Unpublished Master’s Thesis, CS Department, Brown University 5 (2001). [43] OpenAI. 2023. Function Calling and Other API Updates. https://openai.com/ blog/functioncalling-and-other-api-updates. Accessed: 2023-XX-XX. [44] OpenAI. 2025. Sora. https://openai.com/sora/. [Online; accessed 28-March-2025]. [45] Pika. [n.d.]. Pika. https://pika.art/ [Online; accessed 2025-03-31]. [46] Jonathan Roberts, Timo Lüddecke, Sowmen Das, Kai Han, and Samuel Albanie. 2023. GPT4GEO: How a Language Model Sees the World’s Geography. arXiv preprint arXiv:2306.00020 (2023). [47] Karl Toby Rosenberg, Rubaiat Habib Kazi, Li-Yi Wei, Haijun Xia, and Ken Perlin. 2024. DrawTalking: Building Interactive Worlds by Sketching and Speaking. In Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology . 1–25. [48] Inc. Runway AI. [n.d.]. Runway | Tools for human imagination. 
https:// runwayml.com/ [Online; accessed 2025-03-31]. [49] Kadek Ananta Satriadi, Barrett Ens, Tobias Czauderna, Maxime Cordeil, and Bernhard Jenny. 2021. Quantitative data visualisation on
|
https://arxiv.org/abs/2505.21966v1
|
virtual globes. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems . 1–14. [50] Kadek Ananta Satriadi, Jim Smiley, Barrett Ens, Maxime Cordeil, Tobias Czaud- erna, Benjamin Lee, Ying Yang, Tim Dwyer, and Bernhard Jenny. 2022. Tangible globes for data visualisation in augmented reality. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems . 1–16. [51] Vidya Setlur, Sarah E Battersby, Melanie Tory, Rich Gossweiler, and Angel X Chang. 2016. Eviza: A natural language interface for visual analysis. In Pro- ceedings of the 29th annual symposium on user interface software and technology . 365–377. [52] Jae-Eun Shin and Woontack Woo. 2023. How space is told: linking trajectory, narrative, and intent in augmented reality storytelling for cultural heritage sites. InProceedings of the 2023 CHI Conference on Human Factors in Computing Systems . 1–14. [53] Paul Sonnentag, Alexander Obenauer, and Geoffrey Litt. 2023. Embark: Dynamic Documents as Personal Software. In LIVE 2023: The Ninth Workshop on Live Programming . Cascais, Portugal. https://www.inkandswitch.com/embark/ [54] Hariharan Subramonyam, Wilmot Li, Eytan Adar, and Mira Dontcheva. 2018. Taketoons: Script-driven performance animation. In Proceedings of the 31st An- nual ACM Symposium on User Interface Software and Technology . 663–674.[55] Bekzat Tilekbay, Saelyne Yang, Michal Adam Lewkowicz, Alex Suryapranata, and Juho Kim. 2024. ExpressEdit: Video Editing with Natural Language and Sketching. In Proceedings of the 29th International Conference on Intelligent User Interfaces . 515–536. [56] Anh Truong, Floraine Berthouzoz, Wilmot Li, and Maneesh Agrawala. 2016. Quickcut: An interactive tool for editing narrated video. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology . 497–507. [57] Tiffany Tseng, Ruijia Cheng, and Jeffrey Nichols. 2024. Keyframer: Empowering animation design using large language models. arXiv preprint arXiv:2402.06071 (2024). [58] Edward R Tufte. 1991. Envisioning information. Optometry and Vision Science 68, 4 (1991), 322–324. [59] Edward R Tufte and Peter R Graves-Morris. 1983. The visual display of quantitative information . Vol. 2. Graphics press Cheshire, CT. [60] Bryan Wang, Yuliang Li, Zhaoyang Lv, Haijun Xia, Yan Xu, and Raj Sodhi. 2024. LAVE: LLM-Powered Agent Assistance and Language Augmentation for Video Editing. In Proceedings of the 29th International Conference on Intelligent User Interfaces . 699–714. [61] Haochen Li1 Jonathan Leung1 Hao Wang and Zhiqi Shen. [n.d.]. Prompting LLMs to Solve Complex Tasks: A. ([n. d.]). [62] Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim. 2023. Plan-and-solve prompting: Improving zero-shot chain- of-thought reasoning by large language models. arXiv preprint arXiv:2305.04091 (2023). [63] Zhijie Wang, Yuheng Huang, Da Song, Lei Ma, and Tianyi Zhang. 2024. Promptcharm: Text-to-image generation through multi-modal prompting and refinement. In Proceedings of the 2024 CHI Conference on Human Factors in Com- puting Systems . 1–21. [64] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al .2022. Chain-of-thought prompting elicits reason- ing in large language models. Advances in neural information processing systems 35 (2022), 24824–24837. [65] Denis Wood and John Fels. 1992. The power of maps . Guilford Press. [66] Haijun Xia. 2020. Crosspower:
|
https://arxiv.org/abs/2505.21966v1
|
Bridging graphics and linguistics. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology . 722–734. [67] Haijun Xia, Tony Wang, Aditya Gunturu, Peiling Jiang, William Duan, and Xiaoshuo Yao. 2023. CrossTalk: Intelligent Substrates for Language-Oriented Interaction in Video-Based Communication and Collaboration. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology . 1–16. [68] Zihan Yan, Chunxu Yang, Qihao Liang, and Xiang’Anthony’ Chen. 2023. XCre- ation: A graph-based crossmodal generative creativity support tool. In Proceed- ings of the 36th Annual ACM Symposium on User Interface Software and Technology . 1–15. [69] Hui Ye, Chufeng Xiao, Jiaye Leng, Pengfei Xu, and Hongbo Fu. 2025. Mo- GraphGPT: Creating Interactive Scenes Using Modular LLM and Graphical Control. arXiv preprint arXiv:2502.04983 (2025). [70] Lei Zhang and Steve Oney. 2020. Flowmatic: An immersive authoring tool for creating interactive scenes in virtual reality. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology . 342–353. 16
|
https://arxiv.org/abs/2505.21966v1
|
arXiv:2505.21969v1 [cs.RO] 28 May 2025 DORAEMON: Decentralized Ontology-aware Reliable Agent with Enhanced Memory Oriented Navigation Tianjun Gu1Linfeng Li1Xuhong Wang3Chenghua Gong1Jingyu Gong1 Zhizhong Zhang1Yuan Xie1,3Lizhuang Ma1Xin Tan1,2 1East China Normal University,2Shanghai AI Lab,3Shanghai Innovation Institute Abstract Adaptive navigation in unfamiliar environments is crucial for household service robots but remains challenging due to the need for both low-level path planning and high-level scene understanding. While recent vision-language model (VLM) based zero-shot approaches reduce dependence on prior maps and scene-specific training data, they face significant limitations: spatiotemporal discontinuity from discrete observations, unstructured memory representations, and insufficient task under- standing leading to navigation failures. We propose DORAEMON (Decentralized Ontology-aware Reliable Agent with Enhanced Memory Oriented Navigation), a novel cognitive-inspired framework consisting of Ventral and Dorsal Streams that mimics human navigation capabilities. The Dorsal Stream implements the Hierarchical Semantic-Spatial Fusion and Topology Map to handle spatiotemporal discontinuities, while the Ventral Stream combines RAG-VLM and Policy-VLM to improve decision-making. Our approach also develops Nav-Ensurance to ensure navigation safety and efficiency. We evaluate DORAEMON on the HM3D, MP3D, and GOAT datasets, where it achieves state-of-the-art performance on both success rate (SR) and success weighted by path length (SPL) metrics, significantly outper- forming existing methods. We also introduce a new evaluation metric (AORI) to assess navigation intelligence better. Comprehensive experiments demonstrate DO- RAEMON’s effectiveness in zero-shot autonomous navigation without requiring prior map building or pre-training. 1 Introduction Adaptive navigation in complex and unseen environments [ 2] is a key capability for household service robots. This task requires robots to move from a random starting point to the location of a target object without any prior knowledge of the environment. For humans, navigation appears almost trivial thanks to spatial cognitive abilities honed through evolution. For robots, however, navigation remains a highly challenging problem: it demands not only low-level path planning to avoid obstacles and reach the destination, but also high-level scene understanding to interpret and make sense of the surrounding environment. Most existing navigation methods rely on the construction of prior maps[ 4] or require extensive scene-specific data for task-oriented pre-training[ 36]. However, in entirely unfamiliar environments, building maps in advance can be time-consuming and labor-intensive, and the scarcity of scene data further limits the practicality of these approaches. Recently, some works [ 14,34] have begun to Homepage: https://grady10086.github.io/DORAEMON/ Figure 1: (a) Illustrates limitation of typical VLM navigation (red arrow) in an unfamiliar environment. (b) DORAEMON’s cognitive inspiration: a Decentralized Ontology-aware approach with a Dorsal Stream for “What” and a Ventral Stream for “Where”. (c) DORAEMON constructs the Topology Map and uses Hierarchical Semantic-Spatial Fusion(green arrow). explore zero-training and zero-shot navigation strategies based on vision-language models (VLMs). 
Relying on textual descriptions of the current task, image inputs, and previously observed historical information, these approaches achieve navigation without dependence on environment- or task- specific data, gradually shedding the reliance on scene priors. Although VLM-based zero-shot navigation methods offer a novel perspective for adapting to unfa- miliar environments, they still face numerous challenges in practical applications. On the one hand, VLMs typically take task descriptions and observation histories as input [ 34][14]. However,
|
https://arxiv.org/abs/2505.21969v1
|
due to the discrete nature of input image descriptions at each time step, this spatiotemporal discontinuity often makes it difficult for VLMs to understand the relationships between targets and obstacles in complex environments. On the other hand, while many existing navigation systems incorporate some form of memory functionality, most VLM methods [ 25,27,30] adopt a single-step decision paradigm, treating historical information merely as a reference log. Even though End-to-End methods like VLMnav[ 14] utilize historical information, they typically store this information in a flat, unstructured manner, which fundamentally limits their ability to perform long-range navigation. Additionally, VLMs sometimes insufficient understanding of task semantics often leads to poor decision-making (e.g., going to look for a TV but finding a computer instead), and the lack of reliable check mecha- nisms for navigation states frequently results in unreliable behaviors such as spinning in place during navigation tasks. Fig 1 conceptually illustrates limitations of traditional VLN methods and contrasts them with the cognitive-inspired approach of our DORAEMON. Inspired by cognitive science ‘Decentralized Ontology’ principles [ 3] suggesting human knowledge is often represented and accessed in a distributed and context-dependent manner rather than via a single monolithic structure, we propose the Decentralized Ontology-aware Reliable Agent with Enhanced Memory Oriented Navigation (DORAEMON), which consists mainly of Ventral Stream and Dorsal Stream. The Ventral Stream processes object identity (“what”) information, while the Dorsal Stream handles spatial (“where”) processing in the human brain. Our decentralized architecture distributes cognitive functions across complementary streams: The Dorsal Stream addresses spatio-temporal discontinuities through a Topology Map and a Hierar- chical Semantic-Spatial Fusion, allowing our agent to reason accurately about target-environment relationships. Additionally, the Ventral Stream improves task understanding by utilizing a Retrieval- augmented Generation model (RAG-VLM) and Policy-VLM for navigation. Additionally, DORAE- MON features a Nav-Ensurance system that enables agents to autonomously detect and respond to 2 abnormal conditions, such as becoming stuck or blocked during navigation. To evaluate navigation performance more comprehensively, we propose a new metric called the Adaptive Online Route Index (AORI). In summary, the main contributions of this work are: •We propose DORAEMON, a novel adaptive navigation framework inspired by cognitive principles of decentralized knowledge, consisting of ventral and Dorsal Streams, enabling End-to-End and zero-shot navigation in completely unfamiliar environments without pre- training. •We propose the Dorsal Stream, which involves designing a Topology Map and a Hierarchical Semantic-Spatial Fusion Network to effectively manage spatio-temporal discontinuities. Additionally, we introduce the Ventral Stream, incorporating a synergistic reasoning com- ponent that combines RAG-VLM for understanding ontological tasks and Policy-VLM for enhanced task comprehension and policy planning. •We develop Nav-Ensurance, which includes multi-dimensional stuck detection and context- aware escape mechanisms. We propose a new evaluation metric called AORI to quantify the efficiency of the agent’s exploration. Our method demonstrates state-of-the-art performance across various navigation tasks. 
2 Related Work 2.1 Zero-shot Object Goal Navigation Object navigation methods are broadly supervised or zero-shot. Supervised approaches train visual encoders with reinforcement/imitation learning [ 11,17,23,31] or build semantic maps from training data [ 24,45,49], struggling with novel scenarios due to data dependency. Zero-shot methods address this using open-vocabulary
|
https://arxiv.org/abs/2505.21969v1
|
understanding, increasingly leveraging foundation models like Vision- Language Models (VLMs) and Large Language Models (LLMs). LLMs provide commonsense reasoning via object-room correlation [ 39,41,51], semantic mapping [ 43], and chain-of-thought planning [ 5,6,33,41], while VLMs align visual observations with textual goals. These foundation model-guided techniques include image-based methods mapping targets to visual embeddings [ 1, 13,22,38] and map-based approaches using frontier [ 10,19,33,43,45,50,51] or waypoint-based maps [ 39] with LLM/VLM reasoning. VLM-based strategies either use VLMs for recognition with traditional planning and extra perception models [ 21,27,37,44,46], or, like PIVOT [ 25] and VLMnav [ 14], directly produce actions End-to-End via visual prompting. Despite progress, many zero-shot methods, especially those processing observations independently, face challenges integrating temporal information and handling complex spatial reasoning in unfamiliar environments. 2.2 Memory Mechanisms in Navigation Memory representations in navigation systems have evolved through various architectures, including episodic buffers that maintain observation sequences [ 14,16,34], spatial representations prioritizing geometric information [ 46,50], graph-based semantic structures capturing object relationships [ 41], and predictive world models attempting to forecast environmental states [ 7,26]. These systems typically process semantic and spatial information separately, with limited integration between perception and reasoning modules. Most approaches focus on either building representations or enhancing reasoning mechanisms independently. Differently, DORAEMON integrates these aspects through a hierarchical semantic-spatial fusion network with bidirectional information flow between ventral and dorsal processing streams. 2.3 Cognitive Neuroscience Inspiration in Navigation Object navigation systems are influenced by cognitive neuroscience, especially Decentralized Ontology[ 3], which suggests that human knowledge is organized through interconnected cognitive systems that enable context-dependent reasoning. Recent models like CogNav[ 7] and BrainNav[ 20] incorporate cognitive elements, but they do not fully embody Decentralized Ontology. CogNav utilizes a finite state machine for cognitive states, but may have limitations in knowledge integra- 3 Figure 2: Architecture of the DORAEMON Navigation Framework. tion. BrainNav mimics biological functions but doesn’t deeply engage in decentralized information processing. In contrast, DORAEMON aims for a more comprehensive Decentralized Ontology-aware processing. It emphasizes the integration and bidirectional exchange of information between Dorsal Stream and Ventral Stream, allowing for the construction of semantic relationships that enhance spatial understanding and support flexible, context-aware navigation. 3 Methods Task Formulation We address the ObjectNav task [ 2], where an agent, starting from an initial pose, must locate and navigate to a target object within a previously unseen indoor environment. At step t, the agent receives observation It, current pose Ptand a task specification T, which can be either a simple object category (e.g., “sofa”) or an instruction (e.g., “find the red chair” or“the plant on the desk”) for tasks like GOAT [ 18]. Based on these inputs, the agent must decide on an action at. 
While many prior works utilize a discrete action space, our End-to-End framework employs a continuous action representation in polar coordinates (rt, θt), where rtspecifies the forward distance to move, andθtdenotes the change in orientation. Crucially, the action space also includes a stop action. The task is considered successful if the agent executes the stop action after meeting successive stop triggers in steps tandt+ 1. The
|
https://arxiv.org/abs/2505.21969v1
|
trigger occurs when 1) the agent is within a predefined distance threshold dsuccess of the target object; 2) the target object is visually confirmed within the agent’s current observation It. Methods Overview Our DORAEMON framework achieves End-to-End and zero-shot navigation through the ontology of two decentralized cognitive-inspired streams, as depicted in Figure 2. Given an input with a panoramic image Itand a pose Ptat step t, they are processed by the Action Proposer Module (Appendix A) and the Dorsal Stream Module (Section 3.1), respectively. In the Action Proposer Module, a candidate image It annois generated with a set of action candidates At finalthrough parameterized geometric reasoning. Concurrently, the Dorsal Stream extracts semantic and spatial information from Itusing Hierarchical Semantic-Spatial Fusion and stores it within the Topology Map as node vt. The relevant node vrelecan be accessed by op-down retrieval. After that, vreleandIt annoare input to the Policy-VLM to select the best action based on the given information(Section 3.2.2). At the same time, the Policy-VLM receives a database containing information on several key dimensions relevant to the navigation task T, which is generated by the RAG-VLM (Section3.2.1) in the Ventral Stream Module (Section 3.2). The Policy-VLM integrates the information through a chain of thought (Appendix H), identifies abnormal conditions (Section 3.3), and outputs the final choice action at. The agent performs this action atin the environment, navigates, and makes the next decision at step t+ 1. 4 Figure 3: Architecture of Topological Map and Hierarchical Construction built in Dorsal Stream for spatio-temporal memory. The top view in the middle shows the content of different nodes during navigation, and the upper right part represents the Hierarchical Construction of a node. 3.1 Dorsal Stream The Dorsal Stream, similar to the “where/how” pathway in cognition, is responsible for processing the spatial information to effectively navigate. As illustrated in Figure 3, at each step t, the agent constructs vkon the Topology Map (Section 3.1.1). Subsequently, the Hierarchical Semantic-Spatial Fusion (Section 3.1.2) organizes the information into a hierarchical structure from the bottom up. 3.1.1 Topological Map The topological map, defined as G= (V,E), represents the environment and incorporates historical observations. This map is built incrementally as the agent explores. Each node vt∈ V in the topological map formally integrates multimodal observations as: vt= (pt, qt, It, Lt, ot,st), (1) where pt,qt,It,Lt,ot,stcorrespond to the agent’s position, orientation represented as unit quaternion from agent pose Pt, visual observation, language description of It, target likelihood estimation, and optional semantic embedding (e.g., CLIP features).The topological map Gis built progressively through spatio-temporal criteria. A new node vnewis added to the node set Vif either of the following conditions is met: 1) Temporal Criterion: A fixed number of exploration steps Supdate have elapsed since the last node addition: tcurr−tprev≥Supdate , ensuring periodic state updates; 2) Spatial Criterion: The agent’s displacement from the last node exceeds a spatial sampling threshold: ∥pcurr−pprev∥2> δ sample,. where δsample controls exploration granularity. When vnewis added to V, it is immediately connected to the nearest node vprev. 3.1.2 Hierarchical Semantic-Spatial Fusion Hierarchical Construction. Building upon the information associated with the
|
https://arxiv.org/abs/2505.21969v1
|
Topological Map nodes vt∈ V, our module organizes information of vtinto a hierarchical structure. The nodes hjon the hierarchical structure are defined as: hj= idj, lj,Pj,Cj , (2) where idj,lj∈ {L0, L1, L2, L3},Pj,Cjcorrespond to unique string identifier, hierarchy level tag, parent node references, and child node references. 5 Figure 4: The structure RAG-VLM in Ventral Stream, handling the task “NA VIGATE TO THE NEAREST SOFA”. The memory hierarchy organizes nodes hjinto four semantic levels through structural and functional relationships (Appendix F): L3(Observation, directly linked to topological map nodes vt),L2(Area), L1(Room), L0(Environment). The memory hierarchy is constructed bottom-up ( L3→L2→ L1→L0) after an initial exploration phase or periodically. While the overall process involves sequential clustering or integration steps for each level transition, the specific logic and parameters differ between levels. Hierarchical Memory Retrieval. To efficiently find relevant information within the constructed hierarchy (e.g., observations related to sofa), the system employs a top-down search, conceptually outlined in the AppendixG. This search is guided by a scoring function S(ni)evaluated at nodes hi during traversal the constructed hierarchy: S(hi) =αsemSsemantic (hi, T) +αspaSspatial(hi) +αkeySkeyword (hi, T) +αtimeStime(hi),(3) where Ssemantic computes embedding similarity between node niand task T,Sspatial measures proxim- ity to current position, Skeyword evaluates keyword overlap, and Stimeprioritizes recent observations. The weights αbalance these components based on their relative importance. To manage computa- tional cost, the retrieval process incorporates beam search, expanding only the top-scoring nodes at each level. 3.2 Ventral Stream The Ventral Stream, analogous to the “what” pathway in human cognition, integrates two key components: RAG-VLM (Section 3.2.1) for semantic knowledge processing and Policy-VLM (Section 3.2.2) for decision-making. 3.2.1 RAG-VLM To build a comprehensive understanding of the task, RAG-VLM leverages the extensive world knowledge embedded within a vision-language model. Upon receiving the task T, the system extracts key semantic attributes (general description, appearance features, structure/shape, and common location) to efficiently build the task database. Figure 4 illustrates this knowledge extraction process. The structured information gathered across these dimensions forms a database of the task T. This representation enables the agent not only 6 to verify whether an object encountered during navigation matches the task description but also to potentially inform planning by suggesting likely areas to explore first, thereby interfacing with the spatial reasoning components of the Dorsal Stream. 3.2.2 Policy-VLM The Policy-VLM combines visual observations, spatial awareness, and task semantics to determine optimal actions. It utilizes the reasoning capabilities of large vision-language models through Chain- of-Thought (CoT). The CoT breaks down the complex navigation task into interpretable sub-steps: current state analysis, memory integration, goal analysis, scene assessment, path planning, and action decision. 3.3 Nav-Ensurance To enhance the evaluation of safety and efficiency in navigation, we present a new metric, the Area Overlap Redundancy Index (AORI) (Section 3.3.1). 
Additionally, we develop Nav-Ensurance, including Multimodal Stuck Detection (Section 3.3.2), context-aware escape strategies (Section 3.3.3), and adaptive precision navigation (Section 3.3.4), to ensure that navigation systems operate reliably and effectively. 3.3.1 Area Overlap Redundancy Index (AORI) We introduce the Area Overlap Redundancy Index (AORI) to quantify the efficiency of the agent’s navigation strategy by
|
https://arxiv.org/abs/2505.21969v1
|
measuring overlap in area coverage. A high AORI indicates excessive path overlap and inefficient exploration, specifically addressing the limitations of conventional coverage metrics that neglect temporal-spatial redundancy. AORI is formally defined as: AORI = 1.0−(wc·(1.0−roverlap)2+wd·(1.0−dnorm)), (4) where roverlap represents the ratio of revisited areas to total observed areas, dnormis the normalized density, and wc= 0.8, wd= 0.2are weighting coefficients. For further details, refer to the Appendix D. 3.3.2 Multimodal Stuck Detection Agent detects navigation stuck by analyzing metrics calculated over a sliding window of step T: η=∥pT−p0∥2PT t=1∥pt−pt−1∥2, ρ =PT t=1|θt−θt−1| PT t=1∥pt−pt−1∥2. (5) A weighted scoring function combines these metrics: S=wη·I[η < τ η] +wρ·I[ρ > τ ρ]. (6) Agent confirms stuck when S≥Sthpersists for kconsecutive windows. This formula detects whether the agent is stuck or spinning during the navigation process through ηandρ. 3.3.3 Context-aware Escape Strategies When a stuck state is detected, the system selects an appropriate escape strategy based on the perceived information from Dorsal Stream(Section 3.1). For instance, in corner traps (perceived dead ends), a large turn (near 180◦) is executed. In narrow passages, a small backward step followed by a randomized direction change is employed. If the environmental context is ambiguous, the agent will analyze recent successful movement directions and attempt to move perpendicularly, significantly improving escape capabilities from complex trap situations. 7 3.3.4 Adaptive Precision Navigation As the agent nears the target object, it will activate a precision navigation mode. In this mode, the distance component dof all proposed actions (d, θ)is scaled down by a factor γstepto enable fine-grained positioning adjustments: aprecise = (d·γstep, θ)for action (d, θ)∈Aactions. (7) Additionally, when activating the precision navigation mode, the system can utilize visual analysis (using VLM) to create more detailed action options, thereby maximizing final positioning accuracy relative to the task. 4 Experiments Datasets We evaluate our proposed DORAEMON within the Habitat simulator [ 32] on three large-scale datasets: HM3Dv1 (using HM3D-Semantics-v0.1 [ 29] from the 2022 Habitat Challenge, featuring 2000 episodes across 20 scenes with 6 goal categories), HM3Dv2 (using HM3D-Semantics- v0.2 [ 40] from the 2023 Habitat Challenge, with 1000 episodes across 36 scenes and 6 goal categories), and MP3D [ 8] from the 2021 Habitat Challenge, comprising 2195 episodes across 11 scenes with 21 goal categories. We also include evaluations on GOAT [ 18] (using HM3D-Semantics-v0.2), a benchmark focusing on generalized object semantics with 1000 validation episodes across 100 scenes and 25 object categories. Implement Details and Evaluation Metrics The action space includes a stop action, a move_forward action where the distance parameter is sampled from the continuous range [0.5m,1.7m], and a rotate action. We adopt standard metrics to evaluate navigation performance: Success Rate (SR), the percentage of episodes where the agent successfully stops near a target object; Success weighted by Path Length (SPL), defined as1 NPN i=1Sili max( pi,li), rewarding both success and efficiency; and our proposed Area Overlap Redundancy Index (AORI) (Equation (4)), which quantifies navigation intelligence by penalizing redundant exploration (lower is better). More information is set in the Appendix E. 
Baselines We compare DORAEMON against several state-of-the-art object navigation methods
|
https://arxiv.org/abs/2505.21969v1
|
on the HM3Dv2[ 40], HM3Dv1[ 29], and MP3D[ 8]. Our main comparison focuses on End-to-End Vision-Language Model (VLM) approaches [14, 25]. Beyond these direct End-to-End counterparts, we also consider a broader set of recent methods for non-End-to-End object navigation methods. More baseline details are set in the Appendix I. 4.1 Methods Comparision End-to-End Methods: We evaluate our approach on the HM3Dv2 (ObjectNav,val, Table 1 (a)) and HM3Dv1(GOAT, val, Table 1 (b)) with other end-to-end baselines. DORAEMON achieves state-of-the-art performance on both datasets, outperforming other methods by a significant margin. Table 1: Comparison of End-to-End navigation methods on different benchmarks. (a) HM3Dv2 ObjectNav benchmark Method SR (%) ↑ SPL (%) ↑ AORI (%) ↓ Prompt-only 29.8 0.107 - PIVOT[25] 24.6 10.6 63.3 VLMNav[14] 51.6 18.3 61.5 DORAEMON (Ours) 62.0 23.0 50.1 Improvement 20.2 10.0 18.5(b) GOAT benchmark Method SR (%) ↑ SPL (%) ↑ AORI (%) ↓ Prompt-only 11.3 3.7 - PIVOT[25] 8.3 3.8 64.9 VLMNav[14] 22.1 9.3 63.6 DORAEMON (Ours) 24.3 10.3 56.9 Improvement 10.0 10.8 10.5 Comprehensive Methods Analysis: To ensure a fair comparison with the above non-End-to-End methods that utilize a discrete action set A:move forward 0.25m, turn left/turn right 30◦, look up/lookdown 30◦,stop , and a common 500 steps episode limit, we conduct an additional set of experiments. In these, we normalize our agent’s interactions by approximating an equivalent number of standard discrete steps for each of DORAEMON’s actions. 8 Compared to the non-End-to-End approach in the Table 2, DORAEMON achieves state-of-the- art performance on SR, despite normalizing our action to set A. Each action performed by ours corresponds to several actions in this set (details are provided in the Appendix B). Table 2: Comprehensive comparison with state-of-the-art methods on ObjectNav benchmarks. TF refers to training-free, ZS refers to zero-shot, and E2E refers to End-to-End. Method ZS TF E2E HM3Dv1 HM3Dv2 MP3D SR(%)↑SPL(%) ↑SR(%)↑SPL(%) ↑SR(%)↑SPL(%) ↑ ProcTHOR [12] × × × 54.4 31.8 - - - - SemEXP [9] ✓× × - - - - 36.0 14.4 Habitat-Web[31] ✓× × 41.5 16.0 - - 31.6 8.5 PONI [28] ✓× × - - - - 31.8 12.1 ProcTHOR-ZS [12] ✓× × 13.2 7.7 - - - - ZSON [22] ✓× × 25.5 12.6 - - 15.3 4.8 PSL [35] ✓× × 42.4 19.2 - - - - Pixel-Nav [5] ✓× × 37.9 20.5 - - - - SGM [47] ✓× × 60.2 30.8 - - 37.7 14.7 ImagineNav [48] ✓× × 53.0 23.8 - - - - CoW [13] ✓ ✓ × - - - - 7.4 3.7 ESC [51] ✓ ✓ × 39.2 22.3 - - 28.7 14.2 L3MVN [43] ✓ ✓ × 50.4 23.1 36.3 15.7 34.9 14.5 VLFM [42] ✓ ✓ × 52.5 30.4 63.6 32.5 36.4 17.5 V oroNav [39] ✓ ✓ × 42.0 26.0 - - - - TopV-Nav [50] ✓ ✓ × 52.0 28.6 - - 35.2 16.4 SG-Nav [41] ✓ ✓ × 54.0 24.9 49.6 25.5 40.2 16.0 DORAEMON (Ours) ✓ ✓ ✓ 55.6 21.4 66.5 20.6 41.1 15.8 Ablation Studies: 1) The effect of different modules: To represent the contribution of each
|
https://arxiv.org/abs/2505.21969v1
|
module, we compared three variants (Dorsal Stream, RAG-VLM of Ventral Stream, and Policy-VLM of Ventral Stream) on HM3D v2. Removing the Dorsal Stream and RAG-VLM implies that the model relies solely on the Policy-VLM of the Dorsal Stream in decision-making. The results reported for SR, SPL, and AORI, as presented in Table 3(a), respectively, show the effectiveness of Dorsal Stream and Ventral Stream. 2) We further evaluated the performance of different Visual Language Models (VLMs), as shown in Table 3(b). The Gemini-1.5-Pro demonstrated outstanding capabilities in this task. Even when using a smaller model, our approach yielded excellent results, indicating that our framework is effective on its own rather than solely relying on the reasoning capabilities of the VLMs. More importantly, as VLMs continue to evolve, the effectiveness of our plug-and-play approach will also improve. Table 3: Ablation of HM3Dv2 (100 episodes) (a) Ablation of different modules Method SR (%) ↑ SPL (%) ↑ AORI (%) ↓ w/o Dorsal & Ventral Stream 51.6 18.3 61.5 w/o Dorsal & RAG-VLM 54.0 19.8 59.1 w/o Dorsal Stream 59.0 22.7 56.3 w/o Nav-Ensurance 60.0 22.5 54.9 DORAEMON 61.0 23.7 48.8(b) Ablation of different VLMs VLM SR (%) ↑ SPL (%) ↑ AORI (%) ↓ Qwen-7B 49.5 20.6 68.7 Gemini-1.5-Flash 58.0 20.1 54.8 Gemini-2-Flash 59.0 21.5 57.9 Gemini-1.5-Pro 61.0 23.7 48.8 5 Conclusion In this paper, we present DORAEMON (Decentralized Ontology-aware Reliable Agent with Enhanced Memory Oriented Navigation), a novel cognitive-inspired framework consisting of Ventral and Dorsal Streams that mimics human navigation capabilities. The Dorsal Stream implements the Hierarchical Semantic-Spatial Fusion and Topology Map to handle spatiotemporal discontinuities, while the Ventral Stream combines RAG-VLM and Policy-VLM to improve decision-making. Our approach also develops Nav-Ensurance to ensure navigation safety and efficiency. Extensive experimental results demonstrate the superior performance of DORAEMON. 9 References [1]Ziad Al-Halah, Santhosh K. Ramakrishnan, and Kristen Grauman. Zero experience required: Plug & play modular transfer learning for semantic visual navigation, 2022. [2]Dhruv Batra, Aaron Gokaslan, Aniruddha Kembhavi, Oleksandr Maksymets, Roozbeh Mottaghi, Manolis Savva, Alexander Toshev, and Erik Wijmans. Objectnav revisited: On evaluation of embodied agents navigating to objects. CoRR , abs/2006.13171, 2020. [3]Paolo Bouquet, Fausto Giunchiglia, Frank Van Harmelen, Luciano Serafini, and Heiner Stuckenschmidt. Contextualizing ontologies. Journal of Web Semantics , 1(4):325–343, 2004. [4]Cesar Cadena, Luca Carlone, Henry Carrillo, Yasir Chai, Davide Scaramuzza, John Leonard, Ian Reid, and Simon Henein. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. IEEE Transactions on Robotics , 32(6):1309–1332, 2016. [5]Wenzhe Cai, Siyuan Huang, Guangran Cheng, Yuxing Long, Peng Gao, Changyin Sun, and Hao Dong. Bridging zero-shot object navigation and foundation models through pixel-guided navigation skill, 2023. [6]Yuxin Cai, Xiangkun He, Maonan Wang, Hongliang Guo, Wei-Yun Yau, and Chen Lv. Cl-cotnav: Closed- loop hierarchical chain-of-thought for zero-shot object-goal navigation with vision-language models, 2025. [7]Yihan Cao, Jiazhao Zhang, Zhinan Yu, Shuzhen Liu, Zheng Qin, Qin Zou, Bo Du, and Kai Xu. Cognav: Cognitive process modeling for object goal navigation with llms, 2025. 
[8]Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Nießner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3d: Learning from rgb-d
|
https://arxiv.org/abs/2505.21969v1
|
data in indoor environments, 2017. [9]Devendra Singh Chaplot, Dhiraj Gandhi, Abhinav Gupta, and Ruslan Salakhutdinov. Object goal navigation using goal-oriented semantic exploration, 2020. [10] Junting Chen, Guohao Li, Suryansh Kumar, Bernard Ghanem, and Fisher Yu. How to not train your dragon: Training-free embodied object goal navigation with semantic frontiers, 2023. [11] Peihao Chen, Dongyu Ji, Kunyang Lin, Weiwen Hu, Wenbing Huang, Thomas H. Li, Mingkui Tan, and Chuang Gan. Learning active camera for multi-object navigation, 2022. [12] Matt Deitke, Eli VanderBilt, Alvaro Herrasti, Luca Weihs, Jordi Salvador, Kiana Ehsani, Winson Han, Eric Kolve, Ali Farhadi, Aniruddha Kembhavi, and Roozbeh Mottaghi. Procthor: Large-scale embodied ai using procedural generation, 2022. [13] Samir Yitzhak Gadre, Mitchell Wortsman, Gabriel Ilharco, Ludwig Schmidt, and Shuran Song. Cows on pasture: Baselines and benchmarks for language-driven zero-shot object navigation, 2022. [14] Dylan Goetting, Himanshu Gaurav Singh, and Antonio Loquercio. End-to-end navigation with vision lan- guage models: Transforming spatial reasoning into question-answering. arXiv preprint arXiv:2411.05755 , 2024. [15] Dylan Goetting, Himanshu Gaurav Singh, and Antonio Loquercio. End-to-end navigation with vision language models: Transforming spatial reasoning into question-answering, 2024. [16] Hao-Lun Hsu, Qiuhua Huang, and Sehoon Ha. Improving safety in deep reinforcement learning using unsupervised action planning, 2021. [17] Apoorv Khandelwal, Luca Weihs, Roozbeh Mottaghi, and Aniruddha Kembhavi. Simple but effective: Clip embeddings for embodied ai, 2022. [18] Mukul Khanna, Ram Ramrakhya, Gunjan Chhablani, Sriram Yenamandra, Theophile Gervet, Matthew Chang, Zsolt Kira, Devendra Singh Chaplot, Dhruv Batra, and Roozbeh Mottaghi. Goat-bench: A benchmark for multi-modal lifelong navigation, 2024. [19] Yuxuan Kuang, Hai Lin, and Meng Jiang. Openfmnav: Towards open-set zero-shot object navigation via vision-language foundation models. 2024. [20] Luo Ling and Bai Qianqian. Endowing embodied agents with spatial reasoning capabilities for vision-and- language navigation, 2025. 10 [21] Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Qing Jiang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, and Lei Zhang. Grounding dino: Marrying dino with grounded pre-training for open-set object detection, 2024. [22] Arjun Majumdar, Gunjan Aggarwal, Bhavika Devnani, Judy Hoffman, and Dhruv Batra. Zson: Zero-shot object-goal navigation using multimodal goal embeddings, 2023. [23] Oleksandr Maksymets, Vincent Cartillier, Aaron Gokaslan, Erik Wijmans, Wojciech Galuba, Stefan Lee, and Dhruv Batra. Thda: Treasure hunt data augmentation for semantic navigation. 2021. [24] So Yeon Min, Devendra Singh Chaplot, Pradeep Ravikumar, Yonatan Bisk, and Ruslan Salakhutdinov. Film: Following instructions in language with modular methods, 2022. [25] Soroush Nasiriany, Fei Xia, Wenhao Yu, Ted Xiao, Jacky Liang, Ishita Dasgupta, Annie Xie, Danny Driess, Ayzaan Wahid, Zhuo Xu, Quan Vuong, Tingnan Zhang, Tsang-Wei Edward Lee, Kuang-Huei Lee, Peng Xu, Sean Kirmani, Yuke Zhu, Andy Zeng, Karol Hausman, Nicolas Heess, Chelsea Finn, Sergey Levine, and Brian Ichter. Pivot: Iterative visual prompting elicits actionable knowledge for vlms, 2024. [26] Dujun Nie, Xianda Guo, Yiqun Duan, Ruijun Zhang, and Long Chen. Wmnav: Integrating vision-language models into world models for object goal navigation, 2025. 
[27] Pooyan Rahmanzadehgervi, Logan Bolton, Mohammad Reza Taesiri, and Anh Totti Nguyen. Vision language models are blind: Failing to translate detailed visual features into words, 2025. [28] Santhosh Kumar Ramakrishnan, Devendra Singh Chaplot, Ziad Al-Halah,
|
https://arxiv.org/abs/2505.21969v1
|
Jitendra Malik, and Kristen Grauman. Poni: Potential functions for objectgoal navigation with interaction-free learning, 2022. [29] Santhosh Kumar Ramakrishnan, Aaron Gokaslan, Erik Wijmans, Austin Clegg, John M Turner, Manolis Savva, Angel X Chang, and Dhruv Batra. Habitat-Matterport 3D Dataset (HM3D): 1000 large-scale 3D environments for embodied AI. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) , pages 16203–16213, 2021. [30] Santhosh Kumar Ramakrishnan, Erik Wijmans, Philipp Kraehenbuehl, and Vladlen Koltun. Does spatial cognition emerge in frontier models?, 2025. [31] Ram Ramrakhya, Eric Undersander, Dhruv Batra, and Abhishek Das. Habitat-web: Learning embodied object-search strategies from human demonstrations at scale, 2022. [32] Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, Devi Parikh, and Dhruv Batra. Habitat: A platform for embodied AI research. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) , pages 9339–9347, 2019. [33] Dhruv Shah, Michael Equi, Blazej Osinski, Fei Xia, Brian Ichter, and Sergey Levine. Navigation with large language models: Semantic guesswork as a heuristic for planning, 2023. [34] Dhruv Shah, Michael Yang, Michael Laskin, Pieter Abbeel, and Sergey Levine. LM-Nav: Robotic navigation with large pre-trained models of language, vision, and action. In Conference on Robot Learning (CoRL) , pages 1083–1093. PMLR, 2023. [35] Xinyu Sun, Lizhao Liu, Hongyan Zhi, Ronghe Qiu, and Junwei Liang. Prioritized semantic learning for zero-shot instance navigation, 2024. [36] Andrew Szot, Alex Clegg, Eric Undersander, Erik Wijmans, Yili Zhao, John M Turner, Manolis Savva, Angel X Chang, and Dhruv Batra. Habitat 2.0: Training home assistants to rearrange their habitat. In Advances in Neural Information Processing Systems (NeurIPS) , volume 34, pages 30153–30168, 2021. [37] Chien-Yao Wang, Alexey Bochkovskiy, and Hong-Yuan Mark Liao. Yolov7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors, 2022. [38] Congcong Wen, Yisiyuan Huang, Hao Huang, Yanjia Huang, Shuaihang Yuan, Yu Hao, Hui Lin, Yu-Shen Liu, and Yi Fang. Zero-shot object navigation with vision-language models reasoning, 2024. [39] Pengying Wu, Yao Mu, Bingxian Wu, Yi Hou, Ji Ma, Shanghang Zhang, and Chang Liu. V oronav: V oronoi-based zero-shot object navigation with large language model, 2024. [40] Karmesh Yadav, Ram Ramrakhya, Santhosh Kumar Ramakrishnan, Theo Gervet, John Turner, Aaron Gokaslan, Noah Maestre, Angel Xuan Chang, Dhruv Batra, Manolis Savva, Alexander William Clegg, and Devendra Singh Chaplot. Habitat-matterport 3d semantics dataset, 2023. 11 [41] Hang Yin, Xiuwei Xu, Zhenyu Wu, Jie Zhou, and Jiwen Lu. Sg-nav: Online 3d scene graph prompting for llm-based zero-shot object navigation, 2024. [42] Naoki Yokoyama, Sehoon Ha, Dhruv Batra, Jiuguang Wang, and Bernadette Bucher. Vlfm: Vision- language frontier maps for zero-shot semantic navigation, 2023. [43] Bangguo Yu, Hamidreza Kasaei, and Ming Cao. L3mvn: Leveraging large language models for visual target navigation. In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) , page 3554–3560. IEEE, October 2023. [44] Chaoning Zhang, Dongshen Han, Yu Qiao, Jung Uk Kim, Sung-Ho Bae, Seungkyu Lee, and Choong Seon Hong. Faster segment anything: Towards lightweight sam for mobile applications, 2023. 
[45] Lingfeng Zhang, Xiaoshuai Hao, Qinwen Xu, Qiang Zhang, Xinyao Zhang, Pengwei Wang, Jing Zhang,
|
https://arxiv.org/abs/2505.21969v1
|
Zhongyuan Wang, Shanghang Zhang, and Renjing Xu. Mapnav: A novel memory representation via annotated semantic maps for vlm-based vision-and-language navigation, 2025. [46] Mingjie Zhang, Yuheng Du, Chengkai Wu, Jinni Zhou, Zhenchao Qi, Jun Ma, and Boyu Zhou. Apexnav: An adaptive exploration strategy for zero-shot object navigation with target-centric semantic fusion, 2025. [47] Sixian Zhang, Xinyao Yu, Xinhang Song, Xiaohan Wang, and Shuqiang Jiang. Imagine before go: Self- supervised generative map for object goal navigation. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) . [48] Xinxin Zhao, Wenzhe Cai, Likun Tang, and Teng Wang. Imaginenav: Prompting vision-language models as embodied navigator through scene imagination, 2024. [49] Kaizhi Zheng, Kaiwen Zhou, Jing Gu, Yue Fan, Jialu Wang, Zonglin Di, Xuehai He, and Xin Eric Wang. Jarvis: A neuro-symbolic commonsense reasoning framework for conversational embodied agents, 2022. [50] Linqing Zhong, Chen Gao, Zihan Ding, Yue Liao, Huimin Ma, Shifeng Zhang, Xu Zhou, and Si Liu. Topv-nav: Unlocking the top-view spatial reasoning potential of mllm for zero-shot object navigation, 2025. [51] Kaiwen Zhou, Kaizhi Zheng, Connor Pryor, Yilin Shen, Hongxia Jin, Lise Getoor, and Xin Eric Wang. Esc: Exploration with soft commonsense constraints for zero-shot object navigation, 2023. 12 A Action Proposer DORAEMON employs an Action Proposer[ 15] to generate a refined set of candidate actions, which the Policy- VLM then evaluates for the final action decision. As shown in Figure 5, first parameterized action candidates At init are generated by the parameterized action space (Equation (8)). Second, adaptive filtering (Equation (9)) refines At candusing exploration state Vtand historical patterns Ht. Safety-critical recovery (Equation (10)) enforces a rotation cooldown γthrough viability evaluation F(·). Finally, the projection module visually encodes At final intoIt annowith numeric tagging (0 for rotation) to interface with VLM’s semantic space. Figure 5: Action proposal: (a) Collision-free action generation within ±θmaxFOV , (b) Exploration- aware filtering with ∆θangular resolution, (c) Safety-constrained and action projection. Parameterized Action Space Define the action space through symbolic parameters: At init= (θi,min ( ηri, rmax)) θi=k∆θ, k∈ K . (8) whereK= [−⌊θmax/∆θ⌋,⌊θmax/∆θ⌋]ensures full FOV coverage. The safety margin ηand collision check are derived from depth-based navigability analysis. Adaptive Action Filtering Refinement combines exploration state Vtand historical search patterns Ht: At cand= (θi, ri)∈At init α(Ht)·s(Vt)> τ, min θj∈Acand|θi−θj| ≥θδ. (9) where α(·)models temporal search impact and s(·)quantifies spatial exploration potential. Safety-Critical Recovery The next action set enforces, where F(·)evaluates action viability and γcontrols rotation cool down: At final=( {(π,0)}if,F(At cand)∧(t−trot> γ) At cand otherwise .(10) Action Projection The following phase focuses on visually anchoring these operational elements within the comprehensible semantic realm of the VLM. The projection component annotated visual depiction It annofrom At finalandIt. We use numeric encoding, assigning a distinct code to each actionable option that is displayed on the visual interface. It is worth noting that rotation is assigned the code 0. 
B Steps Conversion To establish temporal equivalence between DORAEMON’s continuous actions and Habitat’s discrete steps, we implement the conversion protocol formalized in Algorithm 1. Given a polar action a= (r, θ)∈R+×(−π, π] with radial displacement rmeters and angular rotation θradians:
|
https://arxiv.org/abs/2505.21969v1
|
This formulation enables direct comparison with baseline methods by normalizing both: Tepisode =500X t=1Nt≤500 (11) where Ntdenotes converted steps for action at time step t. During our experiments, one DORAEMON step t was equivalent to about 7-8 Nt 13 Algorithm 1 Discrete Step Conversion Require: Polar action (r, θ), displacement unit ∆r= 0.25m, angular unit ∆θ= 30◦ 1:ifaction is stop then 2: return 1 ▷Explicit stop handling 3:else 4: sr← ⌈r/∆r⌉ ▷Radial step calculation 5: θdeg←180|θ|/π ▷ Radian-degree conversion 6: sθ← ⌈θdeg/∆θ⌉ ▷Angular step calculation 7: N←max( sr+sθ,1) ▷Step composition 8: return N 9:end if C Navigation Case Figure 6 depicts a full navigation episode in which our memory -augmented agent searches for a chair in an unfamiliar indoor scene. The seven consecutive frames show the agent’s visual observations and planned motions from entry to target acquisition. Throughout the sequence, the agent (i) reasons about semantic priors—chairs are more likely near tables or in living/dining areas; (ii) fuses transient visual evidence with its episodic memory to avoid revisiting explored regions; and (iii) selects actions that maximise expected information gain while respecting safety constraints. The case, therefore, illustrates how the proposed memory module complements on-the-fly perception to yield efficient, goal-directed exploration in cluttered, real-world layouts. The agent is currently in a foyer where no chair is visible. Chairs are more likely deeper inside the house (e.g., living or dining rooms). Among the options, moving forward-right takes the agent directly into the open interior while avoiding an obstructing wall, whereas moving right is less direct and moving backward heads toward a closed door. Hence, Action1 (forward-right) is chosen. The agent is in a narrow hallway with no chair in sight. Chairs are more likely in a living space glimpsed through the partially open door to the right. Other directions head under stairs, down a narrow dead-end hall, or backward—all unlikely to reveal a chair. Therefore, Action5 (enter through the open door on the right) is selected. The agent stands in a hallway beside a damaged door. Through the opening it sees chairs in what looks like a dining room. To reach a chair quickly and avoid retracing explored paths, it chooses Action1 (forward-left) to pass through the doorway into that room. The recent memory of the hallway to my left suggests that’s an already explored area, and going backward would be backtracking.[left direction, 1.7m, 1 steps ago], […], […] Visisted Memory 14 Visisted Memory memory suggests the chair might be in this room. and the area in front of me seems to be an unexplored area.[backward direction, 1.7m, 1 steps ago], […]The agent is already in a combined living/dining room where chairs are visible, but to cover the still-unexplored central area it chooses Action3, which moves deepest into the room. Visisted Memory "Memory suggests the chair might be nearby, potentially in the area I just came from, but I should prioritize exploring new spaces."[backward direction, 1.7m, 1 steps ago], […]The agent is in a living/dining room and spots chairs around a small round table on the right. Since those visible chairs are a more
|
https://arxiv.org/abs/2505.21969v1
|
reliable cue than vague earlier memories, it chooses Action3 to move toward that table. The agent is in a dining-style room with a central table and clearly visible chairs. To reach them most directly it selects Action5, which moves straight toward the table and chairs. Visisted Memory [left direction, 1.3m, 1 steps ago], […], […] “Memory suggests the chair might be in the environment. The closest memory of a possible chair location was 2 steps ago in a different room. However, I’ve already found chairs in this room.”Inside a dining room the agent already sees two chairs near a table and aims to approach the closest one. Among the available moves, Action3 advances slightly nearer to that chair while keeping within unexplored space, so it is selected.Figure 6: Navigation case Each row shows one decision step. Left: the green circle highlights the action selected for this step. Upper-right dashed green box displays the most relevant episodic memory retrieved at this step. Lower-right speech bubble is the agent’s natural -language rationale that fuses (i) semantic priors, (ii) current visual evidence, and (iii) memory cues. 15 D Detailed Description of AORI D.0.1 Area Overlap Redundancy Index (AORI) The Area Overlap Redundancy Index (AORI) quantifies exploration efficiency through spatial overlap analysis. We formalize the computation with parameters from our implementation: Parameter Basis: • Map resolution: 5,000×5,000grid (map_size=5000) • V oxel ray casting resolution: 60×60(voxel_ray_size=60) • Exploration threshold: 3 observations per voxel (explore_threshold=3) • Density scaling factor: η= 0.8(e_i_scaling=0.8) Step-wise Calculation: For each step t∈[1, T]: 1. Compute observed area At=St i=1V(xi, yi)whereV(x, y)is the visible region defined by: ∥V(x, y)∥=map_size2 voxel_ray_size2·π (12) 2. Calculate overlap ratio roverlap : roverlap =Pt−1 i=1I[V(xt, yt)∩ V(xi, yi)≥explore_threshold ] t−1(13) 3. Compute normalized density dnormalized using Poisson expectation: dnormalized = min 1,Nobs λ , λ =η·∥At∥ map_size2·t (14) where Nobscounts voxels with ≥3 visits, λis expected active voxels Boundary Cases: •Optimal Case (AORI=0): When roverlap = 0 & dnormalized = 0⇒1−(0.8·12+ 0.2·1) = 0 •Worst Case (AORI=1):When roverlap = 1 & dnormalized = 1⇒1−(0.8·0 + 0 .2·0) = 1 Calculation Examples: •Case1: stay still (t=100 steps): roverlap =99 99= 1.0, λ= 0.8·π(60/5000)2 1·100≈0.014, dnorm= min 1,100 0.014 = 1.0, AORI = 1−[0.8(1−1)2+ 0.2(1−1)] = 1 .0(15) •Case2: go around (t=500 steps): roverlap≈38 499≈0.076, λ= 0.8·π(60/5000)2 1·500≈0.069, dnorm= min 1,62 0.069 = 1.0, AORI = 1−[0.8∗(1−0.076)2+ 0.2∗(1−1)]≈0.285(16) 16 E Experimental Setup Details Implementation Details. The maximal navigation steps per episode are set to 40. The agent’s body has a radius of 0.17mand a height of 1.5m. Its RGB-D sensors are positioned at 1.5mheight with a −0.45 radian downward tilt and provide a 131◦Field of View (FoV). For rotation, the agent selects an angular displacement corresponding to one of 60 discrete bins that uniformly discretize the 360◦range. Success requires stopping within dsuccess = 0.3m of the target object and visually confirming it. Success requires stopping within dsuccess = 0.3mof the target object and visually confirming it. Our DORAEMON framework primarily utilizes Gemini-1.5-pro as the VLM and CLIP ViT-B/32 for semantic embeddings, with caching implemented for efficiency. Key hyperparameters include: topological map
|
https://arxiv.org/abs/2505.21969v1
|
connection distance δconnect = 1.0m, node update interval Supdate = 3 steps, L1hierarchical clustering weight w= 0.4, AORI grid resolution δgrid= 0.1m, minimum obstacle clearance dmin_obs = 0.5m, and various stuck detection thresholds (e.g., path inefficiency ηpath<0.25, small area coverage δarea_gain <0.35m2, high rotation/translation ratio ρrot/trans >2.0for short paths when ∥path∥<0.5m) and a precision movement factor γstep= 0.1. F Hierarchical Construction F.1 Level L3: Observation Anchoring •Input : Raw topological nodes vt∈ V from Eq 1 •Process : Directly mapping to memory nodes h(3) j= id(3) j, L3,∅,{vt} . (17) •Output :h(3) jnodes storing original pt,stfromvt F.2 Level L2: Area Formation ( L3→L2) •Input :h(3) jnodes with spatial coordinates pt •Clustering : 1. Compute combined distance: dcomb= 0.4∥pi−pj∥2+ 0.6 1−si·sj ∥si∥∥sj∥ . (18) 2. Apply adaptive threshold: θ′ 1= 1.5θ1(|O|>20) 0.8θ1(|O|<10) θ1 otherwise .(19) 3. Generate clusters using scipy.linkage + fcluster •Functional Labeling : area_type = arg max tX v∈C(2) jX k∈KtI[k∈v.Lt]. (20) •Output :h(2) mnodes with: –Parent: h(1) n(L1room node). –Children: {h(3) j}( observations). –Spatial boundary: Convex hull of ptpositions. F.3 Level L1: Room Formation ( L2→L1) •Input :h(2) mareas with spatial centroids PA •Two-stage Clustering : 1.Spatial Pre-clustering : Cspatial=fcluster (linkage (dspatial), θ2= 3.0m). (21) 2.Functional Refinement : Fs={As,f|f=MapToRoomFunction (area_type )}. (22) •Output :h(1) nnodes containing: –Parent: h(0) 0(L0root) –Children: {h(2) m}(L2areas) 17 F.4 Level L0: Environment Root •Input : Allh(1) nroom nodes •Consolidation : h(0) 0= GLOBAL_ROOT , L0,∅,{h(1) n} . (23) •Function : Global access point for memory queries G Memory Retrieval Scoring Details G.1 Scoring Function Decomposition The retrieval score combines four evidence components through weighted summation: S(hi) = 0 .45Ssem+ 0.30Sspa+ 0.20Skey+ 0.05Stime. (24) G.2 Component Specifications G.2.1 Semantic Similarity •Input : CLIP embeddings sq(query) and si(node) •Calculation : Ssem=1 2 1 +s⊤ qsi ∥sq∥∥si∥! ∈[0,1]. (25) G.2.2 Spatial Proximity •Input : Agent position pa, node position pi •Decay function : Sspa= exp −∥pa−pi∥2 5.0 . (26) G.2.3 Keyword Relevance •Input : Query terms T, node keywords Ki(from Lt) •Matching score : Skey=|T∩Ki| max(|T|,1). (27) G.2.4 Temporal Recency •Input : Current time tc, observation time ti •Decay model : Stime= exp −|tc−ti| 600 . (28) G.3 Parameter Configuration Table 4: Scoring Component Weights Component Symbol Value Semantic Similarity αsem 0.45 Spatial Proximity αspa 0.30 Keyword Relevance αkey 0.20 Temporal Recency αtime 0.05 G.4 Search Process The beam search executes through these discrete phases: 18 Initialization Phase • Start from root node(s): F0={hroot} • Set beam width: B= 5 Iterative Expansion For each hierarchy level l∈ {L3, L2, L1, L0}: • Score all children: S(hchild)∀hchild∈ C(hj), hj∈ Fl • Select top- Bnodes Termination Conditions •Success : Reached L0nodes and selected top- Kresults •Failure : No nodes satisfy S(hi)>0.4threshold G.5 Computational Properties •Time Complexity :O(B·D)for depth D= 4 •Memory Complexity :O(B)nodes per level •Score Normalization : X k∈{sem,spa,key,time }αk= 1.0. (29) H Chain-of-Thought Prompt Our Policy-VLM leverages a structured Chain-of-Thought (CoT) prompt to guide the decision-making process. The complete prompt is provided below: TASK : NAVIGATE TO THE NEAREST [ TARGET_OBJECT ], and get as close to it ,→as possible . Use your prior knowledge about where
|
https://arxiv.org/abs/2505.21969v1
|
items are typically located ,→within a home . There are [N] red arrows superimposed onto your observation , which ,→represent potential actions . These are labeled with a number in a white circle , which represent ,→the location you would move to if you took that action . [ TURN_INSTRUCTION ] Let ’s solve this navigation task step by step : 1. Current State Analysis : What do you observe in the environment ? ,→What objects and pathways are visible ? Look carefully for the target object , even if it ’s partially ,→visible or at a distance . 2. Memory Integration : Review the memory context below for clues ,→about target location . - Pay special attention to memories containing or near the target ,→object - Use recent memories ( fewer steps ago) over older ones - Consider action recommendations based on memory 3. Goal Analysis : Based on the target and home layout knowledge , ,→where is the [ TARGET_OBJECT ] likely to be? 4. Scene Assessment : Quickly evaluate if [ TARGET_OBJECT ] could ,→reasonably exist in this type of space : - If you ’re in an obviously incompatible room (e.g., looking for a ,→[ TARGET_OBJECT ] but in a clearly different room type ), ,→choose action 0 to TURN AROUND immediately 5. Path Planning : What ’s the most promising direction to reach the ,→target ? Avoid revisiting 19 previously explored areas unless necessary . Consider : - Available paths and typical room layouts - Areas you haven ’t explored yet 6. Action Decision : Which numbered arrow best serves your plan ? ,→Return your choice as {" action ": <action_key >}. Note : - You CANNOT GO THROUGH CLOSED DOORS , It doesn ’t make any sense to ,→go near a closed door . - You CANNOT GO THROUGH WINDOWS AND MIRRORS - You DO NOT NEED TO GO UP OR DOWN STAIRS - Please try to avoid actions that will lead you to a dead end to ,→avoid affecting subsequent actions , unless the dead end is ,→very close to the [ TARGET_OBJECT ] - If you see the target object , even partially , choose the action ,→that gets you closest to it I Detailed Description of Baseline To assess the performance of DORAEMON , we compare it with 16recent baselines for (zero-shot) object-goal navigation. Summaries are given below. ProcTHOR [12]: A procedurally–generated 10K-scene suite for large-scale Embodied AI. ProcTHOR_ZS [12]: ProcTHOR_ZS trains in ProcTHOR and evaluates zero -shot on unseen iTHOR/RoboTHOR scenes to test cross-domain generalisation. SemEXP [9]: Builds an online semantic map and uses goal -oriented exploration to locate the target object efficiently, achieving state-of-the-art results in Habitat ObjectNav 2020. Habitat -Web [31]: Collects large -scale human demonstrations via a browser interface and leverages behaviour cloning to learn object-search strategies. PONI [28]: Learns a potential -field predictor from static supervision, enabling interaction -free training while preserving high navigation success. ZSON [22]: Encodes multimodal goal embeddings (text + images) to achieve zero -shot navigation towards previously unseen object categories. PSL [35]: Prioritised Semantic
|
https://arxiv.org/abs/2505.21969v1
|
Learning selects informative targets during training and uses semantic expansion at inference for zero-shot instance navigation. Pixel -Nav [5]: Introduces pixel -guided navigation skills that bridge foundation models and ObjectNav, relying solely on RGB inputs. SGM [47]: “Imagine Before Go” constructs a self -supervised generative map to predict unseen areas and improve exploration efficiency. ImagineNav [48]: Prompts vision–language models to imagine future observations, guiding the agent toward information-rich viewpoints. CoW [13]: Establishes the “Cows on Pasture” benchmark for language -driven zero -shot ObjectNav and releases baseline policies without in-domain training. ESC [51]: Employs soft commonsense constraints derived from language models to bias exploration, markedly improving zero-shot success over CoW. L3MVN [43]: Utilises large language models to reason about likely room sequences, while a visual policy executes the suggested path. VLFM [42]: Combines VLM goal -localisation with frontier -based exploration, removing the need for reinforce- ment learning or task-specific fine-tuning. VoroNav [39]: Simplifies the search space via V oronoi partitions and pairs this with LLM -driven semantic planning for improved zero-shot performance. TopV -Nav [50]: Lets a multimodal LLM perform spatial reasoning directly on top -view maps, with adaptive visual prompts for global–local coordination. SG-Nav [41]: Online builds a 3D scene graph and uses hierarchical Chain --of--Thought prompting so an LLM can infer probable target locations. 20
|
https://arxiv.org/abs/2505.21969v1
|
arXiv:2505.21972v1 [cs.LG] 28 May 2025Judging LLMs on a Simplex Patrick Vossler1Fan Xia1Yifan Mai2Jean Feng1 Abstract Automated evaluation of free-form outputs from large language models (LLMs) is challenging because many distinct answers can be equally valid. A common practice is to use LLMs themselves as judges, but the theoretical properties of this approach are not yet well understood. We show that a geometric framework that represents both judges and candidates as points on a probability simplex can provide helpful insight on what is or is not identifiable using LLM judges. Our theoretical analysis uncovers a “phase transition” in ranking identifiability: for binary scoring systems, true rankings are identifiable even with weak judges under mild assumptions, while rankings become non-identifiable for three or more scoring levels even with infinite data, absent additional prior knowledge. This non- identifiability highlights how uncertainty in rankings stems from not only aleatoric uncertainty (i.e., inherent stochasticity in the data) but also epistemic uncertainty regarding which assumptions hold, an aspect that has received limited attention until now. To integrate both types of uncertainty, we use Bayesian inference to encode assumptions as priors and conduct sensitivity analysis of ranking estimates and credible intervals. Empirical evaluations across multiple benchmarks demonstrate that Bayesian inference yields more accurate rankings and substantially improves coverage rates. These results underscore the importance of taking a more holistic approach to uncertainty quantification when using LLMs as judges. 1 Introduction Scalable benchmarking of LLMs is necessary in the face of the growing number of models, the frequent release of model updates, and the continuous introduction of novel benchmarks. While answers in certain formats (e.g., multiple choice or numerical answers) can be verified algorithmically, many benchmark tasks, from medical diagnosis to mathematical proofs, allow free-form text responses that are much more difficult to verify. These responses often include complex formats ranging from natural language to multi-part LaTeX formulas. The current gold standard of consensus voting by multiple human experts, while effective, is prohibitively expensive and unscalable. In response to these challenges, recent work has proposed using LLMs themselves as judges (Fig 1.1) [51] by leveraging the “generator-verifier gap”: models can often evaluate answers correctly even when they cannot generate answers correctly [ 26,35]. Still, numerous concerns have been raised regarding the accuracy of such approaches, as LLM judges are known to suffer from various biases, including position bias, length bias, and self-preference [ 22]. Various mechanisms have been shown to improve LLM judging empirically , including taking the average vote from an LLM jury [ 41], chain of thought, self-consistency [ 43], inclusion of a grading rubric [ 8,24], and LLM debate [ 9,5]. Despite these concerns, the theoretical limits of LLM-as-a-judge are not well understood. This work addresses a fundamental question at the core of LLM judging: Under what conditions can we recover the gold-standard (true) rankings between LLM candidates using one or more LLM judges, without access to gold-standard labels? We approach this question through a novel geometric 1University of California, San Francisco 2Stanford University Preprint. Under review. Figure 1.1: LLM judge workflow: For each benchmark question, LLM
|
https://arxiv.org/abs/2505.21972v1
|
judges score each candidate’s answer according to a rubric. Candidates are ranked based on their judge-assigned scores. Shaded boxes indicate cases where the same LLM serves as both candidate and judge (self-judging). framework, representing judges and candidates as points on a probability simplex. By visualizing the possible configurations of judges versus candidates, we can establish intuitive visual proofs about ranking identifiability. The analysis reveals positive and negative findings. On the positive side, when tasks are scored on two levels (e.g., correct versus incorrect), true rankings can be identified under minimal assumptions, even if absolute scores of LLM candidates cannot be identified. However, for three or more scoring levels (e.g., Likert scales)—common in many benchmarks—true rankings are non-identifiable without stronger assumptions or additional prior knowledge. These theoretical results underscore how ranking uncertainty stems from two sources: epistemic uncertainty about which assumptions hold and aleatoric uncertainty from inherent randomness in the data [ 18]. This has important practical implications. While prior work has essentially equated ranking uncertainty with aleatoric uncertainty [ 6,29,49], it is also necessary to incorporate epistemic uncertainty. To this end, we use Bayesian inference to integrate both sources of uncertainty, where epistemic uncertainty is explicitly controlled through the prior. By varying the prior, we can conduct sensitivity analyses to understand the effect of different assumptions on ranking estimates and uncertainty. Through empirical validation across multiple benchmarks, we demonstrate that this more holistic approach to uncertainty quantification improves ranking estimates and coverage rates. The contributions of this paper are: (i) a geometric simplex formalization for visualizing LLM judges and candidates, (ii) results on the theoretical limits of recovering gold-standard rankings in the absence of gold-standard labels; (iii) a Bayesian framework that explicitly controls assumptions and prior knowledge to more holistically characterize the uncertainty and sensitivity of ranking estimates; and (iv) empirical validation showing improved ranking accuracy and substantially better uncertainty calibration compared to existing methods (increase in coverage by 30% in certain cases). 2 Related work Evaluation Metrics for LLM Outputs: Traditional approaches to evaluating LLM outputs have evolved from n-gram overlap metrics to embedding-based and learned methods [ 30,28,44]. While these metrics offer computational efficiency and some semantic understanding, they rely on reference outputs and struggle with the open-ended nature of LLM generation tasks. This limitation, combined with the high cost of human evaluation, motivated a shift toward using LLMs themselves as judges. LLM-as-a-Judge Approaches: The use of LLM-as-a-Judge has gained prominence due to its relatively low cost and high correlation with human preference [ 7,25,15]. However, there is substantial evidence that LLM judges exhibit systematic biases that can compromise evaluation reliability. Studies have demonstrated that factors such as response length, presentation order, and stylistic elements can influence LLM judges’ evaluation abilities [51, 22, 46]. Researchers have proposed various techniques to enhance the reliability of LLM judges, though these approaches typically lack theoretical grounding. 
These include averaging LLM judge scores [ 27], LLM juries [ 41,21], LLM judge debates [ 5], chain-of-thought and self-consistency [ 51], and rubrics for grading [ 8,24]. Notably, Guerdan et al. [16] discussed the limitations of
|
https://arxiv.org/abs/2505.21972v1
|
LLM judges when there is no true agreed-upon rating scale, but our work addresses a more fundamental question: even with an agreed-upon rating scale, what theoretical guarantees can we provide about ranking accuracy? Uncertainty Quantification in LLM Evaluation: Current frameworks lack uncertainty quantifica- tion that reflects both epistemic andaleatoric uncertainty, particularly in the absence of gold-standard labels. Existing approaches assume LLM judge scores accurately reflect ground truth and apply bootstrapping methods [ 14,48], which can result in undercoverage when assumptions fail. Methods like prediction-powered inference [ 6,2], conformal inference [ 20], or consensus MaxEnt models 2 [32,47] can integrate LLM judging if some subset of gold-standard labels are available for each new model and benchmark, limiting scalability. Imperfect reference standards: Using LLMs as imperfect judges parallels evaluation of medical diagnostics with imperfect reference standards [ 34,40,38,39]. Our results expand on ideas used in this field, beyond the classical setting of evaluating binary diagnostic tests to the case of multi-level ratings, which are commonly used for LLM evaluation. The geometric arguments substantially extend [12, 3, 19, 10], which focused on identifying diagnostic test performance rather than ranking. Ranking Method Based on Win-Rates: Models such as Bradley-Terry used by rating systems like Elo, along with their extensions, estimate relative abilities of candidates based on pairwise comparisons [ 4,31,17,1]. However, these models make strong parametric assumptions and do not allow for judges to exhibit systematic biases like self-preference. In contrast, this work investigates thetheoretical limits of identifiability without making strong parametric assumptions and how to adjust for inconsistent judging across candidates. 3 Identifiability and non-identifiability results We study the LLM judge pipeline shown in Figure 1.1, where LLM judges score answers by candidate LLMs to questions for a benchmark task. Formally, let QandAbe the space of free-text questions and answers, respectively. A benchmark task is defined as a tuple with a probability distribution over questions and a function s∗:Q × A 7→ { 1,···, M}that outputs the true score of question-answer (QA) pairs per an M-level scoring rubric. The KLLM candidates are represented by the (potentially stochastic) functions fk:Q 7→ A fork= 1toK. Using the shorthand S∗ k:=s∗(Q, fk(Q)), the true (mean) score for candidate fkisE[S∗ k]. Similarly, LLM judges are defined as (potentially stochastic) functions ˆsj:Q×A 7→ { 1,···, M}forj= 1toJ. The assigned score by judge jfor an answer by candidate fkis denoted ˆS(j) k:= ˆs(j) k(Q, fk(Q)). These scores form the basis for ranking candidates. 3.1 Some basic geometry Given one or more LLM judges, what are the theoretical limits for what can or cannot be determined about the performance of LLM candidates when the true scoring function s∗is unavailable? To study these limits, we assume access to infinite samples from the benchmark task so that the (multinomial) distributions of assigned scores by each judge jto each candidate fk, denoted γ(j) k, are known. Using geometric arguments, we can jointly visualize γ(j) kand the accuracy of judge jas points on a probability simplex as follows. Columns of the judge’s confusion matrix, also the conditional distribution of the judge score given the true score,
|
https://arxiv.org/abs/2505.21972v1
|
correspond to the Mpoints on the simplex θ(j) m,k= Pr ˆS(j) k= 1|S∗ k=m ,···,Pr ˆS(j) k=M|S∗ k=m ∀m= 1,···, M. (3.1) Then the distribution of judge-assigned scores for the candidate is the convex mixture γ(j) k=PM m=1πk,mθ(j) m,k, where πk,m= Pr( S∗ k=m)are the prevalences of the true scores. Prevalences ⃗ πk= (πk,1,···, πk,M)can also be interpreted as the barycentric coordinates for vertices θ(j) m,k. Geometrically, this means each candidate’s position is uniquely determined by weighting the judge vertices by the true prevalences. When an LLM judge is used, an implicit assumption is that the judge is (somewhat) consistent in how it assign scores, i.e. (3.1) does not vary too much across candidates k[36]. (Generally speaking, judges tend to be more consistent when the benchmark task is objective and relatively easy to score.) We refer to the most extreme version of this assumption “strong constancy” and a weaker version as “moderate constancy,” which we formalize below: Assumption 1. Judge ˆsjsatisfies “strong constancy” if its confusion matrix is the same across all K candidates: For each m, there is some θ(j) msuch that θ(j) m,k=θ(j) mfork= 1,···, K. Assumption 2. Judge ˆsjsatisfies “moderate constancy” if its confusion matrix is the same for all non-self candidates: For each m, there is some θ(j) msuch that θ(j) m,k=θ(j) mfor all k̸=j.1 1For simplicity, the theoretical results study the case where self-judging is defined as the same exact LLM is used to judge itself. In practice, we adjust for self-judging by disallowing each LLM judge from judging its family (e.g., Claude 3.5 Haiku is not allowed to judge Claude 3.5 Sonnet and Claude 3.7.) 3 Figure 3.2: Visualizing distribution of judge-assigned scores for candidates k= 1,2,3(orange) as a convex combination of the columns of a judge’s confusion matrix (judge vertices) (blue) under the strong constancy assumption. Left: visualization when answers are scored on 2 levels, Middle : visualization when answers are scored on 3 levels, Right : the height of the vertical projection from each candidate (i.e. height of “X”) onto the diagonal plane corresponds to its true score. Under such constancy assumptions, we can jointly visualize all candidates that are consistently scored by judge jas points within the convex hull of the judge’s vertices. (For notational ease, we omit the judge superscript when discussing individual judges.) The candidates’ barycentric coordinates become sufficient for determining their relative performance. For instance, under strong constancy in a 2-level scoring system, every candidate kfalls between θ1andθ2, where its barycentric coordinates (as well as its true mean score) are determined by the relative distance of γkto vertices θ1andθ2 (Fig 3.2 left). Similarly, under strong constancy in a 3-level scoring system, every candidate kfalls in the triangle △θ1,θ2,θ3(Fig 3.2 middle)2and its barycentric coordinates are determined by relative area of its subtriangles (Fig 3.3 c), i.e., πk,1=Area of △(γk,θ2,θ3) Area of △(θ1,θ2,θ3), π k,2=Area of △(γk,θ1,θ3) Area of △(θ1,θ2,θ3), π k,3=Area of △(γk,θ1,θ2) Area of △(θ1,θ2,θ3). Socomparing true score prevalences reduces to comparing triangle areas . Furthermore, because γk E[S∗ k] =PM m=1πk,m θm,k m ,we can visualize each candidate’s true score by
|
https://arxiv.org/abs/2505.21972v1
|
augmenting each judge vertex with the true score, drawing the triangle between these augmented vertices, and finding the intersection between the triangle and a vertical line at each candidate (Fig 3.2 right) Then the true score of each candidate corresponds to the height of this intersection. That is, comparing performance of candidates reduces to comparing heights . We note that both constancy assumptions are weaker than much of the existing literature. For instance, if one were to average judge-assigned scores directly, the implicit assumption is that LLM scores are unbiased, i.e., E[ˆSk] =E[ˆS∗ k]. This assumes the judge vertices fall in very specific locations. Using these geometric properties, we can build intuition for how candidate rankings can be determined. For instance, if the judge vertices are known, we can determine the barycentric coordinates for every candidate as long as every candidate is consistently scored by at least one judge, which then implies that we can recover the true scores for all candidates. To understand the theoretical limits of identifiability when gold-standard labels are unavailable, we examine identifiability when the judge vertices are unknown. For the following results, we present only the geometric intuition; detailed proofs are in the Appendix. We begin with 2-level scoring systems and then build up to 3+ levels. 3.2 2-level scoring systems When access to the gold-standard scoring function s∗is unavailable, the location of judge vertices generally cannot be recovered even with access to infinitely large benchmark datasets, as θ1andθ2 can be any two points along the segment (0,1)to(1,0)that envelopes the candidates (Figure 3.3 left). So in general, neither constancy assumption is sufficient for recovering the absolute values of the candidate scores E[S∗ k], even if one increased the number of judges.3 2Note that the probability of score 1 is implicit since probabilities sum to 1, allowing the 2-D simplex to be projected onto a triangle. 3There is an edge case where we canrecover rankings under the constancy assumption. If we have at least one candidate with only judge-assigned scores of 1s and another candidate with only 0s, the constancy assumption implies that the judge is perfect and the true score coincides with the judge-assigned score. 4 Figure 3.3: When gold-standard labels are unavailable, the judge performance characteristics, and thus judge vertices, are unknown. Multiple configurations are possible given candidates k=1 and k=2. Two such configurations ( θvsθ′) are shown for 2-level scoring systems (a,b) and 3-level scoring systems (c,d); a 3-D version is shown in (e). While prevalence and score rankings are the same under different configurations for the 2-level scoring system, these rankings are flipped between the two configurations for the 3-level scoring system. Nevertheless, under strong constancy, we can see that the position of each candidate along the line is sufficiently informative for the purposes of ranking. Assuming the judge doesn’t prefer to assign score 2 when the true score is 1 (known as the “label-flipping problem” [ 38]), a candidate with higher prevalence of judge-assigned scores of 2 is indeed better. Thus we canrank candidates by their relative positions along the line (or equivalently,
|
https://arxiv.org/abs/2505.21972v1
|
their relative barycentric coordinates) as long as the judges satisfy a weak monotonicity condition: Assumption 3. Thej-th judge’s probability of assigning the lowest score when the true score is equal to mis decreases with respect to m. Under moderate constancy, we use a similar idea. For candidates that can only be evaluated by a subset of judges, we can determine their position relative to candidates that can be evaluated by all judges, which can then be used to rank allcandidates. This leads to the following positive result: Theorem 1. For 2-level scoring systems, the rank of candidates is identifiable from the distribution of assigned scores if: (i) there is at least one judge that satisfies Assumptions 1 and 3 or (ii) there are at least two judges that satisfy Assumptions 2 and 3 and at least four candidates. 3.3 3+-level scoring systems The story is more complex for scoring systems with 3 or more levels. To build intuition, we begin with asking whether the score prevalences can be ranked, and then whether the true candidate scores can be ranked. We discuss the geometric intuition for 3-level scoring systems below, but the same arguments can be readily extended to more levels. (i) Ranking prevalences. WLOG, suppose we wanted to rank the prevalence of the true score m= 3. Under strong constancy, the denominators for determining the barycentric coordinates are the same, so we simply need to compare the areas of the subtriangles △(γk,θ1,θ2)(Fig 3.3 c). This boils down to comparing the distances from γk’s to the line− − →θ1θ2, which can be determined solely from the slope of this line (not the exact locations of θ1andθ2). However, in the absence of gold-standard labels, the slope of− − →θ1θ2is generally unknown, as there are an infinite number of triangles that envelope the candidates (Fig 3.3 c vs d). So neither absolute nor relative prevalences can be determined, in general. While this negative result may be disappointing, this analysis also shows that there is hope: if one had strong prior beliefs about the slope, one can determine which prevalence rankings are more likely. In addition, if candidates have extremely different distributions of judge-assigned score, the differences may be so large that the ranking is undebatable, regardless of one’s prior beliefs. Similarly, if an LLM judge satisfies the constancy assumption for a wide variety of candidates, the number of possible triangles enveloping all candidates, and thus the possible prevalence rankings, becomes very limited (see Appendix for visualizations). (ii) Ranking mean scores . Given the nonidentifiability of prevalence rankings, it is unsurprising that the ranking of true candidate scores is also nonidentifiable when gold standard labels are unavailable. To see this directly, note that the relative scores, which corresponds to the relative heights of each candidate’s vertical projection onto the augmented triangle (Fig 3.2 right), varies depending on where the judge’s vertices are located. If a judge’s vertices are shifted enough, we may flip the relative heights between candidates; this is certainly possible if candidates are closer together (Fig 3.3 e). This argument applies to both constancy
|
https://arxiv.org/abs/2505.21972v1
|
assumptions and any number of judges. While this negative 5 result again may be disappointing, there are various ways to filter down possible rankings as outlined above. These results highlight that ranking uncertainty comes not just from sampling variation (aleatoric uncertainty) but also uncertainty about which assumptions hold (epistemic uncertainty). Putting these results together, we have the following. Theorem 2. Consider grading questions across 3+ levels and suppose one has Jjudges that satisfy Assumptions 1 and 3 or Assumptions 2 and 3. Given only the distribution of judge-assigned scores, there exist candidates whose prevalence and score rankings cannot be identified. 4 Bayesian Ranking on a Simplex The identifiability analyses in Section 3 reveal the significant role of epistemic uncertainty. For binary scoring systems, ranks can be identified under relatively mild constancy and monotonicity assumptions; for 3+-level scoring systems, ranks are generally non-identifiable unless much stronger prior information is provided. As such, we use a Bayesian framework to explicitly control the assumptions and prior information used for ranking estimation and inference. In particular, we use the Bayesian prior to both take away information by exploring relaxations of the constancy assumption and injecting information by specifying likely slopes between judge vertices. We present these key pieces of the Bayesian model below. Additional implementation details and recommendations for priors are provided in the Appendix. 4.1 Probability model We model the assigned score ˆS(j) ikby judge jto candidate k’s answer to the i-th question, given its true score S∗ ik, as independent draws from a multinomial distribution with parameter θ(j) S∗ ik,k. Marginalizing out the true latent scores, the likelihood of the observed data is nY i=1JY j=1Y k̸=jhMX m=1Pr ˆS(j) ik|S∗ ik=m;θ(j) m,k | {z } =θ(j) m,k, ˆS(j) ikPr(S∗ ik=m)|{z} =πk,mi , (4.2) where nis the number of questions. To remove the influence of self-preference, note that (4.2) filters fork̸=j. Although judge scores are likely correlated in practice and (4.2) is an overt simplification of the data generating mechanism, the alternative of modeling the full joint distribution introduces additional risks without significant gain: it increases model complexity, requires additional constancy assumptions, risks model misspecification, and yet does notchange the identifiability results in Section 3, which were all based on the marginal distribution of judge-assigned scores. As shown in the Appendix, for binary scoring systems, posterior inference under (4.2) yields consistent estimators forθ(j) m,k, prevalences ⃗ πk, and candidate rankings, even if the correlation structure is ignored. 4.2 Relaxing the constancy assumption Our identifiability results rest on constancy assumptions that assume shared θ(j) mparameters. To explore how sensitive rankings are to this assumption, we introduce random effects (RE). There are two options for injecting RE: we can either allow for perturbations in judge performance (i.e. θ(j) m,k as perturbations of θ(j) m) or score prevalences (i.e. ⃗ π(j) kas perturbations of ⃗ πk). Because the latter involves fewer parameters and thus a lower risk of misspecification, we opt for the latter: Zk∼Dirichlet( δ)(candidate-specific random direction) Rj∼Beta( ωJ, J )(judge random effect) Wk∼Beta( ωK, K )(candidate random effect) ⃗ π(j) k= (1−WkRj)⃗ πk+WkRjZk,(4.3) where the perturbed
|
https://arxiv.org/abs/2505.21972v1
|
prevalences ⃗ π(j) kwould then replace ⃗ πkin(4.2) . The magnitude of REs is thus controlled through the hyperparameter ω∈[0,∞)(ω= 0implies the constancy assumption holds). 6 α(m1,m2)→(m′ 1,m′ 2) (m′ 1,m′ 2)s.t.(m1,m2)→(m′ 1,m′ 2)∼Dirichlet( ⃗β(m1,m2)) θ(j) m′ 1,m′ 2=X (m1,m2)→(m′ 1,m′ 2)θ(j) (m1,m2)α(m1,m2)→(m′ 1,m′ 2) Figure 4.4: Bayesian prior for judge vertices θm, as described using a transition matrix. The transition weights α(m1,m2)→(m′ 1,m′ 2)for edges outgoing from the same node must sum to one and are drawn from a Dirichlet prior with parameters ⃗β(m1,m2). Here we show an example for 3-levels. We recommend setting δto detect directions of interest. For instance, to detect over-inflated scores, we set δto have larger weights for higher scores (e.g., δ= [1,4,10]for a 3-level scoring system). 4.3 Injecting prior knowledge about judge quality By defining a prior over the judge vertices, we can place soft restrictions on the parameter space to exclude regions in which solutions are unlikely to lie. The identifiability results in Section 3 tell us that prevalence rankings can be improved by injecting prior knowledge about likely slopes between judge vertices. This is ideal, as one generally has prior beliefs about the relative locations of the judge vertices but not their absolute locations. By defining a prior over slopes, we minimize the information injected by the prior to only that needed to rank candidates and nothing more and, thus, minimize the risk of having misaligned/misspecified priors. We use the prior described in Figure 4.4, in which the probability weight from each node is distributed across its children nodes per the transition weights. By designing the transition matrix such that only edges from nodes (m1, m2)to(m′ 1, m′ 2)have nonzero weight if m2≤m′ 2, we ensure that Assumption 3 is satisfied. The parameters of the Dirichlet prior ⃗β(m1,m2)can then either be set based on prior belief or, in our implementation, be sampled from a hyperprior. For instance, for 3-level scoring, we constructed hyperpriors like ⃗β(1,1)= [1,1 +ρ∗βmax,1]where ρ∼Beta(1 ,1), where the hyperparameter βmaxcontrols the hyperprior over judge quality. See Appendix for details. 5 Experiments Our theoretical analysis reveals that ranking identifiability depends fundamentally on epistemic and aleatoric uncertainty. Here we evaluate these findings through Bayesian ranking on a simplex, across diverse benchmarks. We find that considering both sources of uncertainty yields more accurate rankings with better-calibrated uncertainty quantification compared to existing methods. We evaluate the LLM Judge adjudication frameworks across three types of benchmark datasets: (i)Verifiable tasks with 2-level scores : GPQA [ 33] and MMLU Pro [ 45] contain multiple-choice questions, where answers are graded on two levels (correct or incorrect). Judges evaluate candidate answers for correctness with the option to abstain when uncertain. (ii)Multi-level human-judged tasks : MTBench [ 50] evaluates multi-turn conversations while TLDR [ 42] assesses summarizations. Both datasets are assessed on multi-level Likert scales, by LLMs as well as human experts. (iii)“Semi-verifiable” tasks : Omni-MATH [ 13] contains mathematical reasoning problems with reference solutions, though there is no single ground truth answer. LLM judges are asked to evaluate answers on a 3-level scale (correct, partial credit, incorrect). To
|
https://arxiv.org/abs/2505.21972v1
|
evaluate judge adjudication methods, we designed a unified two-stage protocol. First, LLM judges evaluate candidates without access to ground truth, mirroring real-world usage. Second, we generate ground truth scores for each answer by comparing against the correct multiple choice answer on verifiable tasks and obtaining human-assigned scores for human-judged tasks. For semi-verifiable tasks, we rescore each candidate’s answer by providing the LLM judge the provided reference answer. We compare our Bayesian adjudication framework against established baselines: single judge scoring, averaging scores from multiple judges (i.e., LLM judges as a jury), bootstrap confidence intervals [48], and Bradley-Terry pairwise comparison models [ 31]. For the Bayesian inference procedure, we report results with and without adjustment for self-scoring to examine the impact of potential self-preference bias on rankings and uncertainty estimates. Bayesian inference was implemented using Stan [ 37] and is extremely fast (less than a minute on a normal laptop). For GPQA, MMLU Pro, and Omni-MATH, we considered 19 contemporary models as candidates, including Claude, 7 Table 5.1: Performance of judge adjudication methods on benchmark datasets. Bayesian ranking on a simplex is run with and without adjustment for self-scoring (adjustment indicated by adj), judge quality hyperparameter βmax= 0, and RE hyperparameter ω=0 or 1. Bayesian ranking consistently achieves both higher correlation (Corr) with ground truth and substantially better coverage rates (Cov) than comparators. Note that there is no self-scoring in TLDR. Method GPQA MMLU TLDR MTBench Omni-MATH Corr Cov Corr Cov Corr Cov Corr Cov Corr Cov Bayesian: adj,ω=0 0.818 0.852 0.822 0.931 0.885 0.917 1.000 1.000 0.773 0.561 Bayesian: adj,ω=1 0.810 0.852 0.823 0.938 0.907 0.917 1.000 1.000 0.749 0.719 Bayesian: ω=0 0.760 0.778 0.846 0.898 – – 0.971 1.000 0.749 0.649 Bayesian: ω=1 0.760 0.796 0.844 0.901 – – 0.971 1.000 0.726 0.614 Bootstrap 0.720 0.722 0.834 0.898 0.885 0.729 0.971 1.000 0.758 0.544 Bradley-Terry 0.742 0.630 0.841 0.793 0.902 0.771 0.971 1.000 0.756 0.456 Simple Average 0.720 0.093 0.834 0.168 0.885 0.271 0.971 0.833 0.758 0.158 Single Judge 0.696 0.102 0.808 0.151 0.787 0.285 0.986 0.917 0.735 0.202 GPT, Gemini, Llama, Mistral, and Qwen variants. Their answers were then judged by Claude 3.5 Haiku and GPT-4o mini. We chose these relatively weaker judges to demonstrate that the Bayesian approach remains robust even with less capable evaluators. For MTBench and TLDR, we utilize their existing candidates and judges. Complete candidate and judge lists are available in the Appendix. Performance was evaluated in terms of Spearman correlation with the ground truth rankings and coverage rates of 95% credible/confidence intervals (averaged across candidates). Because Section 3 highlighted the importance of judge constancy, all analyses are stratified by task characteristics (e.g., GPQA’s undergraduate/graduate/post graduate question levels) to maintain approximate constancy within strata. Additional experiment and implementation details can be found in the Appendix. Bayesian ranking estimates achieve higher correlations and substantially better coverage than com- parator methods (Table 5.1). This is true across all datasets and even different settings of the Bayesian model, illustrating the robustness of the approach. The jump in correlation was highest in GPQA, which is perhaps unsurprising: GPQA was designed to
|
https://arxiv.org/abs/2505.21972v1
|
be a difficult dataset, so judge quality is lower in this dataset and rank uncertainty is higher. This is in contrast to datasets like MTBench, where the candidates are so easy to distinguish that the choice of method is less important. That said, because there was evidence of self-preference in MTBench (see Appendix), the adjustment for self-preference was necessary to get perfect correlation with the truth. To better understand ranking identifiability, we visualized results from the Bayesian framework, which revealed the following: Rankings are generally quite stable across various relaxations of the constancy assumption and judge quality hyperparameters. We conducted Bayesian inference with RE hyperparameter ω=0, 1, 2, 4, and 8, corresponding to expected RE factor magnitudes (i.e., E[Wk]) of 0, 0.5, 0.67, 0.8 and 0.89, respectively. Rankings demonstrated notable stability across datasets, with Omni-MATH being the sole exception (Figure 5.5 top left). Indeed, Omni-MATH’s coverage rates reflects this instability, jumping from 0.56 to 0.72 when increasing ω=0 to 1 and reaching 0.90 at ω= 8 (Table 5.1). Thus these visualizations can help identify when ranking estimates may be sensitive to modeling assumptions. Variations in the judge quality hyperparameter βmaxproduced minimal impact, suggesting that the data itself contains sufficient information to constrain plausible judge qualities. Ranking is more uncertain in datasets that are more difficult for LLMs. The posterior distributions in Figure 5.5 (bottom) demonstrate how ranking uncertainty differs across question difficulty tiers in the GPQA dataset. We observe monotonically increasing uncertainty as difficulty increases. This pattern aligns with previous findings that an LLM’s ability to correctly answer questions from a dataset is strongly connected to its ability to judge responses for that dataset [23]. Ranking is more certain when LLM-assigned scores to candidates differ widely. To visualize can- didates and judges on a probability simplex, we collapse the benchmarks with 5-point Likert scale into three levels (see Appendix for mapping) (Figure 5.5 top right). The MTBench simplex shows well-separated candidates forming a clear performance gradient, resulting in ranking estimates that are stable due to low epistemic uncertainty. Conversely, TLDR candidates cluster more densely 8 2 4 6 8 RE Hyperparameter () 0.8750.9000.9250.9500.9751.000Mean Correlation with No RE (=0) GPQA MMLU Pro MTBench OmniMATH TLDR 1 5 10 15 20 Judge Quality Hyperparameter (max) 0.9920.9940.9960.9981.000Mean Correlation with Weak Judge (max=0) 0.0 0.5 1.0 2 (Prob. of Score 2) 0.000.250.500.751.003 (Prob. of Score 3) Claude 3.5 Haiku 0.0 0.5 1.0 2 (Prob. of Score 2) 0.000.250.500.751.003 (Prob. of Score 3) GPT-3.5 Turbo Llama 3.1 8B Mixtral 8x7B v0.1Mistral 7B v0.3 Mixtral 8x22B v0.1Qwen 2.5 7B Claude 3.5 HaikuGPT-4o miniQwen 2.5 72B Llama 3.1 70BGemini 1.5 Flash Gemini 2.0 Flash LiteGPT-4o Llama 3.1 405B Gemini 1.5 ProDeepSeek V3 Gemini 2.0 FlashClaude 3.5 Sonnet Claude 3.7 Sonnet24681012141618Rank Hard Undergrad Hard Graduate Post GraduateFigure 5.5: Visualizing uncertainty. Top left two : Sensitivity of estimated rankings when varying random effects (RE) and judge quality hyperparameters, by plotting the correlation between the estimated ranking for each hyperparameter and their base values. Top right two : Candidates visualized on the probability simplex, when using Claude 3.5 Haiku as a judge
|
https://arxiv.org/abs/2505.21972v1
|
on MTBench (left) and GPT-3.5 Turbo as a judge on TLDR (right). Blue triangles are judge configurations sampled from posterior. Bottom : Posterior distributions for candidate rankings across difficulty levels in GPQA. in different regions of the simplex, leading to much higher ranking uncertainty. These contrasting patterns underscore the need to account for both aleatoric and epistemic uncertainty. 6 Discussion This work leverages a novel geometric framework for understanding when gold-standard rankings can and cannot be recovered when one has access to imperfect LLM judges but no gold-standard labels. By visualizing both judges and candidates as points on a probability simplex, we find there is a “phase change”: rankings are generally identifiable for 2-level scoring systems but non-identifiable for 3+-level scoring systems unless prior knowledge is available. This highlights the key roles of aleatoric and epistemic uncertainty when estimating rankings. We then explore how Bayesian inference can be used to integrate both sources of uncertainty and conduct sensitivity analyses by varying the amount of epistemic uncertainty. As the Bayesian framework treats judges as black boxes, it is compatible with anyjudge ensemble. Empirically, we find that Bayesian ranking on a simplex consistently outperforms existing baselines in both correlation with and coverage of the true rankings. This work suggests the following analysis protocol for practitioners using LLM judge rankings: Recommendations for judge analysis : • Score all LLM candidates using two or more LLM judges. •Primary analysis: Perform Bayesian posterior inference on all non-self-judged scores assuming zero random effects. Obtain posterior mean and credible intervals for candidate rankings. •Constancy sensitivityanalysis: Test sensitivity to the constancy assumption by varying RE hyperparameter ω. Plot how much ranking estimates vary across different values of ω. •Judge qualitysensitivityanalysis: Assess the impact of judge quality by varying the judge quality hyperparameter βmax. Plot how much ranking estimates vary across different values of βmax. Future work. The primary goal of this work is to understand the theoretical limits of rank estimation and, thus, considered a relatively simple Bayesian model. Future directions may consider more sophisticated models that, say, incorporate the correlation structure to further improve estimation and uncertainty quantification. Other extensions include extending the Bayesian model to hybrid systems with some gold-standard labels and understanding how this model can help strategically allocate human effort to improve ranking estimates [2, 6]. Impact statement. This work highlights both potential risks and benefits of using LLMs as judges in the absence of human labels. Recommendations in this work can be used alongside benchmarking frameworks and leaderboards such as HELM [ 27] to improve uncertainty quantification and measure sensitivity to assumptions. Findings in this work can also help LLM practitioners and researchers take a more deliberate approach when leveraging these technologies to scale LLM evaluation. 9 References [1]Siavash Ameli, Siyuan Zhuang, Ion Stoica, and Michael W Mahoney. A statistical framework for ranking LLM-based chatbots. In The Thirteenth International Conference on Learning Representations , 2025. [2]Anastasios N Angelopoulos, Stephen Bates, Clara Fannjiang, Michael I Jordan, and Tijana Zrnic. Prediction-powered inference. Science , 382(6671):669–674, November 2023. 
[3]Michael A Black and Bruce A Craig. Estimating disease prevalence
|
https://arxiv.org/abs/2505.21972v1
|
in the absence of a gold standard. Stat. Med. , 21(18):2653–2669, September 2002. [4]Ralph Allan Bradley and Milton E Terry. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika , 39(3/4):324, December 1952. [5]Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu. ChatEval: Towards better LLM-based evaluators through multi-agent debate. InThe Twelfth International Conference on Learning Representations , 2024. [6]Ivi Chatzi, Eleni Straitouri, Suhas Thejaswi, and Manuel Gomez Rodriguez. Prediction- powered ranking of large language models. In The Thirty-eighth Annual Conference on Neural Information Processing Systems , November 2024. [7]Cheng-Han Chiang and Hung-Yi Lee. Can large language models be an alternative to human evaluations? Annual Meeting of the Association for Computational Linguistics , pages 15607– 15631, May 2023. [8]Databricks. Enhancing LLM-as-a-judge with grading notes. https://www.databricks.com/ blog/enhancing-llm-as-a-judge-with-grading-notes , July 2024. Accessed: 2025- 2-14. [9] Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate. arXiv [cs.CL] , May 2023. [10] Rui Duan, Ming Cao, Yang Ning, Mingfu Zhu, Bin Zhang, Aidan McDermott, Haitao Chu, Xiaohua Zhou, Jason H Moore, Joseph G Ibrahim, Daniel O Scharfstein, and Yong Chen. Global identifiability of latent class models with applications to diagnostic test accuracy studies: A gröbner basis approach. Biometrics , 76(1):98–108, March 2020. [11] Alexander R Fabbri, Wojciech Kry ´sci´nski, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir Radev. SummEval: Re-evaluating summarization evaluation. Trans. Assoc. Comput. Linguist. , 9:391–409, April 2021. [12] Stephen E Fienberg and John P Gilbert. The geometry of a two by two contingency table. J. Am. Stat. Assoc. , 65(330):694–701, June 1970. [13] Bofei Gao, Feifan Song, Zhe Yang, Zefan Cai, Yibo Miao, Qingxiu Dong, Lei Li, Chenghao Ma, Liang Chen, Runxin Xu, Zhengyang Tang, Benyou Wang, Daoguang Zan, Shanghaoran Quan, Ge Zhang, Lei Sha, Yichang Zhang, Xuancheng Ren, Tianyu Liu, and Baobao Chang. Omni-MATH: A universal olympiad level mathematic benchmark for large language models. InThe Thirteenth International Conference on Learning Representations , October 2024. [14] Harvey Goldstein and David J Spiegelhalter. League tables and their limitations: Statistical issues in comparisons of institutional performance. J. R. Stat. Soc. Ser. A Stat. Soc. , 159(3):385, 1996. [15] Jiawei Gu, Xuhui Jiang, Zhichao Shi, Hexiang Tan, Xuehao Zhai, Chengjin Xu, Wei Li, Yinghan Shen, Shengjie Ma, Honghao Liu, Yuanzhuo Wang, and Jian Guo. A survey on LLM-as-a-judge. arXiv [cs.CL] , November 2024. [16] Luke Guerdan, Solon Barocas, Kenneth Holstein, Hanna Wallach, Zhiwei Steven Wu, and Alexandra Chouldechova. Validating LLM-as-a-judge systems in the absence of gold labels. arXiv [cs.LG] , March 2025. 10 [17] Ralf Herbrich, Tom Minka, and Thore Graepel. TrueSkill ™: A bayesian skill rating system. Advances in Neural Information Processing Systems , 19, 2006. [18] Eyke Hüllermeier and Willem Waegeman. Aleatoric and epistemic uncertainty in machine learning: an introduction to concepts and methods. Mach. Learn. , 110(3):457–506, March 2021. [19] Geoffrey Jones, Wesley O Johnson, Timothy E Hanson, and Ronald Christensen. 
Identifiability of models for multiple diagnostic testing in the absence of a gold standard. Biometrics , 66(3):
|
https://arxiv.org/abs/2505.21972v1
|
855–863, September 2010. [20] Jaehun Jung, Faeze Brahman, and Yejin Choi. Trust or escalate: LLM judges with provable guarantees for human agreement. In The Thirteenth International Conference on Learning Representations , 2025. [21] Nimit Kalra and Leonard Tang. VERDICT: A library for compound LLM judge systems. [22] Ryan Koo, Minhwa Lee, Vipul Raheja, Jong Inn Park, Zae Myung Kim, and Dongyeop Kang. Benchmarking cognitive biases in large language models as evaluators. In Findings of the Association for Computational Linguistics ACL 2024 , pages 517–545, Stroudsburg, PA, USA, 2024. Association for Computational Linguistics. [23] Michael Krumdick, Charles Lovering, Varshini Reddy, Seth Ebner, and Chris Tanner. No free labels: Limitations of LLM-as-a-judge without human grounding. arXiv [cs.CL] , March 2025. [24] Yukyung Lee, Joonghoon Kim, Jaehee Kim, Hyowon Cho, and Pilsung Kang. CheckEval: Robust evaluation framework using large language model via checklist. arXiv [cs.CL] , March 2024. [25] Dawei Li, Bohan Jiang, Liangjie Huang, Alimohammad Beigi, Chengshuai Zhao, Zhen Tan, Amrita Bhattacharjee, Yuxuan Jiang, Canyu Chen, Tianhao Wu, Kai Shu, Lu Cheng, and Huan Liu. From generation to judgment: Opportunities and challenges of LLM-as-a-judge. arXiv [cs.AI] , November 2024. [26] Xiang Lisa Li, Vaishnavi Shrivastava, Siyan Li, Tatsunori Hashimoto, and Percy Liang. Bench- marking and improving generator-validator consistency of language models. In The Twelfth International Conference on Learning Representations , October 2023. [27] Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Alexander Cosgrove, Christopher D Manning, Christopher Re, Diana Acosta-Navas, Drew Arad Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri S Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Andrew Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. Holistic evaluation of language models. Transactions on Machine Learning Research , February 2023. [28] Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summariza- tion Branches Out , pages 74–81, 2004. [29] Evan Miller. Adding error bars to evals: A statistical approach to language model evaluations. arXiv [stat.AP] , November 2024. [30] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Associa- tion for Computational Linguistics - ACL ’02 , Morristown, NJ, USA, 2001. Association for Computational Linguistics. [31] P V Rao and L L Kupper. Ties in paired-comparison experiments: A generalization of the bradley-terry model. J. Am. Stat. Assoc. , 62(317):194, March 1967. 11 [32] V Raykar, Shipeng Yu, Linda H Zhao, G Hermosillo, Charles Florin, L Bogoni, and Linda Moy. Learning from crowds. J. Mach. Learn. Res. , 11(43):1297–1322, March 2010. [33] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. GPQA: A graduate-level google-proof Q&A benchmark. arXiv [cs.AI] ,
|
https://arxiv.org/abs/2505.21972v1
|
November 2023. [34] Johannes B Reitsma, Anne W S Rutjes, Khalid S Khan, Arri Coomarasamy, and Patrick M Bossuyt. A review of solutions for diagnostic accuracy studies with an imperfect or missing reference standard. J. Clin. Epidemiol. , 62(8):797–806, August 2009. [35] Juan Diego Rodriguez, Wenxuan Ding, Katrin Erk, and Greg Durrett. RankAlign: A ranking view of the generator-validator gap in large language models. arXiv [cs.CL] , April 2025. [36] Chenhui Shen, Liying Cheng, Xuan-Phi Nguyen, Yang You, and Lidong Bing. Large language models are not yet human-level evaluators for abstractive summarization. In Findings of the Association for Computational Linguistics: EMNLP 2023 , pages 4215–4233, Stroudsburg, PA, USA, December 2023. Association for Computational Linguistics. [37] Stan Development Team. Stan modeling language users guide and reference manual, 2021. [38] Ao Sun and Xiao-Hua Zhou. Estimation of diagnostic test accuracy without gold standards. Stat. Med. , 44(3-4):e10315, February 2025. [39] Jiarui Sun, Chao Tang, Wuxiang Xie, and Xiao-Hua Zhou. Nonparametric receiver operating characteristic curve analysis with an imperfect gold standard. Biometrics , 80(3):ujae063, July 2024. [40] Chinyereugo M Umemneku Chikere, Kevin Wilson, Sara Graziadio, Luke Vale, and A Joy Allen. Diagnostic test evaluation methodology: A systematic review of methods employed to evaluate diagnostic tests in the absence of gold standard - an update. PLoS One , 14(10): e0223832, October 2019. [41] Pat Verga, Sebastian Hofstatter, Sophia Althammer, Yixuan Su, Aleksandra Piktus, Arkady Arkhangorodsky, Minjie Xu, Naomi White, and Patrick Lewis. Replacing judges with juries: Evaluating LLM generations with a panel of diverse models. arXiv [cs.CL] , April 2024. [42] Michael Völske, Martin Potthast, Shahbaz Syed, and Benno Stein. TL;DR: Mining reddit to learn automatic summarization. In Lu Wang, Jackie Chi Kit Cheung, Giuseppe Carenini, and Fei Liu, editors, Proceedings of the Workshop on New Frontiers in Summarization , pages 59–63, Stroudsburg, PA, USA, 2017. Association for Computational Linguistics. [43] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. International Conference on Learning Representations , 2023. [44] Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, and Yue Zhang. PandaLM: An automatic evaluation benchmark for LLM instruction tuning optimization. arXiv [cs.CL] , June 2023. [45] Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, Tianle Li, Max Ku, Kai Wang, Alex Zhuang, Rongqi Fan, Xiang Yue, and Wenhu Chen. MMLU-pro: A more robust and challenging multi-task language understanding benchmark. arXiv [cs.CL] , June 2024. [46] Hui Wei, Shenghua He, Tian Xia, Fei Liu, Andy Wong, Jingyang Lin, and Mei Han. Systematic evaluation of LLM-as-a-judge in LLM alignment tasks: Explainable metrics and diverse prompt templates. In ICLR 2025 Workshop on Building Trust in Language Models and Applications , March 2025. [47] Peter Welinder and Pietro Perona. Online crowdsourcing: Rating annotators and obtaining cost-effective labels. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops , pages 25–32.
|
https://arxiv.org/abs/2505.21972v1
|
IEEE, June 2010. 12 [48] Minge Xie, Kesar Singh, and Cun-Hui Zhang. Confidence intervals for population ranks in the presence of ties and near ties. J. Am. Stat. Assoc. , 104(486):775–788, June 2009. [49] Qiujie Xie, Qingqiu Li, Zhuohao Yu, Yuejie Zhang, Yue Zhang, and Linyi Yang. An empirical analysis of uncertainty in large language model evaluations. In The Thirteenth International Conference on Learning Representations , October 2024. [50] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, E Xing, Haotong Zhang, Joseph E Gonzalez, and Ion Stoica. Judging LLM-as-a-judge with MT-bench and chatbot arena. Neural Inf Process Syst , abs/2306.05685, June 2023. [51] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, Hao Zhang, Joseph E Gonzalez, and Ion Stoica. Judging LLM-as-a-judge with MT-bench and chatbot arena. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track , November 2023. 13 Appendix A Additional figures Figure A.6: Example reasons why ranking may still be identifiable when there are 3+ levels. Left: Candidates have assigned scores that are very different. Right : If a judge satisfies strong constancy and has judged a lot of candidates, the range of judge performance characteristics becomes very limited. B Bayesian model B.1 Model description The full Bayesian model is specified by the plate diagram in Figure B.7. As described in Section 4.3 of the main text, Dirichlet prior over the transition weights ⃗ αmis parameterized by ⃗βm. For a setting with3true scores, the parameters are defined as follows: ⃗β(j) 1,1= [1,1 +ρ(j)βmax,1] ⃗β(j) 1,2= [β(j) 1,1,2, β(j) 1,1,3] ⃗β(j) 2,1= [1,1,1 +ρ(j)βmax] ⃗β(j) 2,2= [β(j) 2,1,2, β(j) 2,1,3], where βmaxis a hyperparameter that controls the prior over judge quality. (Note that there is no need for a hyperprior for α1,3andα2,3as they are both equal to one.) Extensions to fewer or more levels is follows a similar format (see code for exact implementation). Figure B.7: Plate diagram of the Bayesian model 14 C Implementation details Bayesian inference : We conduct posterior inference using Hamiltonian Monte Carlo (HMC), coded up using Stan [ 37]. HMC ran with 4 chains, each running 1000 iterations for warmup and another 1000 for sampling. Abstention : In practice, the judge may not always be sure what score to assign. So we give the judge the option to abstain when there are only two levels, e.g. correct or not. D Experiment configuration D.1 Benchmarks •GPQA [33]: Questions resistant to simple internet searches across STEM domains, stratified by difficulty (undergraduate, graduate, post-graduate). •MMLU Pro [45]: Enhanced professional knowledge questions across 16 domain-specific subcategories from natural sciences, social sciences, and humanities. •MTBench [50]: A conversational benchmark evaluating single- and multi-turn dialogue capabilities across diverse scenarios (creative writing, reasoning, coding, mathematics, role- playing). Human judges rated responses on a 10-point scale. For LLM judges, we ask for ratings on a simplified 5-point scale. •TLDR (aka SummEval) [ 11]: A summarization benchmark where models condense news articles into concise summaries, with human ratings across four
|
https://arxiv.org/abs/2505.21972v1
|
dimensions (relevance, consistency, fluency, coherence) on 5-point scales. •Omni-MATH [13]: A benchmark focusing on high-difficulty competition-level problems from International and National Olympiads. These problems present particular challenges for automated evaluation as solutions vary significantly in approach, notation, and presentation; multiple valid solution paths may exist for the same problem; and partial correctness must be assessed along multiple dimensions. D.1.1 Strata Definitions for Benchmark Datasets To maintain approximate judge constancy within evaluation groups, we stratify each benchmark dataset based on task characteristics that are likely to affect judge performance consistency. The stratification approach differs across benchmarks based on the available metadata and inherent structure of each dataset. GPQA Stratification: For GPQA, we stratify questions according to the difficulty levels assigned by the original question writers. This stratification yields three distinct strata: •Hard Undergraduate : Questions designed to challenge advanced undergraduate students in STEM fields •Hard Graduate : Questions requiring graduate-level knowledge and reasoning •Post Graduate : Questions at the level expected for advanced graduate students or re- searchers MMLU Pro Stratification: For MMLU Pro, we focus on a subset of subjects that include educational level metadata, allowing us to stratify by both subject domain and academic level. We consider seven major subject areas, each divided into high school and college levels where available: •Physics : High school physics, College physics •Mathematics : High school statistics, High school mathematics, College mathematics •History : High school US history, High school European history, High school world history •Economics : High school macroeconomics, High school microeconomics •Computer Science : High school computer science, College computer science •Chemistry : High school chemistry, College chemistry 15 •Biology : High school biology, College biology Omni-MATH Stratification: For Omni-MATH, we use the competition tier structure inherent in the benchmark data. Each competition is mapped to one of three tiers (T0, T1, T2) based on the typical difficulty and selectivity of the mathematical olympiad: •Tier 0 (T0) : Highest difficulty competitions including IMO, IMO Shortlist, IMC, and Putnam •Tier 1 (T1) : High-level national competitions such as USAMO, national olympiads, and team selection tests •Tier 2 (T2) : Regional and junior competitions including USAJMO, JBMO, APMO, and Balkan MO D.2 LLM judges We use two LLM Claude 3.5 Haiku (anthropic/claude-3-5-haiku-20241022) and GPT-4o Mini (gpt-4o-mini-2024-07-18) for all benchmark datasets but TLDR. For TLDR, we use the provided judge-assigned scores from older LLMs: GPT-4-0314, GPT-3.5-Turbo-0301, and Llama-2-70b-chat- hf. To mitigate position bias in evaluation, we randomized the order of candidate responses presented to judges. D.3 LLM candidates For consistent comparison across datasets, we evaluated the set of candidate LLMs shown in Ta- ble D.2. 
Table D.2: LLM candidates evaluated across all experimental settings Model Family Model Name Version/Date Experiments Anthropic Claude 3.5 Haiku claude-3-5-haiku-20241022 GPQA, MMLU Pro, Omni-MATH Anthropic Claude 3.5 Sonnet claude-3-5-sonnet-20241022 GPQA, MMLU Pro, Omni-MATH Anthropic Claude 3.7 Sonnet claude-3-7-sonnet-20250219 GPQA, MMLU Pro, Omni-MATH DeepSeek DeepSeek V3 deepseek-v3 GPQA Google Gemini 1.5 Flash gemini-1.5-flash-002 GPQA, MMLU Pro, Omni-MATH Google Gemini 1.5 Pro gemini-1.5-pro-002 GPQA, MMLU Pro, Omni-MATH Google Gemini 2.0 Flash gemini-2.0-flash-001 GPQA, MMLU Pro, Omni-MATH Google Gemini 2.0 Flash Lite gemini-2.0-flash-lite-preview-02-05
|
https://arxiv.org/abs/2505.21972v1
|
GPQA, MMLU Pro, Omni-MATH Meta Llama 3.1 405B llama-3.1-405b-instruct-turbo GPQA, MMLU Pro, Omni-MATH Meta Llama 3.1 70B llama-3.1-70b-instruct-turbo GPQA, MMLU Pro, Omni-MATH Meta Llama 3.1 8B llama-3.1-8b-instruct-turbo GPQA, MMLU Pro, Omni-MATH Meta Llama 4 Maverick 17B llama-4-maverick-17b-128e-instruct-fp8 MMLU Pro, Omni-MATH Meta Llama 4 Scout 17B llama-4-scout-17b-16e-instruct MMLU Pro, Omni-MATH Mistral AI Mistral 7B mistral-7b-instruct-v0.3 GPQA Mistral AI Mixtral 8x22B mixtral-8x22b-instruct-v0.1 GPQA Mistral AI Mixtral 8x7B mixtral-8x7b-instruct-v0.1 GPQA OpenAI GPT-4.1 gpt-4.1-2025-04-14 MMLU Pro, Omni-MATH OpenAI GPT-4.1 mini gpt-4.1-mini-2025-04-14 MMLU Pro, Omni-MATH OpenAI GPT-4.1 nano gpt-4.1-nano-2025-04-14 MMLU Pro, Omni-MATH OpenAI GPT-4o gpt-4o-2024-11-20 GPQA, MMLU Pro, Omni-MATH OpenAI GPT-4o mini gpt-4o-mini-2024-07-18 GPQA, MMLU Pro, Omni-MATH Qwen Qwen 2.5 72B qwen2.5-72b-instruct-turbo GPQA, MMLU Pro, Omni-MATH Qwen Qwen 2.5 7B qwen2.5-7b-instruct-turbo GPQA, MMLU Pro, Omni-MATH For the MTBench dataset [ 50], we assess the candidates in the provided dataset: GPT-4-0613, Claude-1, Llama-2-13B-Chat, Vicuna-13B, and Alpaca-13B. Similarly, for the TLDR (SummEval) benchmark [11], we assess the 12 provided language models. 16 D.3.1 Comparison Methods We compare our Bayesian adjudication framework against the following baseline methods: Simple Averaging: This approach computes the mean score for each candidate across all evaluations and determines rankings based on these averages. It treats each judge’s assessment with equal weight and assumes judge scores accurately reflect true performance. Single Judge Aggregation: This approach collapses distinctions between multiple judges, treating all evaluations as if they came from a single judge. It computes the mean score for each candidate across all judge evaluations, ignoring judge identity. Simple Averaging with Bootstrap Confidence Intervals: This comparator uses the bootstrap approach for population ranks proposed in [ 48] to generate confidence intervals for the simple averaging estimate. Pairwise Comparison Approach: As a representative pairwise comparison method, we an extension of the Bradley-Terry model with ties [ 31], which estimates candidate ability parameters based on win-loss-tie patterns in pairwise evaluations. We calculate confidence intervals for this approach using bootstrap. To assess the performance of our framework against these baselines, we use several evaluation metrics. We measure the accuracy of ranking estimates using Spearman’s rank correlation with ground truth rankings when available. The quality of uncertainty quantification is evaluated through coverage rates of 95% credible/confidence intervals. This combination of metrics allows us to evaluate both the point estimate accuracy and the calibration of uncertainty in our framework’s predictions, providing a holistic assessment of each methods performance in the evaluation tasks. D.4 Prompts Below are the prompts used for evaluating candidate answers in each of our experiments. Where possible, we use structured generation to produce JSON structured outputs. D.4.1 Binary Verification Judge Prompt For the GPQA and MMLU Pro dataset experiments where we have multiple-choice questions with ground truth answers, we use the following prompt: You are evaluating candidate answers to a multiple-choice question. - Consistency: How well the candidate’s explanation aligns with their final multiple choice selection (1-5 scale). 
* 1 = The explanation contradicts the selected answer * 2 = Major disconnects between explanation and selected answer * 3 = Explanation partially supports the answer with some inconsistencies * 4 = Explanation mostly supports the answer with minor inconsistencies *
|
https://arxiv.org/abs/2505.21972v1
|
5 = Explanation perfectly aligns with and justifies the selected answer - Accuracy: Did the candidate select the correct answer choice? (-1 = no, 1 = yes, 0 = unsure) * Provide a concise explanation referencing key facts or reasoning that makes the answer correct or incorrect <QUESTION> [[question]] </QUESTION> [[candidates_section]] Respond with a JSON object containing evaluations for all candidates and ensure that your JSON response: - Uses the exact structure provided below - Includes only the evaluation without additional preamble or commentary - Properly escapes any special characters in the reasoning strings 17 { "evaluations" : [ { "model_id": "1", "consistency": { "reasoning": str, "score": int }, "accuracy": { "reasoning": str, "score": int } }, ... ] } In the actual implementation, the [[candidates_section]] placeholder is dynamically populated with candidate answers and explanations using the following format: <CANDIDATE #{i} ANSWER> {candidate.get(answer_key)} </CANDIDATE ANSWER> <CANDIDATE #{i} EXPLANATION> {rationale} </CANDIDATE #{i} EXPLANATION> where {i} is the candidate number, {candidate.get(answer_key)} retrieves the candidate’s multiple-choice answer, and {rationale} contains their explanation for that answer. D.4.2 Human Judgement Prompts MTBench Judge Prompts : For MTBench, we use a prompt similar to the original prompt given to human judges as described in [50] for the single-turn and two-turn cases. Similar to the binary tasks, we replace the [[question]] and[[candidates_section]] placeholders with the prompt and the conversation history using the following format: Please act as an impartial judge and evaluate the quality of the responses provided by AI assistants to the user question displayed below. Your evaluation should consider factors such as the helpfulness, relevance, accuracy, depth, creativity, and level of detail of their responses. Avoid any position biases and ensure that the order in which the responses were presented does not influence your decision. Do not allow the length of the responses to influence your evaluation. Do not favor certain names of the assistants.Be as objective as possible. Rate the response on a scale of 1 to 5 (1=Very Bad, 5=Very Good), along with the reasoning. Initial User Question: <PROMPT> [[question]] </PROMPT> Assistant Conversation(s): [[candidates_section]] { "evaluations" : [ { "model_id": "1", "overall": { "reasoning": str, "score": int 18 } }, { "model_id": "2", "overall": { "reasoning": str, "score": int } }, { "model_id": "3", "overall": { "reasoning": str, "score": int } } ] } The candidate presentation format differs between single-turn and multi-turn evaluations. For single- turn interactions, we present only the initial response: <CANDIDATE #{i}> {response1} </CANDIDATE #{i}> For two-turn interactions, we present the complete conversation history with delineation between turns: <CANDIDATE #{i}> <TURN 1> [User Prompt] {prompt1} [Assistant Response] {response1} </TURN 1> <TURN 2> [User Prompt] {prompt2} [Assistant Response] {response2} </TURN 2> </CANDIDATE #{i}> In cases where a candidate fails to respond to the second turn, we explicitly note this absence: <TURN 2> [No second turn response provided] </TURN 2> TLDR (SummEval) Judge Prompts : For the TLDR (SummEval) dataset, we use the same evaluation framework as in [ 11] that assesses news article summarization quality across four distinct criteria: relevance, consistency, fluency, and coherence. Each dimension is evaluated on a 5-point Likert
|
https://arxiv.org/abs/2505.21972v1
|
scale with specific definitions provided to ensure consistent interpretation: Instructions: In this task you will evaluate the quality of summaries written for a news article. You will be shown the original article and [[num_candidates]] candidate summaries. To correctly solve this task, follow these steps: 1. Carefully read the original news article provided below. 19 2. Read the candidate summaries presented in the <CANDIDATE #i ANSWER> sections. 3. Rate each summary on a scale from 1 (very low) to 5 (very high) based on its relevance, consistency, fluency, and coherence. Note that summaries that are very similar on an axis may receive the same score. Definitions: * Relevance: The rating measures how well the summary captures the key points of the article. Summaries in which all and only the important aspects are contained will receive the highest rating. * Consistency: The rating measures whether the facts in the summary are consistent with the facts in the original article. The summary should stay true to the facts reported and not make up untrue information. * Fluency: This rating measures the quality of individual sentences-are they well- written and grammatically correct? * Coherence: This rating measures the quality of all sentences collectively-do they fit together and sound natural? Consider the quality of the summary as a whole. Original news article: [[question]] Candidate Summaries: [[candidates_section]] Now provide your scores in the following JSON format. Ensure your response is a single JSON object, starting with {{ and ending with }}, and includes evaluations for all [[num_candidates]] candidates: {{ "evaluations": [ // Evaluation for Candidate #1 {{ "model_id": "1", // Corresponds to Candidate #1 "relevance": {{ "reasoning": "Provide your reasoning for the relevance score here.", "score": int // Score from 1 to 5 }}, "consistency": {{ "reasoning": "Provide your reasoning for the consistency score here.", "score": int // Score from 1 to 5 }}, "fluency": {{ "reasoning": "Provide your reasoning for the fluency score here.", "score": int // Score from 1 to 5 }}, "coherence": {{ "reasoning": "Provide your reasoning for the coherence score here.", "score": int // Score from 1 to 5 }} }}, // Add evaluations for Candidate #2, #3, ... up to #[[num_candidates]] following the same structure // Example for Candidate #2: /* {{ "model_id": "2", // Corresponds to Candidate #2 "relevance": {{ "reasoning": "...", "score": int }}, "consistency": {{ "reasoning": "...", "score": int }}, "fluency": {{ 20 "reasoning": "...", "score": int }}, "coherence": {{ "reasoning": "...", "score": int }} }} */ // ... other candidates ... ] }} Here we fill in the [[question]] and[[candidates_section]] placeholders with the news article and the candidate summaries with the same format as the other datasets. D.4.3 Semi-verifiable task Judge Prompt For the Omni-MATH dataset experiments, we implement a two-stage evaluation process that addresses the unique challenges of assessing mathematical reasoning when multiple solution paths may be valid. This approach allows us to measure both standalone solution quality and alignment with reference solutions. Stage 1: Evaluation Without Ground Truth : In the first stage, judge LLMs evaluate candidate solutions based solely on their mathematical correctness without access to
|
https://arxiv.org/abs/2505.21972v1
|
reference answers. This mimics how human experts might evaluate mathematical work without preconceived notions of the "correct" approach. The prompt emphasizes accuracy assessment on a 3-point scale (-1 for incorrect, 0 for partially correct, 1 for correct), with explicit instructions to use the middle category sparingly. The full prompt is as follows: Instructions: Evaluate the quality of candidate answers to mathematical questions. You will be shown the original question and [[num_candidates]] candidate answers. To correctly solve this task, follow these steps: 1. Carefully read the original question to understand what is being asked. 2. Read each candidate answer carefully. 3. Rate each answer according to the criteria below based on general mathematical knowledge and reasoning. 4. Provide clear justification for each score with specific references to the candidate’s answer. Rate each answer using the following criteria: ### Accuracy Assessment (1 for correct, 0 for partially correct/borderline, -1 for incorrect) Based on your mathematical knowledge, how accurate is the candidate answer? Strive to categorize answers as either Correct (1) or Incorrect (-1). Reserve the Partially Correct/Borderline (0) score for answers that contain significant correct elements but also notable errors or omissions, making a definitive Correct/Incorrect judgment difficult, or for answers that are technically correct but incomplete in a way that affects the final conclusion. * 1 (Correct): The answer is mathematically sound, reaches a valid conclusion, and is substantially free of errors. * 0 (Partially Correct / Borderline): The answer contains significant correct elements but also notable errors or omissions preventing a clear "Correct" score OR the answer is technically correct but misses key steps or context, making it significantly less complete. Use this score sparingly. * -1 (Incorrect): The answer contains significant mathematical errors or reaches an incorrect conclusion. Question: [[question]] 21 Candidates Summaries: [[candidates_section]] Respond with a JSON object containing evaluations for all candidates and ensure that your JSON response: - Uses the exact structure provided below - Includes only the evaluation without additional preamble or commentary - Properly escapes any special characters in the reasoning strings - Always output the reasoning before providing a final score { "evaluations" : [ { "model_id": "1", "accuracy": { "reasoning": str, "score": int } }, ... ] } Stage 2: Evaluation With Ground Truth Reference : In the second stage, judge LLMs re-evaluate candidate solutions with access to reference solutions. This provides a benchmark for assessing alignment with established approaches while still allowing for alternative valid solution paths. The prompt maintains the same 3-point scale but refocuses the evaluation criteria on comparison with the reference solution. The full prompt is as follows: Instructions: Evaluate the quality of candidate answers to mathematical questions. You will be shown the original question, the ground truth reference answer, and [[num_candidates]] candidate answers. To correctly solve this task, follow these steps: 1. Carefully read the original question. 2. Carefully read the ground truth reference answer to understand the correct approach and solution. 3. For each candidate answer: - Read the entire response - Evaluate it against the ground truth reference answer - Score it according to the criteria below
|
https://arxiv.org/abs/2505.21972v1
|
- Provide clear justification for each score with specific references to both the candidate answer and ground truth Rate each answer using the following criteria relative to the ground truth reference answer: ### Accuracy Assessment (1 for correct, 0 for partially correct/borderline, -1 for incorrect) Based on the reference answer, how accurate is the candidate answer? Strive to categorize answers as either Correct (1) or Incorrect (-1). Reserve the Partially Correct/Borderline (0) score for answers that contain significant correct elements but also notable errors or omissions, making a definitive Correct/Incorrect judgment difficult, or for answers that are technically correct but incomplete in a way that affects the final conclusion compared to the reference. * 1 (Correct): The answer reaches the same mathematical conclusion as the reference answer (even if using a different valid approach) and is substantially free of errors. * 0 (Partially Correct / Borderline): The answer contains significant correct elements but also notable errors or omissions preventing a clear "Correct" score OR the answer is technically correct but misses key steps or context 22 provided in the reference, making it significantly less complete. Use this score sparingly. * -1 (Incorrect): The answer reaches a different conclusion from the reference answer or contains significant mathematical errors that invalidate the result. Question: [[question]] Ground Truth Reference Answer: [[ground_truth_answer]] Candidates Summaries: [[candidates_section]] Respond with a JSON object containing evaluations for all candidates and ensure that your JSON response: - Uses the exact structure provided below - Includes only the evaluation without additional preamble or commentary - Properly escapes any special characters in the reasoning strings - Always output the reasoning before providing a final score { "evaluations" : [ { "model_id": "1", "accuracy": { "reasoning": str, "score": int } }, ... ] } Combined Analysis : This two-stage evaluation approach enables separate analyses of intrinsic solution quality and refer- ence alignment. For our primary analysis, we use the second-stage (reference-based) evaluations as pseudo-ground truth when comparing our Bayesian ranking methods against baselines. The first-stage evaluations provide valuable insights into the judge’s standalone mathematical reasoning capabilities, allowing us to assess how often judges can identify correct solutions without reference answers. We hypothesize that the delta between stage one and stage two evaluations provides a useful signal about the difficulty of problems and the capabilities of both candidate models and judge models. In the actual implementation, the [[question]] ,[[ground_truth_answer]] , and [[candi- dates_section]] placeholders are dynamically populated with the mathematical problem state- ment, reference solution, and candidate solutions, respectively. The candidate solutions are presented in the same format as in the other experimental settings. D.5 MTBench self-preference Table D.3 presents the frequency of scores (ranging from 1, low, to 5, high) assigned by the two LLM judges—Claude 3.5 Haiku and GPT-4o mini—to various candidate LLMs based on their responses to two-turn questions from the MTBench dataset. The results suggest potential self-preference biases in the LLM-based evaluations. 
The Claude 3.5 Haiku judge awarded its predecessor, Claude v1, a high frequency of top scores (48 instances of ’5’), surpassing other models like GPT-4 (40 instances of ’5’).
|
https://arxiv.org/abs/2505.21972v1
|
The GPT-4o mini judge assigned an equal number of perfect ’5’ scores (50 instances each) to both its own family model, GPT-4, and to Claude v1. While GPT-4o mini gives the same number of top scores to GPT-4 and Claude, the scoores from the Claude judge suggests that models tend to receive more favorable evaluations from judges within the same model family, highlighting a self-preference. 23 Table D.3: Distribution of scores (1-5) assigned by Claude 3.5 Haiku and GPT-4o mini judges to candidate LLMs on the MTBench two-turn benchmark. Cell values represent the frequency of each score. Claude 3.5 Haiku GPT-4o mini Model Name 1 2 3 4 5 1 2 3 4 5 Llama 13B 29 32 11 5 0 43 24 15 0 0 Alpaca 13B 3 33 33 8 3 17 25 29 10 0 Vicuna 13B v1.2 1 7 24 27 19 7 12 19 25 18 Claude v1 0 3 2 21 48 1 6 6 18 50 GPT-3.5 Turbo 0 2 11 38 20 1 7 14 23 37 GPT-4 0 1 6 26 40 0 4 8 20 50 D.6 Score Mapping for Simplex Visualizations For the probability simplex visualizations of 3+-level Likert scale datasets, we map the original 5-point scales to 3-point scales to enable more interpretable representations within the simplex. The mapping functions f:{0,1,2,3,4,5} 7→ { 1,2,3}are defined as follows: TLDR mapping: fTLDR(s) = 1,ifs∈ {1,2} 2,ifs∈ {0,3,4} 3,ifs= 5 MTBench mapping: fMTBench (s) = 1,ifs∈ {1,2} 2,ifs∈ {0,3} 3,ifs∈ {4,5} The major difference between these mapping is the treatment of score 4. For TLDR, we group it with score 3 in the middle category, while for MTBench, we group it with score 5 in the top category. These different groupings reflect the empirical distribution patterns observed in each dataset. MTBench evaluations exhibit more separation between high-performing candidates, while TLDR shows finer distinctions between middle performing candidates. E Proofs We present the proofs for the main results in the paper. For the identifiability results, recall that the assumptions are as follows: Assumption 4. Judge ˆsjsatisfies “strong constancy” if its confusion matrix is the same across all K candidates: For each m, there is some θ(j) msuch that θ(j) m,k=θ(j) mfork= 1,···, K. Assumption 5. Judge ˆsjsatisfies “moderate constancy” if its confusion matrix is the same for all non-self candidates: For each m, there is some θ(j) msuch that θ(j) m,k=θ(j) mfor all k̸=j. Assumption 6. Thej-th judge’s probability of assigning the lowest score when the true score is equal to mdecreases with respect to m. E.1 Proof for Theorem 1 Proof for Theorem 1(i). By Assumption 1, we have θ1= Pr ˆSk= 1|S∗ k= 1 , θ 0= Pr ˆSk= 1|S∗ k= 0 for all k= 1,···, K. Moreover, by Assumption 6, we have that θ1> θ0. 24 For candidate k, let πk= Pr S∗ k= 1 , γ k= Pr ˆSk= 1 . Then γk=πkθ1+ (1−πk)θ0. For any set of Kcandidates with non-equal prevalences of the judge-assigned scores, suppose we have that WLOG that the ordering indices k1,···, kKare such that γk1> γk2>···>
|
https://arxiv.org/abs/2505.21972v1
|
γkK. This implies that θ0+πk1(θ1−θ0) > θ0+πk2(θ1−θ0) >···> θ0+πkK(θ1−θ0) which implies that πk1> πk2>···> πkK. Proof for Theorem 1(ii). WLOG, let the candidates who are not also judges (referred to as core candidates ) have indices k= 3,4,···, J. Let the two LLMs who are both judges and candidates be k=j∈ {1,2}(reffered to as judge-candidates ). Per Assumption 2, we have for each judge j= 1,2that there exists θ(j) 1> θ(j) 0such that θ(j) 1= Pr ˆSk= 1|S∗ k= 1 , θ(j) 0= Pr ˆSk= 1|S∗ k= 0 , for all core candidates k. Applying our result for Theorem 1(i), we can rank core candidates k= 3 andk= 4 using their ranking on the line segment from θ(1) 1toθ(1) 0(or the line segment from θ(2) 1toθ(2) 0). Then all core-candidates kcan then be jointly ranked (including k= 3,4) by assessing: γ(1) k−γ(1) k γ(1) 4−γ(1) 3=πk−π3 π4−π3. We can rank judge-candidates in a similar way, using their non-self-judged position. Specifically, for candidate k= 1, we use its score distribution from judge j= 2, i.e.γ(2) 1= Pr( ˆS(2) 1= 1) , relative to candidates k= 3andk= 4to compute γ(2) 1−γ(2) 3 γ(2) 4−γ(2) 3=π1−π3 π4−π3. Likewise, for candidate k= 2, we use its score distribution from judge j= 1, i.e.γ(1) 2= Pr( ˆS(1) 2= 1), relative to candidates k= 3andk= 4to compute γ(1) 2−γ(1) 3 γ(1) 4−γ(1) 3=π2−π3 π4−π3. Thus we have γ(1) 2−γ(1) 3 γ(1) 4−γ(1) 3=π1−π3 π2−π3. By ranking these shifted and scaled judge-assigned score distributions, we can recover the true ranking between all candidates. E.2 Proof for Theorem 2 We provide the proof for three levels. It can be easily extended to 3+ levels. Proof for Theorem 2. We first prove Theorem 2 for a single judge. It suffices to show that there exist two candidates for whom their relative rankings cannot be identified given only the judge-assigned score distribution. 25 Consider two candidates in the strict interior of the probability simplex, whose marginal judge- assigned score distributions are denoted γ1andγ2. Let us first consider some judge whose vertices are denoted Θ = ( θ1θ2θ3), whose vertices are also strictly in the interior of the probability simplex and where Θhas full column rank. Then the distribution of the true scores (the barycentric coordinates) for candidate kis given by πk= Θ−1γk. The true scores of the two candidates are thus equal if Θ−1(γ1−γ2)⊤ 0 1 2! = 0. (E.4) Now it is possible that γ1andγ2were generated by a slightly different judge with triangle corners defined by Θ′ h= Θ + h∆, where h∈Rand∆is any matrix such that the columns sum to zero. (As long as his sufficiently small, Θ′ his a valid set of judge vertices.) If this other judge were the true judge, then the difference of the true scores between the two candidates would be given by (Θ + h∆)−1(γ1−γ2)⊤ 0 1 2! . (E.5) To prove nonidentifiability of the rankings, it thus suffices to show that for any Θ, there exists γ1, γ2,∆such that the score difference (E.5) is zero at h= 0, i.e. (E.4) holds, and
|
https://arxiv.org/abs/2505.21972v1
|
the derivative of the score difference (E.5) is nonzero, i.e. ∇h (Θ + h∆)−1(γ1−γ2)⊤ 0 1 2! ∝ Θ−1(γ1−γ2)⊤(Θ−1∆)⊤ 0 1 2! ̸= 0. (E.6) If this were to hold, then there exists some h >0such that the true score rankings between two candidates with marginal score distributions γ1andγ2from a judge with vertices Θ−h∆would be the opposite if the judge instead had vertices Θ +h∆. To find such a γ1, γ2, and ∆, let¯π= (1 3,1 3,1 3)Tandπ1= ¯π+ϵa 2,π2= ¯π−ϵa 2, where a= (1,−2,1)T, andϵ >0and small enough such that π1andπ2belong to the probability simplex. Note thatπT 11=πT 21= ¯πT1= 1since aT1= 0. Letγk= Θπkfork= 1,2. Then (E.5) = 0 ath= 0. To prove that (E.6)̸= 0 ath= 0, let u= (Θ−1)T 0 1 2! and¯u=1Tu 3. Define ∆ = ( u−¯u1)a⊤∈R3×3. Because 1⊤(u−¯u1)aT=⃗0⊤,we have constructed a ∆such that every column sums to 0. Moreover, because Θ−1(γ1−γ2) =ϵα, (E.6) simplifies to Θ−1(γ1−γ2)⊤(Θ−1∆)⊤ 0 1 2! =uT∆ϵa=uT(u−¯u1)aTaϵ =3X m=1(um−¯u)2∥a∥2ϵ ≥0. Now we prove that the inequality is in fact strict. We do this by contradiction. In particular, note that the equity holds if and only if ⃗ u= (Θ−1)T 0 1 2! =c⃗1for some c∈R. Left-multiplying all elements in the equality by Θ⊤, this is equivalent to assuming Θ⊤⃗ u= 0 1 2! =cΘ⊤⃗1. 26 However, because each column of Θis a probability vector, this would imply that (0,1,2) = c⃗1, which would be a contradiction. This implies the score derivative is strictly nonzero. Note that by adding more judges, the non-identifiability results remain the same. We can still find candidates and judges that satisfy the above conditions. E.3 Proof for asymptotic consistency of independent model As outlined in the main text, the Bayesian model assumes the probability model nY i=1JY j=1Y k̸=jhMX m=1Pr ˆS(j) ik|S∗ ik=m;θ(j) m,k | {z } =θ(j) m,k, ˆS(j) ikPr(S∗ ik=m)|{z} =πk,mi . (E.7) Here we prove that the candidate ranking that maximizes (E.7) still converges asymptotically to the true ranking as long as the true rank is identifiable, despite the probability model ignoring correlation between judge-assigned scores (and despite the fact that θandπare not identifiable). This can be viewed as an extension of traditional statistical results that the MLE for parameters that only depend on the first-moment is asymptotically consistent even when we do not correctly model the second- moments or higher. This result is stated more formally below. For simplicity, we present the result for one judge that satisfies Assumption 1. Extensions to multiple judges and to use Assumption 2 rather than 1 are straightforward. Theorem 3. Suppose Assumption 1 holds for one judge. The estimated ranking ˆξnusing scores from the judge by maximizing (4.2) converges to the true ranking ξ∗asn→ ∞ as long as: •the true ranking is identifiable from the marginals and •the true mean score for any two candidates are not exactly the same Proof for Theorem 3. Let us first consider a multinomial model of the judge-assigned scores where (γ1,···, γK)are the paramerers of the judge-assigned score distributions, i.e. max γ1,···,γKnY i=1KY k=1Pr ˆSi,k;γk . (E.8) For notational convenience, let
|
https://arxiv.org/abs/2505.21972v1
|
ℓndenote the log likelihood, i.e. ℓn(γ1,···, γK) =1 nnX i=1KX k=1log Pr ˆSi,k;γk . Because the solution to (E.8) corresponds to the MLE for a multinomial model, and standard regularity conditions apply, the estimator is asymptotically consistent. Formally, this implies that the estimated parameters ˆγk,nsatisfy Pr (ℓn(γ1,···, γK)−ℓn(γ∗ 1,···, γ∗ K)<0∀∥γ−γ∗∥ ≥a)→1 (E.9) for any a >0. Let’s analyze the optimization problem (4.2) optimized with respect to the parameters (θ, δ, ξ )where ξis the ranking of the candidate, δ∈RK,+, andπk=PK i=11{i <=ξk}δi. Let a function gdenote the mapping from a parameter set (θ, δ, ξ )describing judge and candidates to the set of marginal parameters γ1,···, γK. Under Assumption 1, there is some true set of parameters (θ∗, δ∗, ξ∗)with corresponding marginal parameters γ∗ 1,···, γ∗ K. Then because the set of models considered in (4.2) is a subset of that in (E.8), we have that Pr (ℓn(g(θ, δ, ξ ))−ℓn(g(θ∗, δ∗, ξ∗))<0∀∥g(θ, δ, ξ )−q(θ∗, δ∗, ξ∗)∥ ≥a) (E.10) ≥Pr (ℓn(γ1,···, γK)−ℓn(γ∗ 1,···, γ∗ K)<0∀∥γ−γ∗∥ ≥a). (E.11) 27 Because (E.11) converges to one for any a >0, then (E.10) converges asymptotically to one as well for all a >0. Taking a→0, we have that sup θn,δn,ξng(ˆθn, δn, ξn)−(γ∗ 1,···, γ∗ K)→p0. (E.12) Furthermore, because we assumed that (i) the true ranking ξ∗is identifiable from the marginals and (ii) no two candidates have the same exact true score, then the mapping from candidate marginals (γ1,···, γK)to candidate rankings is unique and locally constant at (γ∗ 1,···, γ∗ K). Then by the con- tinuous mapping theorem, we have that the estimated rankings ˆξnwhen maximizing (4.2) converges asymptotically to the true rankings ξ∗. 28
|
https://arxiv.org/abs/2505.21972v1
|
arXiv:2505.21981v1 [cs.RO] 28 May 2025Learning Compositional Behaviors from Demonstration and Language Weiyu Liu1*, Neil Nie1*, Ruohan Zhang1, Jiayuan Mao2†, Jiajun Wu1† 1Stanford University2MIT Abstract: We introduce Behavior from Language and Demonstration ( BLADE ), a framework for long-horizon robotic manipulation by integrating imitation learning and model-based planning. BLADE leverages language-annotated demonstrations, extracts abstract action knowledge from large language models (LLMs), and con- structs a library of structured, high-level action representations. These represen- tations include preconditions and effects grounded in visual perception for each high-level action, along with corresponding controllers implemented as neural network-based policies. BLADE can recover such structured representations auto- matically, without manually labeled states or symbolic definitions. BLADE shows significant capabilities in generalizing to novel situations, including novel initial states, external state perturbations, and novel goals. We validate the effectiveness of our approach both in simulation and on a real robot with a diverse set of objects with articulated parts, partial observability, and geometric constraints. Keywords: Manipulation, Planning Abstractions, Learning from Language 1 Introduction Developing autonomous robots capable of completing long-horizon manipulation tasks is a significant milestone. We want to build robots that can directly perceive the world, operate over extended periods, generalize to various states and goals, and are robust to perturbations. A promising direction is to combine learned policies with model-based planners, allowing them to operate on different time scales. In particular, imitation learning-based methods have proven highly successful in learning policies for various “behaviors,” which usually operate over a short time span [e.g., 1]. To solve more complex and longer-horizon tasks, we can compose these behaviors by planning in abstract action spaces [2–4], in latent spaces [5], or via large pre-trained models such as large language models [6]. However, one of the key challenges of all high-level planning approaches is the automatic acquisition of an abstraction for the learned “behaviors” to support long-horizon planning. The goal of this behavior abstraction learning is to build representations that describe the preconditions and effects of behaviors, to enable chaining and search. These representations should depend on the environment, the set of possible goals, and the specifications of individual behaviors. Furthermore, these representations should be grounded on high-dimensional perception inputs and low-level robot control commands. Our insight into tackling this challenge is to leverage knowledge from two sources: the low-level, mechanical understanding of robot-object contact, and the high-level, abstract understanding of object-object interactions described in language that can be extracted from language models as the knowledge source. Our framework, behavior from language and demonstration ( BLADE ), takes as input a small number of language-annotated demonstrations (Fig. 1a). It segments each trajectory based on which object is in contact with the robot. Then, it uses a large language model (LLM), conditioned on the contact sequences and the language annotations, to propose abstract behavior descriptions with preconditions and effects that best explain the demonstration trajectories. 
During training, we extract the state abstraction terms from the preconditions and effects (e.g., turned-on , ∗denotes equal contribution.†denotes equal advising. Project page and videos: https://blade-bot.github.io/. (b) Generalizations (c) Examples: Robust to Perturbations and Geometry Constraints Recovery:
|
https://arxiv.org/abs/2505.21981v1
|
move faucet head ခ place in sink Perturbation: kettle moved Kettle Filled & On Stove… Geometric constraint: stove blocked Recovery: move pot to table … Stove is not blocked Kettle In Sink Unseen Initial Condition State Perturbation Partial ObservabilityGeometric Constraints ❌ Kettle Filled & On Stove Place In Sink Move Faucet Head Turn On Faucet Turn Off Faucet Move Faucet Away Place Pot on Table Move Kettle to Stove(a) Annotated Human Demonstrations | Task: Boil Water On StoveFigure 1: BLADE , a robot manipulation framework combining imitation learning and model-based planning. (a) BLADE takes language-annotated demonstrations as training data. (b) It generalizes to unseen initial conditions, state perturbations, and geometric constraints. (c) In the depicted scenarios, BLADE recovers from perturbations such as moving the kettle out of the sink, and resolves geometric constraints including a blocked stove. aligned-with ), and learn their groundings on perception inputs. We also learn the control policies associated with each behavior (e.g., turn on the faucet ). Our model offers several advantages. First, unlike prior work that relies on manually defined state abstractions or additional state labels, our method automatically generates state abstraction labels based on the language-annotated demonstrations and LLM-proposed behavior descriptions. BLADE recovers the visual grounding of these abstractions without any additional label. Second, BLADE generalizes to novel states and goals by composing learned behaviors using a planner. Shown in Fig. 1b, it can handle various novel initial conditions and external perturbations that lead to unseen states. Third, our method can handle novel geometric constraints (Fig. 1c) and partial observability from articulated bodies like drawers. 2 Related Work Composing skills for long-horizon manipulation. A large body of model-based planning methods use manually-defined transition models [ 2,7–12] or models learned from data [ 13–18] to generate long-horizon plans. However, learning dynamics models with accurate long-term predictions and strong generalization remains challenging. A related direction is to introduce hierarchical structures into the policy models [ 19–25], where different methods can segment continuous demonstrations into short-horizon skills [ 23,26,27]. Facing the challenges in modeling action dependencies, these methods are limited to following sequentially specified subgoals. Some work addresses this issue by learning the dependencies between actions from data, but they require large-scale supervised datasets [ 28–31]. Our approach is related to methods that learn symbolic action representations [ 32– 36]; the difference is that BLADE uses a LLM to generates causal models of the environment and learns their groundings on sensory inputs. Using LLMs for planning. Many researchers have explored using LLMs for planning. Methods for direct generation of action sequences [ 37,38] can struggle to produce accurate plans [ 39,40]. Researchers have also leveraged LLMs as translators from natural language instructions to symbolic goals [ 41–44], as generalized solvers [ 45], as memory modules [ 46], and as world models [ 47,48]. To improve the planning accuracy, prior work has explored techniques including using programs [ 49,50], learning affordance functions [ 6,51], replanning [ 52], finetuning [ 53–55], embedding reasoning in a behavior tree [ 56], and VLM-based decision-making [ 57,58].BLADE shares a similar spirit as methods
|
https://arxiv.org/abs/2505.21981v1
|
using LLMs to generate planning-compatible action representations [ 59–61]. However, they make assumptions on the availability of state abstractions, while BLADE grounds LLM-generated action definitions without additional labels. Also complementary to methods that leverage these representations for skill learning [ 62,63], our approach uses them for composing skills in novel ways. 2 (:action open-cabinet-door :precondition (not (is-door-open ?door)) :effect (is-door-open ?door) :body (then (close-gripper ) (push ?door) (open-gripper ))) (a) Annotated Human Demonstrations “Open Cabinet Door” “Grab Kettle” (b) Structured Behavior RepresentationsOpen Cabinet Door ... ... Bi-Level Planning & ExecutionNovel Goal States : in(teabag, kettle) closed(cabinet-door), closed(drawer) Novel Initial ConditionContact Segmentation + LLM Proposal Learning Algorithms ... Place Cut On Stove Robot Actions Diffusion Policy“Open Drawer” “is-open(drawer)” “is-filled(cup)”... ...“is-blocked(door)”Classifier Model FalseTrue ...... FalseSkill Library Classifier Library(c) Generalization Figure 2: Overview of BLADE . (a) BLADE receives language-annotated human demonstrations, (b) segments demonstrations into contact primitives, and learns a structured behavior representation. (c) It generalizes to novel conditions by leveraging bi-level planning and execution to achieve goal states. 3 Problem Formulation We consider the problem of learning a language-conditioned goal-reaching manipulation policy. Formally, the environment is modeled as a tuple ⟨X,U,T ⟩whereXis the raw state space, Uis the low-level action space, and T:X × U → X is the transition function (which may be stochastic and unknown). Furthermore, the robot will receive observations o∈ O that may be partially observable views of the states. At test time, the robot also receives a natural language instruction ℓt, which corresponds to a set of goal states. An oracle goal satisfaction function defines whether the language goal is reached, i.e., gℓt:X → { T, F}. Given an initial state x0∈ X and the instruction ℓt, the robot should generate a sequence of low-level actions {u1, u2, ..., u H} ∈ UH. In the language-annotated learning setting, the robot has a dataset of language-annotated demonstra- tionsD. Each demonstration is a sequence of robot actions {u1, ..., u H}paired with observations {o0, ..., o H}. Each trajectory is segmented into Msub-trajectories, and natural language descriptions {ℓ1, ..., ℓ M}are associated with the segments (e.g., “ place the kettle on the stove ”). In this paper, we assume that there is a finite number of possible ℓ’s—each corresponding to a skill to learn. Directly learning a single goal-conditioned policy that can generalize to novel states and goals is challenging. Therefore, we recover an abstract state and action representation of the environment and combine online planning in abstract states and offline policy learning for low-level control to solve the task. In BLADE , behaviors are represented as temporally extended actions with preconditions and effects characterized by state predicates. Formally, we want to recover a set of predicates Pthat define an abstract state space S. We focus on a scenario where all predicates are binary. However, they are grounded on high-dimensional sensory inputs. Using P, a state can be described as a set of grounded atoms such as {kettle(A),stove(B),filled(A),on(A,B)}for a two-object scene. BLADE will learn a function Φ :O → S that maps observations to abstract states. 
In its current implementation,
|
https://arxiv.org/abs/2505.21981v1
|
BLADE requires humans to additionally provide a list of predicate names in natural language, which we have found to be helpful for LLMs to generate action definitions. We provide additional ablations in the Appendix A.2. Based on S, we learn a library of behaviors (a.k.a., abstract actions ). Each behavior a∈ A is a tuple of ⟨name,args,pre,eff, π⟩.name is the name of the action. args is a list of variables related to the action, often denoted by ?x,?y.preandeffare the precondition and effect formula defined in terms of the variables args and the predicates P. A low-level policy π:O → U is also associated with a. The semantics of the preconditions and effects is: for any state xsuch that pre(Φ(x))is satisfied, executing πatxwill lead to a state x′such that eff(Φ(x′))[64]. 4 Behavior from Language and Demonstration BLADE is a method for learning abstract state and action representations from language-annotated demonstrations. It works in three steps, as illustrated in Fig. 2. First, we generate a symbolic behavior definition conditioned on the language annotations and contact sequences in the demonstration using a large language model (LLM). Next, we learn the classifiers associated with all state predicates and the control policies, all from the demonstration without additional annotations. At test time, we use a bi-level planning and execution strategy to generate robot actions. 3 (:action turn_on_faucet :precondition (and (is-turned-off ?faucet-knob) ( is-aligned ?kettle ?faucet-head) ( is-placed-in ?kettle ?sink)) :effect (and (is-turned-on ?faucet-knob) (not ( is-turned-off ?faucet-knob))) :body (then ( close-gripper ) (push ?faucet-knob) (open-gripper )))(:action move_faucet_head_over_sink :precondition (and (is-placed-in ?kettle ?sink ) ( is-turned-away ?faucet-head) ...) :effect (and (is-aligned ?kettle ?faucet-head) (not ( is-turned-away ?faucet-head))) :body (then ( close-gripper ) (push ?faucet-head) ( open-gripper )))Move Faucet Head Turn On Faucet … Place In Sink Wait to be Filled push open-gripper close-gripper(a) Demonstrations (b) Temporal Segmentation with Contact Primitives push open-gripper close-gripper … … … transit …transit(b) Temporal Segmentation with Contact Primitives (c) Automatic Predicate Annotation (d) Behavior Description Generation with LLMs…Figure 3: Behavior Descriptions Learning. (a) A demonstration is provided along with corresponding language annotations. (b) The demonstration is segmented into a sequence of contact primitives. (c) A large language model interprets the annotation and contact sequence, generating a symbolic behavior definition. (d) The system automatically generates data to learn classifiers for state predicates. 4.1 Behavior Description Learning Given a finite set of behaviors with language descriptions {ℓ}and corresponding demonstration segments, we generate an abstract description for each ℓby querying large language models. To facilitate LLM generation, we provide additional information on the list of objects with which the robot has contact. The generated operators are further refined with abstract verification. Temporal segmentation. We first segment each demonstration (Fig. 3a) into a sequence of contact- based primitives (Fig. 3b). 
In this paper we consider seven primitives describing the interactions between the robot and other objects: open /close grippers without holding objects, move-to (x) which moves the gripper to an object, grasp (x, y) and place (x, y) which grasp and place object xfrom/onto another object y,move (x) which moves the currently holding object xandpush (x). We
|
https://arxiv.org/abs/2505.21981v1
|
leverage proprioception, i.e., gripper open state, and object segmentation to automatically segment the con- tinuous trajectories into these basis segments. For example, pushing the faucet head away involves the sequence of {close-gripper ,push,open-gripper }. This segmentation will be used for LLMs to generate operator definitions and for constructing training data for control policies. Behavior description generation with LLMs. Our behavior description language is based on PDDL [ 65]. We extend the PDDL definition to include a body section which is a sequence of contact primitives. It will be generated by the LLM based on the demonstration data. Our input to the LLM mainly contains: 1) a general description of the environment, 2) the natural language descriptions ℓassociated with the behavior itself and other behaviors that have appeared preceding or following ℓin the dataset, 3) all possible sequence of contact primitive sequences associated with ℓacross the dataset, and 4) additional instructions on the PDDL syntax, including a single PDDL definition example. We find the additional context useful. As shown in Fig. 3d, in addition to preconditions and effects of the operators, we also ask LLMs to predict a body of contact primitive sequence associated with the behavior, which we call body . We assume that each behavior has a single corresponding contact primitive sequence, and use this step to account for noises in the segmentation annotations. After LLM predicts the definition for all behavior, we will re-segment the demonstrations associated with each behavior based on the LLM-predicted body section. Behavior description refinement with abstract verification. In addition to checking for syntax errors, we also verify the generated behavior descriptions with abstract verification on the demon- stration trajectories. Given a segmented sequence of the trajectory where each segment is associated with a behavior, we verify whether the preconditions of each behavior can be satisfied by the accumu- lated effects of the previous segments. This verification does not require learning the grounding of 4 predicates and can be done at the behavior level for incorrect preconditions and effects, and at the contact primitive level for missing or incorrect contact primitives (e.g., grasp cannot be immediately followed by other grasp ). We resample behavior definitions that do not pass the verification. 4.2 Classifier and Policy Learning Given the dataset of state-action segments associated with each behavior, we train the classifiers for different state predicates and the low-level controller for each behavior. Automatic predicate annotation. We leverage allbehavior descriptions to automatically label an observation sequence {o1, ..., o H}based on its associated segmentation. In particular, at o0, we label all state predicates as “unknown.” Next, we unroll the sequence of behavior executed in ¯o. As illustrated in Fig. 3c, before applying a behavior aat step ot, we label all predicates in preatrue and predicates in effafalse. When afinishes at step ot′, we label all predicates in effa. In addition, we will propagate the labels for state predicates to later time steps until they are explicitly altered by another behavior a. In contrast to earlier methods, such as Migimatsu and Bohg [66] and Mao et al. [67], which
|
https://arxiv.org/abs/2505.21981v1
|
directly use the first and last state of state-action segments to train predicate classifiers, our method greatly increases the diversity of training data. After this step, for each predicate p∈ P, we obtain a dataset of paired observations oand the predicate value of pat the corresponding time step. Classifier learning. Based on the state predicate dataset generated from behavior definitions, we train a set of state classifiers fθ(p) :O → { T, F}, which are implemented as standard neural networks for classification. We include implementation details in Appendix A.6. In real-world environments with strong data-efficiency requirements, we additionally use an open vocabulary object detector [ 68] to detect relevant objects for the state predicate and crop the observation images. For example, only pixels associated with the object faucet will be the input to the turned-on (faucet )classifier. Policy learning. For each behavior, we also train control policies πθ(a) :O → U , implemented as a diffusion policy [ 1]. In simulation, we use a combination of frame-mounted and wrist-mounted RGB-D cameras as the inputs to the diffusion policy, while in the real world, the policy takes raw camera images as input. The high-level planner orchestrates which of these low-level policies to deploy based on the scene and states. Once trained on these diverse demonstrations of different skills, the resulting low-level policies can adapt to local changes, such as variations in object poses. 4.3 Bi-Level Planning and Execution At test time, given a novel state and a novel goal, BLADE first uses LLMs to translate the goal into a first-order logic formula based on the state predicates. Next, it leverages the learned state abstractions to perform planning in a symbolic space to produce a sequence of behaviors. Then, we execute the low-level policy associated with the first behavior, and we re-run the planner after the low-level policy finishes—this enables us to handle various types of uncertainties and perturbations, including execution failure, partial observability, and human perturbation. In implementation, we use the fast-forward heuristic to generate plans [ 69]; however, our method is planner-agnostic, and other symbolic planners (e.g., Fast-Downward [70]) are compatible. Visibility and geometric constraints are also modeled as preconditions, in addition to other object- state and relational conditions. For example, the behavior “opening the cabinet door” will have preconditions on the initial state of the door, a visibility constraint that the door is visible, and a geometric constraint that nothing is blocking the door. When those preconditions are not satisfied, the planner will automatically generate plans, such as actions that move obstacles away, to achieve them. Partial observability was handled by using the most-likely state assumption during planning and performing replanning. We include details in Appendix A.8. 5 Experiments 5.1 Simulation Experimental Setup We use the CALVIN benchmark [ 71] for simulation-based evaluations, which include teleoperated human-play data. We use the split Dof the dataset, which consists of approximately 6 hours of interactions. Annotations of the play data are generated by a script that detects goal conditions 5 Goal StateAbstract Goals Initial Condition Initial Condition Goal StateLanguage Goal: “Place All Blocks
|
https://arxiv.org/abs/2505.21981v1
|
Inside Drawer” ∀x.is-block (x)⇒in(x,drawer )Language Goal: “Find Block In Slider” is-block (x), is-blue (x), is-table (y),on(x,y)Partial Observability Language Goal: “Move Sliding Door Left” is-sliding-door (x), left (x)Geometric Constraints Goal State Initial Condition Goal State Initial Condition Blue block not visible Blue block on tablePurple cube blocking sliderSlider on the leftDrawer closed blocks outside drawerDrawer open blocks inside drawer Figure 4: Generalization Tasks in CALVIN. Examples from the three generalization tasks in the CALVIN simulation environment. Successfully completing these tasks require planning for and executing 3-7 actions. Table 1: Generalization results in CALVIN. Mean success rates with STD from three seeds are reported. BLADE outperforms latent planning, LLM, and VLM baselines in completing novel long-horizon tasks. MethodState ClassifierLatent FeasibilityGeneralization Task Abstract Goal Geometric Constraint Partial Observability HULC [72] N/A N/A 2.78±3.47 11 .67±11.55 0 .00±0.00 SayCan [6] N/A Short 23.89±1.92 1 .67±2.89 1 .67±2.89 VILA [57] N/A N/A 18.38±2.48 0 .00±0.00 4 .17±5.20 T2M-Shooting [51] Learned Long 57.78±12.29 0 .00±0.00 13 .33±1.44 Ours Learned N/A 68.33±10.14 26 .67±7.64 75 .83±3.82 T2M-Shooting [51] GT Long 61.67±5.00 0 .00±0.00 0 .83±1.44 Ours GT N/A 76.11±6.74 56 .67±16.07 70 .00±5.00 on simulator states, and there are in total 34 types of behaviors. We use RGB-D images from the mounted camera for classifier learning and partial 3D point clouds recovered from the images for policy learning. The original benchmark focuses only on evaluating individual skills and instruction following. To evaluate the ability to compositionally combine previously learned policies to solve novel tasks, we design six new generalization tasks, with examples shown in Fig. 4. Each task has a language instruction, a sampler that generates random initial states, and a goal satisfaction function for evaluation. For each task, we sample 20 initial states and evaluate all methods with three different random seeds. See Appendix B.1 for more details on the benchmark setup. Baselines. We compare BLADE with two groups of baselines: hierarchical policies with planning in latent spaces and LLM/VLM-based methods for robotic planning. For the former, we use HULC [ 72], a representative method in CALVIN, which learns a hierarchical policy from language-annotated play data using hindsight labeling. For the latter, we use SayCan [ 6], Robot-VILA [ 57], and Text2Motion [ 51]. Note that Text2Motion assumes access to ground-truth symbolic states. Hence we compare Text2Motion with BLADE in two settings: one with the ground-truth states and the other with the state classifiers learned by BLADE . See Appendix B.2 for more details on these methods. 5.2 Results in Simulation Table 1 presents the performance of different models in all three types of generalization tasks. Structured behavior representations improve long-horizon planning. We first compare to the hierarchical policy HULC in Table. 1. BLADE with learned classifiers achieves a more than 65% improvement in the success rate for reaching abstract goals while using the same language-annotated play data. We attribute this to the particular implementation of hindsight labeling in HULC being not sufficient to generate plans that chain multiple high-level actions: for example, the task of placing all blocks in the closed drawer requires chaining together a minimum
|
https://arxiv.org/abs/2505.21981v1
|
of 7 behaviors. Structured transition models learned by BLADE facilitate long-horizon planning. Both SayCan and T2M-Shooting uses learned action feasibility models for planning. Shown in Table. 1, learning accurate feasibility models directly from raw demonstration data remains a significant challenge. In our experiment, we find that first, when the LLM does not take into account state information (SayCan), using the short-horizon feasibility model is not sufficient to produce sound plans. Second, since our model learns a structured transition model, factorized into different state predicates, BLADE 6 is capable of producing more accurate longer-horizon plans than T2M-Shooting which learns long- horizon feasibility from data. Structured scene representations facilitate making feasible plans. Compared to the Robot-VILA method, which directly predicts action sequences based on the image state, BLADE first uses learned state classifiers to construct an abstract state representation. This contributes to a 49% improvement on the Abstract Goal tasks in Table 1. We observe that the pre-trained VLM used in Robot-VILA often predicts actions that are not feasible in the current state. For example, Robot-VILA consistently performs better in completing “placing all blocks in a closed drawer” than “placing all blocks in an open drawer” since it always predicts opening the drawer as the first step. Explicit modeling of geometric constraints and object visibility improves performance in these scenarios. BLADE can reason about these challenging situations without explicitly being trained in those settings. Table. 1 shows that our approach consistently outperforms baselines in these two settings. These generalization capabilities are built on the explicit modeling of geometric constraints and object visibility in behavior preconditions. BLADE can automatically propose operators for the specific environment given demonstrations. Our experiment shows that the LLM can automatically propose high-quality behavior descriptions that resemble the dependency structures among operators. For example, the LLM discovers from the given contact primitive sequences and language-paired demonstration that blocks can only be placed after the block is lifted and that a drawer needs to be opened before placing objects inside, etc. Some of these dependencies are unique to the CALVIN environment, therefore requiring the LLM to generate specifically for this domain. We provide more visualizations in the Appendix A.1. Table 2: Ablation on state classifier learning in CALVIN. Method Abstract Geometric Partial Obs. [66] 33.89±5.85 9 .17±5.20 3 .33±2.89 BLADE 68.33±10.14 26 .67±7.64 75 .83±3.82BLADE ’s automatic predicate annotation enables better classifier learning. From Table 1, we observe that having accurate state classifier models is critical for algo- rithms’ performance (GT vs. Learned). Hence, we perform additional ablation stud- ies on classifier learning. Prior work such as Migimatsu and Bohg [66] also presented a method for learning the preconditions and effects of actions from segmented trajectories and symbolic action descriptions. The key difference between BLADE and theirs is that they only use the first and last frame of each segment to supervise the learning of state classifiers. We compare the two classifier learning algorithms, given the same LLM-generated behavior definitions, by evaluating the classifier accuracy on held-out states. 
BLADE shows a 20.7% improvement in F1 (16.3% improvement for classifying object states and
|
https://arxiv.org/abs/2505.21981v1
|
38.6% improvement for classifying spatial relations) compared to the baseline model. This also translates into significant improvements in the planning success rate, as shown in Table 2. 5.3 Real World Experiments Environments. We use a Franka Emika robot arm with a parallel jaw gripper. The setup includes five RealSense RGB-D cameras, with one being wrist-mounted on the robot and the remaining positioned around the workspace. Fig. 5 shows the two domains: Make Tea and Boil Water. For each domain, we collect 85 language-annotated demonstrations using teleoperation with a 3D mouse. After segmenting the demonstrations using proprioception sensor data, an LLM is used to generate behavior descriptions. These descriptions are subsequently used for policy and classifier learning. Setup. We compare BLADE against the VLM-based baseline Robot-VILA. We omit SayCan and T2M-Shooting since they require additional training data. We first test the original action sequences seen in the demonstrations for each domain. We then test on tasks that require novel compositions of behaviors for four types of generalizations, i.e., unseen initial condition, state perturbation, geometric constraints, and partial observability. For each generalization type, we run six experiments and report the number of experiments that have been successfully completed. See Appendix D for details. Results. In Fig. 5, we show that our model is able to successfully complete at least 4/6 tasks for all generalization types in the two different domains. In comparison, Robot-VILA struggles to generate 7 Open/Close Cabinet Door Place Kettle On Stove Open Drawer Place Teabag In Kettle Place Kettle In Sink Turn On/Off FaucetMove Faucet Head Towards/Away From Sink Place Kettle On StovePlace Pot On TableMake Tea Domain Boil Water Domain 0123456 Orig. Seq Unseen Init Perturb. Geo. Constr. Partial Obs. 1/6 0/6 1/6 1/6 0/6 4/6 4/6 4/6 4/6 5/6BLADE Robot-VILACount (out of 6) 0123456 Orig. Seq Unseen Init Perturb. Geo. Constr. 0/6 0/6 0/6 0/6 6/6 6/6 6/6 5/6 Generalization TasksBoil Water Results Make Tea ResultsFigure 5: Domains and Results in Real World. Make Tea features a toy kitchen designed to simulate boiling water on a stove. The robot must assess the available space on the stove for the kettle. It also needs to manage the dependencies between actions, such as the faucet must be turned away before the kettle can be placed into the sink to avoid collisions. Boil Water involves a tabletop task aimed at preparing tea, incorporating a cabinet, a drawer, and a stove. The robot must locate the kettle, potentially hidden within the cabinet, and a teabag in the drawer. Additionally, it must consider geometric constraints by removing obstacles that block the cabinet doors. In both environments, our model significantly outperforms the VLM-based planner Robot-VILA. Open Left Door Place On Stove Open DrawerBLADE Robot-VilaPlan: “open-drawer” , “place-teabag” … (Policy cannot achieve the goal, teabag is on the stove)Open Right Door Initial Condition Place Teabag Grab Teabag Open Drawer Plan: “grab-kettle” , “grab-kettle” , “grab-kettle” … (Policy unable to achieve the goal, resulting in a loop) Grab KettleBLADE Robot-VilaGoal AchievedGoal Achieved Open Left Door Unblock Doors Place On Stove Place TeabagCup Not Visible Cup Not Visible
|
https://arxiv.org/abs/2505.21981v1
|
Cup is Visible (b) Left & Right Doors Blocked Drawer OpenGeometric Constraint: Kettle blocking the doors Cup Not Visible Initial Condition Initial Condition(a) Figure 6: Real World Planning and Execution. We show the execution traces from BLADE and Robot-VILA for two generalization tasks: (a) partial observability and (b) geometric constraints. correct plans to complete the tasks. In Fig. 6, we visualize the generated plans and execution traces of both models. In example (a), we show that BLADE can find the kettle initially hidden in the cabinet and then complete the rest of the task. In comparison, Robot-VILA directly predicts placing the teabag in the kettle when the kettle is not visible, resulting in a failure. 6 Conclusion and Discussion BLADE is a novel framework for long-horizon manipulation by integrating model-based planning and imitation learning. BLADE uses an LLM to generate behavior descriptions with preconditions and effects from language-annotated demonstrations and automatically generates state abstraction labels based on behavior descriptions for learning state classifiers. At performance time, BLADE generalizes to novel states and goals by composing learned behaviors with a planner. Compared to latent-space and LLM/VLM-based planners, BLADE successfully completes significantly more long-horizon tasks with various types of generalizations. Limitations. One limitation of BLADE is that the automatic segmentation of demonstrations is based on gripper states; more advanced contact detection techniques might be required for certain tasks such as caging grasps. We also assume the knowledge of a given set of predicate names in natural language and focus on learning dependencies between actions using the given predicates. Automatically inventing task-specific predicates from demonstrations and language annotations, possibly with the integration of vision-language models (VLMs) is an important future direction. In our experiments, we also found that noisy state classification led to some planning failures. Therefore, developing planners that are more robust to noises in state estimation is necessary. Finally, achieving novel compositions of behaviors also requires policies with strong generalization to novel environmental states, which remain a challenge for skills learned from a limited amount of demonstration data. 8 Acknowledgments This work is in part supported by Analog Devices, MIT Quest for Intelligence, MIT-IBM Watson AI Lab, ONR Science of AI, NSF grant 2214177, ONR N00014-23-1-2355, AFOSR YIP FA9550-23- 1-0127, AFOSR grant FA9550-22-1-0249, ONR MURI N00014-22-1-2740, ARO grant W911NF- 23-1-0034. We extend our gratitude to Jonathan Yedidia, Nicholas Moran, Zhutian Yang, Manling Li, Joy Hsu, Stephen Tian, Chen Wang, Wenlong Wang, Yunfan Jiang, Chengshu Li, Josiah Wong, Mengdi Xu, Sanjana Srivastava, Yunong Liu, Tianyuan Dai, Wensi Ai, Yihe Tang, the members of the Stanford Vision and Learning Lab, and the anonymous reviewers for insightful discussions. References [1]C. Chi, S. Feng, Y . Du, Z. Xu, E. Cousineau, B. Burchfiel, and S. Song. Diffusion policy: Visuomotor policy learning via action diffusion. In RSS, 2023. 1, 5, 18, 19 [2]C. R. Garrett, T. Lozano-P ´erez, and L. P. Kaelbling. PDDLStream: Integrating Symbolic Planners and Blackbox Samplers via Optimistic Adaptive Planning. In ICAPS , 2020. 1, 2 [3] D. Xu, A. Mandlekar, R. Mart ´ın-Mart ´ın, Y . Zhu, S. Savarese, and L. Fei-Fei. Deep affordance
|
https://arxiv.org/abs/2505.21981v1
|
foresight: Planning through what can be done in the future. In ICRA , 2021. [4]H. Shi, H. Xu, Z. Huang, Y . Li, and J. Wu. RoboCraft: Learning to see, simulate, and shape elasto-plastic objects in 3d with graph networks. IJRR , 43(4):533–549, 2024. 1 [5]C. Lynch, M. Khansari, T. Xiao, V . Kumar, J. Tompson, S. Levine, and P. Sermanet. Learning latent plans from play. In CoRL , 2020. 1 [6]A. Brohan, Y . Chebotar, C. Finn, K. Hausman, A. Herzog, D. Ho, J. Ibarz, A. Irpan, E. Jang, R. Julian, et al. Do as I can, not as I say: Grounding language in robotic affordances. In CoRL , 2023. 1, 2, 6 [7]S. Srivastava, E. Fang, L. Riano, R. Chitnis, S. Russell, and P. Abbeel. Combined Task and Motion Planning through an Extensible Planner-Independent Interface Layer. In ICRA , 2014. 2 [8]N. T. Dantam, Z. K. Kingston, S. Chaudhuri, and L. E. Kavraki. Incremental task and motion planning: A constraint-based approach. In RSS, 2016. [9]M. Toussaint. Logic-Geometric Programming: An optimization-based approach to combined task and motion planning. In IJCAI , 2015. [10] A. Curtis, X. Fang, L. P. Kaelbling, T. Lozano-P ´erez, and C. R. Garrett. Long-horizon manipu- lation of unknown objects via task and motion planning with estimated affordances. In ICRA , 2022. [11] D. Driess, O. Oguz, J.-S. Ha, and M. Toussaint. Deep visual heuristics: Learning feasibility of mixed-integer programs for manipulation planning. In ICRA , 2020. [12] Z. Yang, C. R. Garrett, T. Lozano-P ´erez, L. Kaelbling, and D. Fox. Sequence-based plan feasibility prediction for efficient task and motion planning. In RSS, 2023. 2 [13] C. Finn and S. Levine. Deep visual foresight for planning robot motion. In ICRA , 2017. 2 [14] S. Nair and C. Finn. Hierarchical foresight: Self-supervised learning of long-horizon tasks via visual subgoal generation. In ICLR , 2020. [15] H. Shi, H. Xu, S. Clarke, Y . Li, and J. Wu. Robocook: Long-horizon elasto-plastic object manipulation with diverse tools. In CoRL , 2023. [16] A. Simeonov, Y . Du, B. Kim, F. Hogan, J. Tenenbaum, P. Agrawal, and A. Rodriguez. A long horizon planning framework for manipulating rigid pointcloud objects. In CoRL , 2021. 9 [17] X. Lin, C. Qi, Y . Zhang, Z. Huang, K. Fragkiadaki, Y . Li, C. Gan, and D. Held. Planning with spatial and temporal abstraction from point clouds for deformable object manipulation. In CoRL , 2022. [18] Y . Du, M. Yang, P. Florence, F. Xia, A. Wahid, B. Ichter, P. Sermanet, T. Yu, P. Abbeel, J. B. Tenenbaum, et al. Video language planning. arXiv:2310.10625 , 2023. 2 [19] J. Luo, C. Xu, X. Geng, G. Feng, K. Fang, L. Tan, S. Schaal, and S. Levine. Multi-stage cable routing through hierarchical imitation learning. IEEE Transactions on Robotics , 2024. 2 [20] L. X. Shi, Z. Hu, T. Z. Zhao, A. Sharma, K. Pertsch, J. Luo, S. Levine, and C. Finn. Yell at your robot: Improving on-the-fly from language corrections. arXiv:2403.12910 , 2024. [21] S. Pirk, K. Hausman, A. Toshev, and M. Khansari. Modeling long-horizon tasks as
|
https://arxiv.org/abs/2505.21981v1
|
sequential interaction landscapes. In CoRL , 2020. [22] C. Wang, L. Fan, J. Sun, R. Zhang, L. Fei-Fei, D. Xu, Y . Zhu, and A. Anandkumar. Mimicplay: Long-horizon imitation learning by watching human play. In CoRL , 2023. [23] C. Lynch and P. Sermanet. Language conditioned imitation learning over unstructured data. In RSS, 2021. 2 [24] O. Mees, J. Borja-Diaz, and W. Burgard. Grounding language with visual affordances over unstructured data. In ICRA , 2023. 21 [25] A. Mandlekar, C. Garrett, D. Xu, and D. Fox. Human-in-the-loop task and motion planning for imitation learning. In CoRL , 2023. 2 [26] Z. Zhang, Y . Li, O. Bastani, A. Gupta, D. Jayaraman, Y . J. Ma, and L. Weihs. Universal Visual Decomposer: Long-horizon manipulation made easy. In ICRA , 2024. 2 [27] Y . Zhu, P. Stone, and Y . Zhu. Bottom-up skill discovery from unsegmented demonstrations for long-horizon robot manipulation. IEEE Robotics and Automation Letters , 7(2):4126–4133, 2022. 2 [28] Y . Zhu, J. Tremblay, S. Birchfield, and Y . Zhu. Hierarchical planning for long-horizon manipu- lation with geometric and symbolic scene graphs. In ICRA , 2020. 2 [29] D.-A. Huang, S. Nair, D. Xu, Y . Zhu, A. Garg, L. Fei-Fei, S. Savarese, and J. C. Niebles. Neural task graphs: Generalizing to unseen tasks from a single video demonstration. In CVPR , 2019. [30] D.-A. Huang, D. Xu, Y . Zhu, A. Garg, S. Savarese, F.-F. Li, and J. C. Niebles. Continuous relaxation of symbolic planner for one-shot imitation learning. In IROS , 2019. [31] Y . Huang, N. C. Taylor, A. Conkey, W. Liu, and T. Hermans. Latent space planning for multi-object manipulation with environment-aware relational classifiers. IEEE Transactions on Robotics , 2024. 2 [32] G. Konidaris, L. P. Kaelbling, and T. Lozano-Perez. From skills to symbols: Learning symbolic representations for abstract high-level planning. Journal of Artificial Intelligence Research , 61: 215–289, 2018. 2 [33] T. Silver, R. Chitnis, N. Kumar, W. McClinton, T. Lozano-P ´erez, L. Kaelbling, and J. B. Tenenbaum. Predicate invention for bilevel planning. In AAAI , 2023. [34] A. Ahmetoglu, E. Oztop, and E. Ugur. Symbolic manipulation planning with discovered object and relational predicates. arXiv preprint arXiv:2401.01123 , 2024. [35] N. Shah, J. Nagpal, P. Verma, and S. Srivastava. From reals to logic and back: Inventing symbolic vocabularies, actions and models for planning from raw data. arXiv preprint arXiv:2402.11871 , 2024. 10 [36] M. Han, Y . Zhu, S.-C. Zhu, Y . N. Wu, and Y . Zhu. Interpret: Interactive predicate learning from language feedback for generalizable task planning. In RSS, 2024. 2 [37] W. Huang, P. Abbeel, D. Pathak, and I. Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In ICML , 2022. 2 [38] T. Silver, V . Hariprasad, R. S. Shuttleworth, N. Kumar, T. Lozano-P ´erez, and L. P. Kaelbling. Pddl planning with pretrained large language models. In NeurIPS 2022 foundation models for decision making workshop , 2022. 2 [39] K. Valmeekam, M. Marquez, S. Sreedharan, and S. Kambhampati. On the planning abilities of large language models-a critical investigation.
|
https://arxiv.org/abs/2505.21981v1
|
In NeurIPS , 2023. 2 [40] S. Kambhampati, K. Valmeekam, L. Guan, K. Stechly, M. Verma, S. Bhambri, L. Saldyt, and A. Murthy. Llms can’t plan, but can help planning in llm-modulo frameworks. arXiv:2402.01817 , 2024. 2 [41] Y . Chen, J. Arkin, Y . Zhang, N. Roy, and C. Fan. AutoTAMP: Autoregressive task and motion planning with llms as translators and checkers. In ICRA , 2024. 2 [42] B. Liu, Y . Jiang, X. Zhang, Q. Liu, S. Zhang, J. Biswas, and P. Stone. LLM+P: Empowering large language models with optimal planning proficiency. arXiv:2304.11477 , 2023. [43] Y . Xie, C. Yu, T. Zhu, J. Bai, Z. Gong, and H. Soh. Translating natural language to planning goals with large-language models. arXiv:2302.05128 , 2023. [44] A. Mavrogiannis, C. Mavrogiannis, and Y . Aloimonos. Cook2ltl: Translating cooking recipes to ltl formulae using large language models. In ICRA , 2024. 2 [45] T. Silver, S. Dan, K. Srinivas, J. B. Tenenbaum, L. Kaelbling, and M. Katz. Generalized planning in PDDL domains with pretrained large language models. In AAAI , 2024. 2 [46] X. Zhu, Y . Chen, H. Tian, C. Tao, W. Su, C. Yang, G. Huang, B. Li, L. Lu, X. Wang, et al. Ghost in the Minecraft: Generally capable agents for open-world enviroments via large language models with text-based knowledge and memory. arXiv:2305.17144 , 2023. 2 [47] K. Nottingham, P. Ammanabrolu, A. Suhr, Y . Choi, H. Hajishirzi, S. Singh, and R. Fox. Do embodied agents dream of pixelated sheep: Embodied decision making using language guided world modelling. In ICML , 2023. 2 [48] S. Hao, Y . Gu, H. Ma, J. J. Hong, Z. Wang, D. Z. Wang, and Z. Hu. Reasoning with language model is planning with world model. In EMNLP , 2023. 2 [49] J. Liang, W. Huang, F. Xia, P. Xu, K. Hausman, B. Ichter, P. Florence, and A. Zeng. Code as policies: Language model programs for embodied control. In ICRA , 2023. 2 [50] I. Singh, V . Blukis, A. Mousavian, A. Goyal, D. Xu, J. Tremblay, D. Fox, J. Thomason, and A. Garg. Progprompt: Generating situated robot task plans using large language models. In ICRA , 2023. 2 [51] K. Lin, C. Agia, T. Migimatsu, M. Pavone, and J. Bohg. Text2motion: From natural language instructions to feasible plans. Autonomous Robots , 47(8):1345–1365, 2023. 2, 6 [52] M. Skreta, Z. Zhou, J. L. Yuan, K. Darvish, A. Aspuru-Guzik, and A. Garg. Replan: Robotic replanning with perception and language models. arXiv:2401.04157 , 2024. 2 [53] D. Driess, F. Xia, M. S. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson, Q. Vuong, T. Yu, et al. Palm-e: An embodied multimodal language model. arXiv:2303.03378 , 2023. 2 11 [54] Z. Wu, Z. Wang, X. Xu, J. Lu, and H. Yan. Embodied task planning with large language models. arXiv:2307.01848 , 2023. [55] J. Xiang, T. Tao, Y . Gu, T. Shu, Z. Wang, Z. Yang, and Z. Hu. Language models meet world models: Embodied experiences enhance language models. In NeurIPS , 2024. 2 [56] H. Wang, K. Kedia,
|
https://arxiv.org/abs/2505.21981v1
|
J. Ren, R. Abdullah, A. Bhardwaj, A. Chao, K. Y . Chen, N. Chin, P. Dan, X. Fan, et al. Mosaic: A modular system for assistive and interactive cooking. arXiv preprint arXiv:2402.18796 , 2024. 2 [57] Y . Hu, F. Lin, T. Zhang, L. Yi, and Y . Gao. Look before you leap: Unveiling the power of GPT-4v in robotic vision-language planning. arXiv:2311.17842 , 2023. 2, 6 [58] N. Wake, A. Kanehira, K. Sasabuchi, J. Takamatsu, and K. Ikeuchi. ChatGPT empowered long-step robot control in various environments: A case application. IEEE Access , 2023. 2 [59] L. Wong, J. Mao, P. Sharma, Z. S. Siegel, J. Feng, N. Korneev, J. B. Tenenbaum, and J. Andreas. Learning adaptive planning representations with natural language guidance. In ICLR , 2024. 2 [60] L. Guan, K. Valmeekam, S. Sreedharan, and S. Kambhampati. Leveraging pre-trained large language models to construct and utilize world models for model-based task planning. In NeurIPS , 2023. [61] P. Smirnov, F. Joublin, A. Ceravola, and M. Gienger. Generating consistent PDDL domains with large language models. arXiv:2404.07751 , 2024. 2 [62] Z. Li, K. Yu, S. Cheng, and D. Xu. League++: Empowering continual robot learning through guided skill acquisition with large language models. In ICLR 2024 Workshop on Large Language Model (LLM) Agents , 2024. 2 [63] M. Dalal, T. Chiruvolu, D. Chaplot, and R. Salakhutdinov. Plan-seq-learn: Language model guided rl for solving long horizon robotics tasks. In ICLR , 2024. 2 [64] V . Lifschitz. On the semantics of STRIPS. In M. Georgeff, Lansky, and Amy, editors, Reasoning about Actions and Plans , pages 1–9. Morgan Kaufmann, San Mateo, CA, 1987. 3 [65] C. Aeronautiques, A. Howe, C. Knoblock, I. D. McDermott, A. Ram, M. Veloso, D. Weld, D. W. SRI, A. Barrett, and D. Christianson. PDDL: The Planning Domain Definition Language, 1998. 4 [66] T. Migimatsu and J. Bohg. Grounding predicates through actions. In ICRA , 2022. 5, 7 [67] J. Mao, T. Lozano-P ´erez, J. Tenenbaum, and L. Kaelbling. PDSketch: Integrated domain programming, learning, and planning. In NeurIPS , 2022. 5 [68] S. Liu, Z. Zeng, T. Ren, F. Li, H. Zhang, J. Yang, C. Li, J. Yang, H. Su, J. Zhu, et al. Grounding Dino: Marrying dino with grounded pre-training for open-set object detection. arXiv:2303.05499 , 2023. 5, 17 [69] J. Hoffmann and B. Nebel. The FF planning system: Fast plan generation through heuristic search. JAIR , 14:253–302, 2001. 5 [70] M. Helmert. The fast downward planning system. JAIR , 26:191–246, 2006. 5 [71] O. Mees, L. Hermann, E. Rosete-Beas, and W. Burgard. Calvin: A benchmark for language- conditioned policy learning for long-horizon robot manipulation tasks. RA-L , 7:7327–7334, 2021. 5 [72] O. Mees, L. Hermann, and W. Burgard. What matters in language conditioned robotic imitation learning over unstructured data. RA-L , 7:11205–11212, 2022. 6, 21 12 [73] Y . Zhang, X. Huang, J. Ma, Z. Li, Z. Luo, Y . Xie, Y . Qin, T. Luo, Y . Li, S. Liu, et al. Recognize Anything: A strong image tagging model. In CVPR , 2024. 14 [74] M. Shridhar,
|
https://arxiv.org/abs/2505.21981v1
|
L. Manuelli, and D. Fox. Perceiver-Actor: A multi-task transformer for robotic manipulation. In CoRL , 2023. 16 [75] T.-W. Ke, N. Gkanatsios, and K. Fragkiadaki. 3D Diffuser Actor: Policy diffusion with 3D scene representations. arXiv:2402.10885 , 2024. 16 [76] Z. Zhang, Y . Li, O. Bastani, A. Gupta, D. Jayaraman, Y . J. Ma, and L. Weihs. Universal Visual Decomposer: Long-horizon manipulation made easy. In ICRA , 2024. 16 [77] W. Wan, Y . Zhu, R. Shah, and Y . Zhu. Lotus: Continual imitation learning for robot manipulation through unsupervised skill discovery. In ICRA , 2024. 16 [78] M.-H. Guo, J.-X. Cai, Z.-N. Liu, T.-J. Mu, R. R. Martin, and S.-M. Hu. PCT: Point cloud transformer. Computational Visual Media , 7:187–199, 2021. 17 [79] L. P. Kaelbling and T. Lozano-P ´erez. Hierarchical task and motion planning in the now. In ICRA , 2011. 19, 20 [80] C. R. Garrett, C. Paxton, T. Lozano-P ´erez, L. P. Kaelbling, and D. Fox. Online replanning in belief space for partially observable task and motion problems. In ICRA , 2020. 20 13 Supplementary Material for Learning Compositional Behaviors from Demonstration and Language This supplementary material provides additional details on the BLADE framework, the experiments, and qualitative examples. Section A provides a detailed description of the method, including the behavior description generation, predicate generation, abstract verification, automatic predicate annotation, classifier implementation, and policy implementation. Section B provides details on the simulation experiments, including the task design and baseline implementations. Section C provides qualitative examples of our method and baselines. Section D provides details of our setup of the real-robot experiment. Finally, Section E includes a full list of the prompts for the baselines used in the simulation experiments. A BLADE Details A.1 Behavior Description Generation with LLMs In Listing 1, we show the behavior descriptions automatically generated by the LLM for the CALVIN domain. We also show the detailed prompt to the LLM for generating the behavior description. We break down the system prompt into four parts: definitions of primitive actions (Listings 2), definitions of predicates and environment context (Listings 3), an in-context example (Listings 4), and additional instructions (Listings 5). In Listings 6, we show one example of the specific user prompt that is used to generate the behavior description for place-in-drawer . In our experiments, we find that the environment description is necessary for the LLMs to understand the context of the task. For the simulation experiment, we provide the environment description as a list of objects and brief explanations, as shown in Listings 3. However, this description can be automatically generated using image tagging models such as Recognize Anything [ 73] or general- purpose VLMs such as GPT-4V . A.2 Predicate Generation with LLMs Our algorithms are agnostic to the source of predicates and can flexibly generate action descriptions based on the given predicates. In our main experiment, we assume that the predicates for each task domain are provided in natural language. Here, we show that given the task definition and the environment context, a LLM can automatically generate the relevant predicates for the
|
https://arxiv.org/abs/2505.21981v1
|
domain. Then, we generate behavior descriptions based on the automatically generated predicates. To generate high-quality predicates and behavior descriptions, we take the following steps. First, the LLM is provided with the list of objects in the scene and the language-paired demonstration sequence and is required to generate relevant predicates for the domain. In Listing 7, we show an example of the input prompt to the LLM for the CALVIN domain. Second, the LLM is prompted with the generated predicates and is required to ground the predicates using the objects in the scene. This step helps the LLM eliminate errors in the first step, such as missing arguments. Finally, we instruct the LLM to remove semantically equivalent predicates and keep the most general ones. For example, is-on-stove (kettle) is removed by the LLM in our experiment because is-on (kettle, stove) is semantically equivalent. After the predicates are automatically generated and filtered, we proceed with the behavior description generation. In Table A1, Table A2, and Table A3, we compare the generated predicates with the predicates defined by the domain expert for the CALVIN, Boil Water, and Make Tea domains. We observe that the LLM is able to generate 28 out of 30 predicates that match closely with the expert-designed predicates. These predicates provide abstract representations for object states (e.g., is-open ,light-on ), relations between objects (e.g., in-slider ,in), and robot-centric states (e.g., holding ). The LLM incorrectly generates the predicate next-to (?x,?y) to characterize the effects of the push-left andpush-right actions, possibly due to ambiguities in the definition of the actions. For the two real-world domains, 14 we perform additional experiments to confirm that the generated predicates and behavior descriptions can be used to generate correct task plans on all nine real-world tasks presented in Section 5.3. Table A1: Comparison of Manually Defined and Automatically Generated Predicates for CALVIN. Manually Defined Automatically Generated rotated-left (?x) rotated-left (?x) rotated-right (?x) rotated-right (?x) lifted (?x) holding (?x) is-open (?x) is-open (?x) is-close (?x) is-closed (?x) is-turned-on (?x) light-on (?x) is-turned-off (?x) light-off (?x) is-slider-left (?x) slider-left (?x) is-slider-right (?x)slider-right (?x) is-on (?x,?y) on-table (?x) is-in(?x,?y) in-slider (?x),in-drawer (?x) stacked (?x,?y) on(?x,?y) unstacked (?x,?y)clear (?x) pushed-left (?x) - pushed-right (?x) - - next-to (?x,?y) Table A2: Comparison of Manually Defined and Automatically Generated Predicates for Boil Water. Manually Defined Automatically Generated Automatically Generated (Grounded) is-placed-on (?x,?y)on-table (?x) on-table (kettle), on-table (pot) is-placed-in (?x,?y) in-sink (?x) in-sink (kettle), in-sink (pot) is-blocked (?x) on-stove (?x) on-stove (pot) is-turned-away (?x) faucet-over-sink (?x,?y) faucet-over-sink (faucet, sink) is-aligned (?x,?y) faucet-over-sink (?x,?y) faucet-over-sink (faucet, sink) is-turned-on (?x) faucet-on (?x) faucet-on (faucet) is-turned-off (?x) faucet-off (?x) faucet-off (faucet) is-filled (?x) filled (?x) filled (kettle), filled (pot) holding (?x) holding (?x) holding (pot) Table A3: Comparison of Manually Defined and Automatically Generated Predicates for Make Tea. 
Manually Defined Automatically Generated Automatically Generated (Grounded) is-placed-on (?x,?y) is-on (?x,?y) is-on (kettle, stove) is-cabinet-door-open (?x) is-open (?x) is-open (left-door), is-open (right-door) is-placed-inside (?x,?y) is-in-drawer (?x) is-in-drawer (teabag) is-in-kettle (?x) is-in-kettle (teabag) is-in-cabinet-left (?x) is-in-cabinet-left (kettle) is-in-cabinet-right (?x) is-in-cabinet-right (kettle) is-drawer-open (?x) is-open (?x) is-open (drawer) is-left-cabinet-door-blocked (?x) is-blocking
|
https://arxiv.org/abs/2505.21981v1
|
(?x,?y) is-blocking (pot, left-door) is-right-cabinet-door-blocked (?x)is-blocking (?x,?y) is-blocking (pot, right-door) - is-closed (?x) is-closed (left-door), is-closed (right-door) is-closed (drawer) - is-moved-away (?x) is-moved-away (pot) A.3 Temporal Segmentation Before the generation of behavior description, we segment each demonstration into a sequence of contact-based primitives . We consider seven primitives describing the interactions between the robot and other objects: open /close grippers without holding objects, move-to (x) which moves the gripper to an object, grasp (x, y) and place (x, y) which grasp and place object xfrom/onto another object y, move (x) which moves the currently holding object xandpush (x). 15 We use a set of heuristics to automatically segment the continuous trajectories using proprioception, i.e., gripper open state, and object segmentation. Specifically, open andclose are directly detected by checking whether the gripper width is at the maximum or minimum value. grasp (x, y) and place (x, y) correspond to the other closing and opening gripper actions. move (x),push (x) and move-to (x) are matched to temporal segments between pairs of gripper actions. Their type can be inferred based on the preceding and following gripper actions. We make a simplifying assumption that the robot moves freely in space only when the gripper is fully open and pushes objects only when the gripper is fully closed. These are given as instructions to the human demonstrators. In the simulator, the arguments of the primitives are obtained from the contact state. In the real world, they are inferred from the language annotations of the actions (e.g.,“place the kettle on the stove” corresponds to place (kettle, stove)) procedurally or by the LLMs. The arguments can also be left unspecified; these arguments mainly provide additional contextual information about the target objects. In Section 4.1, we discuss that we use LLMs to predict a body of contact primitive sequence associated with each behavior description. This additional step helps account for noises in the segmentation anno- tations, which are prevalent in CALVIN’s language-annotated demonstrations. For example, the lan- guage annotation “lift-block-table” correspond to the contact sequence {move-to ,grasp ,move,place}. Based on the generated body , the behavior can be correctly mapped to {grasp ,move}and the demon- stration trajectories can then be re-segmented. This additional step is crucial for learning accurate groundings of the states and actions. Our approach to temporal segmentation are similar to keyframe-based methods like PerAct [ 74] and 3D Diffuser Actor [ 75], which rely on end-effector states (e.g., grasp and release) and velocities to segment demonstrations. However, our method differs by distinguishing between prehensile and non-prehensile manipulation through detecting whether an object is grasped. This enables the development of behaviors such as move-faucet-away , where the robot pushes the faucet head without grasping it. In our preliminary studies, we also experiment with other vision-based methods including UVD [ 76] and Lotus [ 77]. A main issue for incorporating these methods is that they provide less consistent segmentations for different occurrences of the same behavior. As we discussed in Section 6, more advanced contact detection techniques will be an important future direction for using contact primitives as a
|
https://arxiv.org/abs/2505.21981v1
|
meaningful interface between actions and language. A.4 Abstract Verification After the generation of the behavior descriptions, we verify the generated behavior descriptions by performing abstract verification on the demonstration trajectories. Given a segmented sequence of the trajectory where each segment is associated with a behavior, we verify whether the preconditions of each behavior can be satisfied by the accumulated effects of the previous behaviors. Pseudocode for this algorithm is shown in Algorithm 1. A.5 Automatic Predicate Annotation We leverage allbehavior descriptions to automatically label an observation sequence {o1, ..., o H} based on its associated segmentation. In particular, at o0, we label all state predicates as “unknown.” Next, we unroll the sequence of executed behaviors. As illustrated in Fig. 3c, before applying a behavior aat step ot, we label all predicates in preatrue and predicates in effafalse. When a finishes at step ot′, we label all predicates in effa. In addition, we will propagate the labels for state predicates to later time steps until they are explicitly altered by another behavior a. Pseudocode for this algorithm is shown in Algorithm 2. As a result, we obtain both positive and negative examples to train the binary predicate classifiers. In particular, the negative examples of a predicate come from two sources. First, the preconditions and effects of a behavior can include negated predicates. Second, during automatic predicate annotation, we label all predicates in the effects as false. 16 Algorithm 1 Abstract Verification Input: Dataset D, Behavior descriptions A 1:error counter ←a counter for sequencing errors related to each behavior 2:counter ←a counter for storing the occurrences of each behavior 3:fori←1toKdo 4: obtain a behavior sequence Di← {ai 1, ..., ai N} 5: initialize a dictionary for predicate state pred← {} 6: fort←1toNdo 7: foreach expinpreai tdo 8: (p, v)←EXTRACT PREDICATE ANDBOOL(exp) 9: ifpnot in pred then 10: pred[p]←v 11: else 12: ifpred[p]̸=vthen 13: increment error counter [ai t] 14: foreach expineffai tdo 15: (p, v)←EXTRACT PREDICATE ANDBOOL(exp) 16: pred[p]←v 17: increment counter [ai t] 18:foreachainerror counter do 19: iferror counter [a]/counter [a]>threshold then 20: regenerate the behavior description for a A.6 Classifier Implementation Based on the state predicate dataset generated from behavior definitions, we train a set of state classifiers fθ(p) :O → { T, F}, which are implemented as standard neural networks for classification. In the simulation experiment, the classifier model is based on a pre-trained CLIP model ( ViT-B/32 ). We use the image pre-processing pipeline from the CLIP model to process the input images. We use images from the static camera in the simulation. We perform one additional step of image processing to mask out the robot arm, which we find in our preliminary experiment to help avoid overfitting. We do not use the global image embedding from the CLIP model, instead we extract the patch tokens from the output of the vision transformer. We downsize the concatenated patch tokens with a multilayer perceptron (MLP) and then concatenate with word embeddings of the predicate arguments (e.g., red-block ,table ). The final embedding is then passed through a predicate-specific MLP to output
The CLIP model is frozen, while all other learnable parameters are trained.

In the real-world experiment, we find that, with more limited data than in simulation, the pre-trained CLIP model often overfits to spurious correlations in the training images (e.g., the state of the faucet becomes entangled with the location of the kettle). We also experimented with a ResNet-50 model pre-trained on ImageNet and observed similar behavior. To improve generalization, we focus the classifier on relevant objects and regions by using segmented object point clouds. We use the open-vocabulary object detector Grounding-DINO [68] to detect objects given their names; the predicted 2D bounding boxes are projected into 3D and used to extract the regions of the point cloud surrounding each object. The point-cloud-based classifier is based on the shape classification model from the Point Cloud Transformer (PCT) [78]. We concatenate the segmented object point clouds and include one additional channel to indicate the identity of each point. The PCT encodes the combined point cloud and outputs the final logit; the PCT model is trained from scratch.

Algorithm 2 Predicate Annotation
Input: Behavior sequence {a_1, ..., a_N}, Observation sequence {o_1, ..., o_H}, Descriptions A
1: propagated ← an empty list of propagated predicates
2: prev_effs ← a list for storing effects from the previous step
3: timed_preds ← an empty list of predicates associated with time steps
4: pred_obs ← an empty list for storing predicates paired with observations
5: for t ← 1 to N do
6:   // Preconditions
7:   timed_preds ← timed_preds ∪ GetTimedPredicates(pre_{a_t}, t)
8:   timed_preds ← timed_preds ∪ GetTimedPredicates(¬eff_{a_t}, t)
9:   // Propagated predicates
10:  for each p in propagated do
11:    if not Altered(p, a_t) then
12:      UpdateTime(p, t)
13:    else
14:      propagated.remove(p)
15:      timed_preds.add(p)
16:  // Effects from the previous step
17:  for each p in prev_effs do
18:    if not Altered(p, a_t) then
19:      propagated.add(p)
20:    else
21:      timed_preds.add(p)
22:  // Store effects for the next step
23:  prev_effs ← GetTimedPredicates(eff_{a_t}, t)
24: timed_preds.update(propagated)
25: timed_preds.update(prev_effs)
26: for each p in timed_preds do
27:   pred_obs.update(MatchTimedPredicateWithObservation(p, {o_1, ..., o_H}))
28: return pred_obs

We also experimented with replacing the trained classifiers with a general-purpose VLM (gpt-4o). The VLM fails to robustly determine the boolean values of the predicates across different states. Our observation is that the VLM can more reliably detect the states of articulated objects (e.g., the states of drawers and cabinet doors) than more complex spatial relations (e.g., whether the faucet head is aligned with the kettle). Following our original experimental procedure, we combine the VLM classifier with our planner to test the generalization cases for the Boil Water domain. Due to the inaccuracy of the state classification, the overall system achieves a 0% success rate. We expect future VLMs to become more reliable at detecting predicate states, and our method will benefit the paradigm of using VLMs as classifiers in the following ways: 1) our automatic predicate annotation method can generate examples for VLMs to recognize new concepts through visual in-context prompting or supervised fine-tuning; 2) our method provides data to train task-specific classifiers based on 3D representations, which are complementary to general-purpose VLMs in recognizing geometric and spatial concepts; 3) our method provides a way to learn user-specific predicates (e.g., a predicate that determines whether clean dishes are arranged according to a user's preferences) from demonstrations. In our preliminary experiment on the Boil Water domain, using examples generated by our method as in-context visual examples improves the VLM's accuracy by around 5% on 220 withheld testing examples.
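Returning to the point-cloud classifier above, here is a small numpy sketch of its input construction; the array conventions and the centering step are assumptions for exposition, not the paper's exact preprocessing.

import numpy as np

def build_pct_input(object_clouds):
    # object_clouds: list of (N_i, 3) XYZ crops, e.g., obtained by projecting
    # Grounding-DINO 2D boxes into 3D (shapes and ordering are assumptions).
    parts = []
    for obj_id, pts in enumerate(object_clouds):
        # Extra channel indicating the identity of each point, as described above.
        ident = np.full((pts.shape[0], 1), float(obj_id))
        parts.append(np.concatenate([pts, ident], axis=1))   # (N_i, 4)
    cloud = np.concatenate(parts, axis=0)                    # (sum N_i, 4)
    cloud[:, :3] -= cloud[:, :3].mean(axis=0)                # center XYZ (assumed)
    return cloud  # fed to the PCT encoder, which outputs the final logit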
A.7 Policy Implementation

For each behavior, we train control policies π_θ(a) : O → U, implemented as a diffusion policy [1]. We make three changes to the original implementation to facilitate chaining the learned behaviors. First, when training the model to predict the first raw action of each skill, we replace the history observations with observations sampled randomly from a temporal window prior to when the skill is executed, to avoid bias in the starting positions of the robot arm. Second, we perform biased sampling of the training sequences to ensure that the policy is trained on a diverse set of starting positions. Third, at the end of each training sequence, we append a sequence of zero actions so that the learned policy can learn to predict termination. These strategies are implemented in both the simulation and the real world.

In simulation, we construct the point cloud of the scene using the RGB-D image from the frame-mounted camera. We then obtain segmented object point clouds for the relevant objects of each behavior (e.g., table and block for pick-block-table) with ground-truth segmentation masks from the PyBullet simulator. The segmented point clouds of the objects are concatenated to form the input point cloud observation. The model uses the PCT to encode a sequence of point clouds as history observations and uses another time-series transformer encoder to reason over the history and predict the next actions. The time-series transformer is similar in design to the transformer-based diffusion policy [1].

In the real world, we use RGB images from four stationary cameras mounted around the workspace and a wrist-mounted camera as input to an image-based diffusion policy model. The input is processed by five separate ResNet-34 encoder heads. The policy directly predicts the gripper pose in the world frame. We found the wrist-mounted camera to be particularly helpful in the real-world setup.

A.8 Planner Implementation

Planning over geometric constraints. Geometric constraints, specifically the collision-free constraints for each action, are handled "in the now," right before an action is executed, because classifying geometric constraints would require knowing the exact poses of all objects in the environment, and we do not explicitly learn models that predict object locations after executing behaviors. This follows the hierarchical planning strategy of [79]. In particular, the preconditions of an action form an ordered list with two levels: the first level contains the semantic preconditions and the second level contains the geometric-constraint preconditions. During planning, only the first-level preconditions are added to the subgoal list. After we have finished planning for the first-level preconditions, we consider the second-level precondition of the first behavior in the resulting plan, possibly moving other obstacles away. A sketch of this strategy is given below, followed by a concrete example.
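The sketch illustrates the control flow of the two-level strategy in Python; the planner, classifier, and behavior interfaces (plan_fn, classify_fn, geometric_preconditions) are hypothetical names introduced for this example, not our actual implementation.

def execute_with_geometric_checks(goal, state, plan_fn, classify_fn):
    # Level 1: plan over the semantic preconditions only.
    plan = plan_fn(goal, state)
    while plan:
        behavior = plan[0]
        # Level 2: check geometric preconditions "in the now".
        unmet = [p for p in behavior.geometric_preconditions
                 if not classify_fn(p, state)]
        if unmet:
            # Recursively plan to achieve the unmet geometric preconditions
            # (e.g., move blocking obstacles away), then retry this behavior.
            plan = plan_fn(unmet, state) + plan
            continue
        state = behavior.execute(state)
        plan = plan[1:]
    return state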
As an example, consider the skill of opening the cabinet door. Its first-level precondition only concerns the initial state of the cabinet door (i.e., it should be initially closed). It also has a second-level precondition stating that nothing else should be blocking the door. In the beginning, the planner only considers the first-level preconditions. When this behavior is selected to be executed next, the planner checks the second-level precondition. If this non-blocking precondition is not satisfied in the current state, we recursively call the planner to achieve it (which generates actions that move the blocking obstacles away). If the precondition is already satisfied, we proceed to execute the policy associated with the opening-cabinet-door skill. This strategy works in scenarios where there is enough space to move obstacles around and the robot does not need dedicated plans for arranging objects. In scenarios where space is tight and dedicated object-placement planning is required, our framework can be extended to include the prediction of object poses after each skill execution.

Planning over partial observability. Partial observability is handled by assuming the most likely state. In particular, the effect definition of every behavior is deterministic and denotes the most likely resulting state. For example, in the definition of behaviors for finding objects (e.g., find-object-in-left-cabinet), we use a deterministic, "optimistic" effect stating that the object will be visible after executing the action. At performance time, since we replan after executing each behavior, if the object is not visible after the left cabinet has been opened, the planner automatically plans other actions to achieve the visibility subgoal. This strategy works for simple partially observable Markov decision processes (POMDPs). A potential extension is to model a belief state (e.g., a distribution over possible object poses) and perform belief updates on it; planners can then use more advanced algorithms, such as observation-based planning, to generate plans. Such strategies have been studied in the task and motion planning literature [79, 80]. A minimal sketch of the replanning loop appears below.
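This sketch shows the replan-after-each-behavior loop under optimistic effects; the goal, observation, and planner interfaces are assumptions for illustration.

def replan_loop(goal, observe, plan_fn, classify_fn, max_steps=50):
    for _ in range(max_steps):
        # Estimate the current symbolic state from the latest observation.
        obs = observe()
        state = {p: classify_fn(p, obs) for p in goal.relevant_predicates()}
        if goal.satisfied(state):
            return True
        plan = plan_fn(goal, state)   # behavior effects are "optimistic"
        if not plan:
            return False
        # Execute only the first behavior, then re-observe and replan, so an
        # optimistic effect that did not materialize (e.g., the object is not
        # in the opened cabinet) is corrected on the next iteration.
        plan[0].execute()
    return False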
B Simulation Experiment Details

In an effort to standardize our evaluation, we adopt the standard CALVIN benchmark to evaluate all methods, without any modifications to its language annotations, action space, skills, or environments.

B.1 Task Design

To evaluate generalization to new long-horizon manipulation tasks, we designed six tasks that fall into three categories: Abstract Goal, Geometric Constraint, and Partial Observability. Each task has a language instruction, a sampler that generates random initial states, and a goal satisfaction function for evaluation. We provide details for each task below.

Task-1
• Task Category: Abstract Goal
• Language Instruction: turn off all lights.
• Logical Goal: (and (is-turned-off led) (is-turned-off lightbulb))
• Initial State: Both the led and the lightbulb are initially turned on.
• Goal Satisfaction: The logical states of both the lightbulb and the led are off.
• Variation: The initial states of the led and the lightbulb are both on and the goal is to turn them off.

Task-2
• Task Category: Abstract Goal
• Language Instruction: move all blocks to the closed drawer.
• Logical Goal: (and (is-in red-block drawer) (is-in blue-block drawer) (is-in pink-block drawer))
• Initial State: The blocks are visible and not in the drawer. The drawer is closed.
• Goal Satisfaction: The blocks are in the drawer.

Task-3
• Task Category: Abstract Goal
• Language Instruction: move all blocks to the open drawer.
• Logical Goal: (and (is-in red-block drawer) (is-in blue-block drawer) (is-in pink-block drawer))
• Initial State: The blocks are visible and not in the drawer. The drawer is open.
• Goal Satisfaction: The blocks are in the drawer.

Task-4
• Task Category: Partial Observability
• Language Instruction: place a red block on the table.
• Logical Goal: (is-on red-block table)
• Initial State: The red block is in the drawer and the drawer is closed.
• Goal Satisfaction: The red block is placed on the table.
• Variations: Find the blue block or the pink block.

Task-5
• Task Category: Partial Observability
• Language Instruction: place a red block on the table.
• Logical Goal: (is-on red-block table)
• Initial State: The red block is behind the sliding door.
• Goal Satisfaction: The red block is placed on the table.
• Variations: Find the blue block or the pink block.

Task-6
• Task Category: Geometric Constraint
• Language Instruction: open the slider.
• Logical Goal: (is-slider-left slider)
• Initial State: The sliding door is on the right and there is a pink block on the path of the sliding door to the left.
• Goal Satisfaction: The sliding door is within 5 cm of the left end.
• Variations: Move the slider to the right.

B.2 Baseline Implementation

HULC. This baseline is a hierarchical policy learning method that learns from language-annotated play data using hindsight labeling [72]. It is one of the best-performing models on the D→D split of the CALVIN benchmark. We omit a comparison to HULC++ [24], the follow-up work that leverages affordance prediction and motion planning to improve the low-level skills, because our evaluation focuses on the task planning ability of the learned hierarchical model.

SayCan. This baseline combines an LLM-based planner, which takes the language instruction, with learned feasibility functions for skills to perform task planning. SayCan relies on well-trained affordance models to filter out inadmissible actions proposed by the LLM. Because we learn new skills from demonstrations, it is infeasible to learn Q-functions for these skills through RL, so we use two other types of affordance functions.* The first is based on the objects detected in the scene: we provide the detected objects in the prompt. The second is an image-based neural network, trained on our demonstration data, that predicts the likelihood of taking an action in a physical state; its backbone is the same as our image-based predicate classifiers. The prompt for the model is listed in Listing 8. A sketch of how these pieces combine is given below.

*We closely follow the official open-source SayCan implementation, available at https://github.com/google-research/google-research/tree/master/saycan.
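To make the baseline concrete, here is a small sketch of SayCan-style action selection using the two affordance functions described above; the function names and interfaces are our assumptions, not the official implementation.

def select_next_action(candidates, llm_score, detected_objects, feasibility_net, obs):
    best, best_score = None, float("-inf")
    for action in candidates:
        # Affordance 1: skip actions mentioning objects not detected in the scene.
        if not all(obj in detected_objects for obj in action.objects):
            continue
        # Affordance 2: image-based feasibility of the action in the current state.
        affordance = feasibility_net(obs, action)
        # SayCan combines the LLM's preference with the affordance estimate.
        score = llm_score(action) * affordance
        if score > best_score:
            best, best_score = action, score
    return best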
Robot-VILA. This baseline performs task planning with a VLM. We adapt the prompts provided in the original paper to the CALVIN environment. The prompts are divided into the initial prompt, used to generate the task plan given the initial observation (shown in Listing 9), and the follow-up prompt, used for all subsequent steps (shown in Listing 10). We use gpt-4-turbo-2024-04-09 as the VLM. Because the model does not memorize the history, we store the history dialogue, including the text and image inputs, and concatenate it with the current dialogue as the input to the VLM.

T2M-Shooting. This baseline (specifically, the shooting-based algorithm) is similar to SayCan except that: 1) it uses a multi-step feasibility model, in contrast to the single-step feasibility model used by SayCan; and 2) the LLM additionally takes a symbolic description of object states and relationships. The original Text2Motion method assumes access to ground-truth symbolic states. For comparison, in this paper we evaluate Text2Motion in two settings: one with ground-truth states and the other with the state classifiers learned by BLADE. The prompt for the model is listed in Listing 11.

C Qualitative Examples

In this section, we include three qualitative examples from the CALVIN experiments to compare the generalization capabilities of BLADE with the baselines. Specifically, Fig. A4 shows generalization to an abstract goal, Fig. A5 shows generalization to partial observability, and Fig. A6 shows generalization to a geometric constraint. In summary, BLADE generates accurate long-horizon manipulation plans for novel situations, while the latent-planning, LLM, and VLM baselines fail.

D Real World Experiment Details

We validated our approach in two real-world domains. As shown in Fig. A1, we employ a 7-degree-of-freedom (DOF) Franka Emika robotic arm equipped with a parallel-jaw gripper. A total of five Intel RealSense RGB-D cameras provide observations for our policies and state classifiers: four cameras are mounted on the frame and one additional camera is mounted on the robot's wrist.

Figure A1: We use a 7-degree-of-freedom (DOF) Franka Emika robotic arm with a parallel-jaw gripper for our real-world experiment. A total of five Intel RealSense RGB-D cameras provide observations for our policies and state classifiers; four are mounted on the frame and an additional one is mounted on the robot's wrist.

Our teleoperation system uses a 3DConnexion SpaceMouse for control. During the collection of demonstrations, we record the pose of the end effector, the gripper width, and the RGB-D images from the five cameras. We collected 85 demonstrations for each of the two real-world domains, which provide the training data for the diffusion policy models and the state classifiers.

D.1 Task Design

Similar to our simulation experiments, our evaluation protocol includes seven tasks aimed at assessing the model's generalization capabilities on new long-horizon tasks. These tasks are specifically crafted to test the model's proficiency for four types of generalization: Unseen Initial Condition, State Perturbation, Partial Observability, and Geometric Constraint.
Task-1
• Domain: Boil Water
• Task Category: Unseen Initial Condition
• Language Instruction: Fill the kettle with water and place it on the stove.
• Logical Goal: (and (is-filled kettle) (is-placed-on kettle stove) (is-turned-off faucet-knob))
• Initial State: The kettle is placed inside the sink, and the stove is not blocked. The faucet is turned off with the faucet head turned away.

Task-2
• Domain: Boil Water
• Task Category: State Perturbation
• Language Instruction: Fill the kettle with water and place it on the stove.
• Logical Goal: (and (is-filled kettle) (is-placed-on kettle stove) (is-turned-off faucet-knob))
• Initial State: The kettle is placed inside the sink and the stove is blocked.
• Perturbation: The human user moves the kettle from the sink to the table after the robot turns the faucet head towards the sink. The robot needs to replan to move the kettle back to the sink.

Task-3
• Domain: Boil Water
• Task Category: Geometric Constraint
• Language Instruction: Fill the kettle with water and place it on the stove.
• Logical Goal: (and (is-filled kettle) (is-placed-on kettle stove) (is-turned-off faucet-knob))
• Initial State: The kettle is placed inside the sink and the stove is blocked, creating a geometric constraint.

Figure A2: Training and testing states for the Boil Water domain. (Panels show training demonstration and intermediate states for the skills 1. Place In Sink, 2. Move Faucet Head, 3. Turn On Faucet, 4. Turn Off Faucet, 5. Move Faucet Away, 6. Place Pot on Table, and 7. Move Kettle to Stove, alongside unseen test initial conditions for the unseen initial condition, human perturbation, and geometric constraint settings.)

Figure A3: Training and testing states for the Make Tea domain. (Panels show training demonstration and intermediate states for the skills 1. Unblock Doors, 2. Open Right Door, 3. Place Kettle On Stove, 4. Open Drawer, and 5. Place Teabag In Kettle, alongside unseen initial states for the unseen initial condition, human perturbation, geometric constraint, and partial observability settings.)

Task-4
• Domain: Make Tea
• Task Category: Unseen Initial Condition
• Language Instruction: Place the kettle on the stove and place the teabag inside the kettle.
• Logical Goal: (and (is-placed-on kettle stove) (is-placed-inside teabag kettle))
• Initial State: The kettle is placed inside a cabinet. The cabinet doors are open. The drawer is closed.

Task-5
• Domain: Make Tea
• Task Category: State Perturbation
• Language Instruction: Place the kettle on the stove and place the teabag inside the kettle.
• Logical Goal: (and (is-placed-on kettle stove) (is-placed-inside teabag kettle))
• Initial State: The kettle is placed inside the cabinet and the cabinet door is open. The drawer is initially closed.
• Perturbation: Once the robot opens the drawer, a human user closes the drawer.

Task-6
• Domain: Make Tea
• Task Category: Geometric Constraint
• Language Instruction: Place the kettle on the stove and place the teabag inside the kettle.
• Logical Goal: (and (is-placed-on kettle stove) (is-placed-inside teabag kettle))
• Initial State: There is a teapot blocking the cabinet doors. The kettle is inside the cabinet. The drawer is open with the teabag visible.
Task-7
• Domain: Make Tea
• Task Category: Partial Observability
• Language Instruction: Place the kettle on the stove and place the teabag inside the kettle.
• Logical Goal: (and (is-placed-on kettle stove) (is-placed-inside teabag kettle))
• Initial State: The kettle is placed inside a cabinet and is not visible.

D.2 Qualitative Examples of Novel States

In Fig. A2 and Fig. A3, we visualize and confirm that more than half of the initial states and the perturbed states are not part of the demonstrations in our experiments; purely imitation-learning-based methods will therefore struggle to solve them.

E Prompts for Baselines

In this section, we provide the prompts for the baselines used in the simulation experiments. We provide the prompts for SayCan in Listing 8, Robot-VILA in Listing 9 and Listing 10, and T2M-Shooting in Listing 11.

Figure A4: BLADE and baseline (Text2Motion-Shooting, HULC, Robot-VILA) performance on an Abstract Goal generalization task ("Place All Blocks In Drawer") in the CALVIN environment. BLADE completes the task; the baselines' planners loop between the open-drawer and close-drawer skills, fail to pick up the last block and place it in the drawer, or fail to place any block into the drawer.

Figure A5: BLADE and baseline (Text2Motion-Shooting, HULC, Robot-VILA) performance on the Partial Observability generalization task ("Find Block In Slider") in the CALVIN environment. BLADE finds the block, while the baselines fail to generate a relevant action, fail to consider that the slider is closed, or fail to generate a plan.

Figure A6: BLADE and baseline (Text2Motion-Shooting, HULC, Robot-VILA) performance on the Geometric Constraint generalization task ("Open Slider Door") in the CALVIN environment. BLADE moves the blocking block before opening the slider, while the baselines fail to consider that the door is blocked or fail to generate a plan.

Listing 1: Behavior descriptions generated by the LLM for the CALVIN domain.
;; lift_block_table
(:action lift-block-table
 :parameters (?block - item ?table - item)
 :precondition (and (is-block ?block) (is-table ?table) (is-on ?block ?table) (not (is-lifted ?block)))
 :effect (and (lifted ?block) (not (is-on ?block ?table)))
 :body (then
   (grasp ?block ?table)
   (move ?block)
 )
)

;; lift_block_slider
(:action lift_block_slider
 :parameters (?block - item ?slider - item)
 :precondition (and (is-block ?block) (is-slider ?slider) (is-in ?block ?slider))
 :effect (and (lifted ?block) (not (is-in ?block ?slider)))
 :body (then
   (grasp ?block ?slider)
   (move ?block)
 )
)

;; lift_block_drawer
(:action lift-block-drawer
 :parameters (?block - item ?drawer - item)
 :precondition (and (is-block ?block) (is-drawer ?drawer) (is-in ?block ?drawer) (is-open ?drawer))
 :effect (and (lifted ?block) (not (is-in ?block ?drawer)))
 :body (then
   (grasp ?block ?drawer)
   (move ?block)
 )
)

;; place_in_slider
(:action place-in-slider
 :parameters (?block - item ?slider - item)
 :precondition (and (is-block ?block) (is-slider ?slider) (is-lifted ?block))
 :effect (and (is-in ?block ?slider) (not (is-lifted ?block)))
 :body (then
   (place ?block ?slider)
 )
)

;; place_in_drawer
(:action place-in-drawer
 :parameters (?block - item ?drawer - item)
 :precondition (and (is-block ?block) (is-drawer ?drawer) (is-lifted ?block) (is-open ?drawer))
 :effect (and (is-in ?block ?drawer) (not (is-lifted ?block)))
 :body (then
   (place ?block ?drawer)
 )
)

;; place_on_table
(:action place-on-table
 :parameters (?block - item ?table - item)
 :precondition (and (is-block ?block) (is-table ?table) (is-lifted ?block))
 :effect (and (is-on ?block ?table) (not (is-lifted ?block)))
 :body (then
   (place ?block ?table)
 )
)

;; stack_block
(:action stack_block
 :parameters (?block - item ?target - item)
 :precondition (and (is-block ?block) (is-block ?target) (is-lifted ?block))
 :effect (and (stacked ?block ?target) (not (is-lifted ?block)))
 :body (then
   (place ?block ?target)
 )
)

;; unstack_block
(:action unstack_block
 :parameters (?block1 - item ?block2 - item)
 :precondition (and (is-block ?block1) (is-block ?block2) (stacked ?block1 ?block2))
 :effect (and (unstacked ?block1 ?block2) (is-lifted ?block1) (not (stacked ?block1 ?block2)))
 :body (then
   (grasp ?block1 ?block2)
   (move ?block1)
 )
)

;; rotate_block_right
(:action rotate-block-right
 :parameters (?block - item ?table - item)
 :precondition (and (is-block ?block) (is-table ?table) (is-on ?block ?table))
 :effect (and (rotated-right ?block) (not (rotated-left ?block)))
 :body (then
   (grasp ?block ?table)
   (move ?block)
   (place ?block ?table)
 )
)

;; rotate_block_left
(:action rotate_block_left
 :parameters (?block - item ?table - item)
 :precondition (and (is-block ?block) (is-table ?table) (is-on ?block ?table))
 :effect (and (rotated-left ?block))
 :body (then
   (grasp ?block)
   (move ?block)
   (place ?block)
 )
)

;; push_block_right
(:action push_block_right
 :parameters (?block - item ?table - item)
 :precondition (and (is-block ?block) (is-table ?table) (is-on ?block ?table))
 :effect (and (pushed-right ?block) (not (pushed-left ?block)))
 :body (then
   (close)
   (push ?block)
   (open)
 )
)

;; push_block_left
(:action push-block-left
 :parameters (?block - item)
 :precondition (and (is-block ?block))
 :effect (and (pushed-left ?block))
 :body (then
   (close)
   (push ?block)
   (open)
 )
)

;; move_slider_left
(:action move_slider_left
 :parameters (?slider - item)
 :precondition (and (is-slider ?slider) (is-slider-right ?slider))
 :effect (and (is-slider-left ?slider) (not (is-slider-right ?slider)))
 :body (then
   (grasp ?slider)
   (move ?slider)
   (place ?slider)
 )
)

;; move_slider_right
(:action move-slider-right
 :parameters (?slider - item)
 :precondition (and (is-slider ?slider) (not (is-slider-right ?slider)))
 :effect (and (is-slider-right ?slider))
 :body (then
   (grasp ?slider)
   (move ?slider)
   (place ?slider)
 )
)

;; open_drawer
(:action open-drawer
 :parameters (?drawer - item)
 :precondition (and (is-drawer ?drawer) (is-close ?drawer))
 :effect (and (is-open ?drawer) (not (is-close ?drawer)))
 :body (then
   (close)
   (push ?drawer)
   (open)
 )
)

;; close_drawer
(:action close-drawer
 :parameters (?drawer - item)
 :precondition (and (is-drawer ?drawer) (is-open ?drawer))
 :effect (and (is-close ?drawer) (not (is-open ?drawer)))
 :body (then
   (close)
   (push ?drawer)
   (open)
 )
)

;; turn_on_lightbulb
(:action turn-on-lightbulb
 :parameters (?lightbulb - item)
 :precondition (and (is-lightbulb ?lightbulb) (is-turned-off ?lightbulb))
 :effect (and (is-turned-on ?lightbulb) (not (is-turned-off ?lightbulb)))
 :body (then
   (close)
   (push ?lightbulb)
   (open)
 )
)

;; turn_off_lightbulb
(:action turn-off-lightbulb
 :parameters (?lightbulb - item)
 :precondition (and (is-lightbulb ?lightbulb) (is-turned-on ?lightbulb))
 :effect (and (is-turned-off ?lightbulb) (not (is-turned-on ?lightbulb)))
 :body (then
   (close)
   (push ?lightbulb)
   (open)
 )
)

;; turn_on_led
(:action turn-on-led
 :parameters (?led - item)
 :precondition (is-led ?led)
 :effect (and (is-turned-on ?led) (not (is-turned-off ?led)))
 :body (then
   (close)
   (push ?led)
   (open)
 )
)

;; turn_off_led
(:action turn-off-led
 :parameters (?led - item)
 :precondition (and (is-led ?led) (is-turned-on ?led))
 :effect (and (is-turned-off ?led) (not (is-turned-on ?led)))
 :body (then
   (close)
   (push ?led)
   (open)
 )
)

;; push_into_drawer
(:action push-into-drawer
 :parameters (?block - item ?drawer - item)
 :precondition (and (is-block ?block) (is-drawer ?drawer) (is-open ?drawer))
 :effect (and (is-in ?block ?drawer))
 :body (then
   (close)
   (push ?block)
   (open)
 )
)

Listing 2: Example Prompt for CALVIN–Contact Primitives.
**Primitive Actions: **
There are seven primitive actions that the robot can perform. They are:
- (grasp ?x ?y): ?x and ?y are two object variables. ?x is the object that the robot will be grasping, ?y is the object that ?x is currently on or in.
- (place ?x ?y): ?x and ?y are two object variables. ?x is the object that the robot is currently holding, ?y is the object that ?x will be placed on or in.
- (move ?x): ?x is the object that the robot is currently holding and will be moved by the robot.
- (push ?x): ?x is the object that the robot will be pushing.
- (move-to ?x): the robot arm will move without holding any object or pushing any object.
- (open): the robot gripper will open fully.
- (close): the robot gripper will close without grasping any object.
**Combined Primitives: **
The primitive actions can be combined into a high-level routine. For example, (then (grasp ?x ?y) (move ?x) (place ?x ?y)) means the robot will pick up ?x from ?y, move ?x, and place ?x to ?z. The possible combination of primitives are:
A. (then (grasp ?x ?y) (move ?x))
B. (then (place ?x ?y))
C. (then (grasp ?x ?y) (move ?x) (place ?x ?z))
D. (then (close) (push ?x) (open))

Listing 3: Example Prompt for CALVIN–Environment.
**Predicates for Preconditions and Effects: **
The list of all possible predicates for defining the preconditions and effects of the high-level routine are listed below:
For specifying the type of the object:
- (is-table ?x - item): ?x is a table.
- (is-slider ?x - item): ?x is a slider.
- (is-drawer ?x - item): ?x is a drawer.
- (is-lightbulb ?x - item): ?x is a lightbulb.
- (is-led ?x - item): ?x is a led.
- (is-block ?x - item): ?x is a block.
For specifying the attributes of the object:
- (is-red ?x - item): ?x is red. This predicate applies to a block.
- (is-blue ?x - item): ?x is blue. This predicate applies to a block.
- (is-pink ?x - item): ?x is pink. This predicate applies to a block.
For specifying the state of the object:
- (rotated-left ?x - item): ?x is rotated left. This predicate applies to a block.
- (rotated-right ?x - item): ?x is rotated right. This predicate applies to a block.
- (pushed-left ?x - item): ?x is pushed left. This predicate applies to a block.
- (pushed-right ?x - item): ?x is pushed right. This predicate applies to a block.
- (lifted ?x - item): ?x is lifted in the air. This predicate applies to a block.
- (is-open ?x - item): ?x is open. This predicate applies to a drawer.
- (is-close ?x - item): ?x is close. This predicate applies to a drawer.
- (is-turned-on ?x - item): ?x is turned on. This predicate applies to a lightbulb or a led.
- (is-turned-off ?x - item): ?x is turned off. This predicate applies to a lightbulb or a led.
- (is-slider-left ?x - item): the sliding door of the slider ?x is on the left.
- (is-slider-right ?x - item): the sliding door of the slider ?x is on the right.
For specifying the relationship between objects:
- (is-on ?x - item ?y - item): ?x is on top of ?y. This predicate applies when ?x is a block and ?y is a table.
- (is-in ?x - item ?y - item): ?x is inside of ?y. This predicate applies when ?x is a block and ?y is a drawer or a slider.
- (stacked ?x - item ?y - item): ?x is stacked on top of ?y. This predicate applies when ?x and ?y are blocks.
- (unstacked ?x - item ?y - item): ?x is unstacked from ?y. This predicate applies when ?x and ?y are blocks.
**Task Environment: **
In the environment where the demonstrations are being performed, there are the following objects:
- A table. Objects can be placed on the table.
- A drawer that can be opened. Objects can be placed into the drawer when it is open.
- A slider which is a cabinet with a sliding door. The sliding door can be moved to the left or to the right. Objects can be placed into the slider no matter the position of the sliding door.
- A lightbulb that be can turned on/off with a button.
- A led that can be turned on/off with a button.
- Three blocks that can be rotated, pushed, lifted, and placed.

Listing 4: Example Prompt for CALVIN–In-Context Example.
**Demonstration Parsing: **
Now, you will help to parse several human demonstrations of the robot performing a task and generate a lifted description of how to accomplish this task. For each demonstration, a sequence of performed primitives will be given, with actual object names. Three demonstrations for the task of "place_in_slider" is:
<code name="primitive_sequence">
primitives = [
 {"name": "grasp", "arguments": ["red_block", "table"]}
 {"name": "move", "arguments": ["red_block"]}
 {"name": "place", "arguments": ["red_block", "slider"]}
 {"name": "move-to", "arguments": [""]}
]
</code>
<code name="primitive_sequence">
primitives = [
 {"name": "grasp", "arguments": ["blue_block", "table"]}
 {"name": "move", "arguments": ["blue_block"]}
 {"name": "place", "arguments": ["blue_block", "slider"]}
 {"name": "move-to", "arguments": [""]}
]
</code>
<code name="primitive_sequence">
primitives = [
 {"name": "grasp", "arguments": ["pink_block", "table"]}
 {"name": "move", "arguments": ["pink_block"]}
 {"name": "place", "arguments": ["pink_block", "slider"]}
 {"name": "move-to", "arguments": [""]}
]
</code>
**Previous Tasks: **
A list of tasks that can be performed before the current task will also be provided as context. For the task of "place_in_slider", the possible previous tasks are: lift_block_table, lift_block_drawer, move_slider_right
**Example Output: **
You should generate a lifted description, treating all objects as variables. For example, the lifted description for "place_in_slider" is:
<code name="mechanism">
(:mechanism place-in-slider
 :parameters (?block - item ?slider - item)
 :precondition (and (is-block ?block) (is-slider ?slider) (is-lifted ?block))
 :effect (and (is-in ?block ?slider) (not (is-lifted ?block)))
 :body (then
   (place ?block ?slider)
 )
)
</code>

Listing 5: Example Prompt for CALVIN–Instructions.
**Think Step-by-Step: **
To generate the lifted description, you should think through the task in natural language in the following steps. Be EXTREMELY CAREFUL to think through step 3a, 3b, and 4a, 4b.
1. Parse the goal. For example "place_in_slider", the goal is to place a block into the slider.
2. Think about the possible effects achieved by previous tasks and the previous actions that have been performed. For "lift_block_table", a block is lifted from the table and the effect is that the block is lifted. For "lift_block_drawer", a block is lifted from the drawer and the effect is that the block is lifted. For "move_slider_right", the sliding door of the slider is moved to the right and the effect is that the sliding door is on the right.
3. Parse the demonstrations and choose the combination of primitives for the current task. The demonstrations are noisy so that the demonstrated primitive sequences may include extra primitive actions that are not necessary for the current task at the beginning or end. The extra primitive actions can be for the previous tasks. Combining with the understanding of the task and previous task to infer the correct combination of primitives for the current task.
3a. In this case, the previous tasks are relevant to the current task. We should think about how to sequence the previous tasks with the current task. The primitive combination for the current task should not include primitive actions that have been performed. The above example for "place_in_slider" is this case. We can infer that "grasp" in the demonstrated sequences is likely to be for the previous tasks and should not be included in the primitive combination for the current task. We therefore choose B. (then (place ?x ?y)). The semantics is that the robot place the lifted block in the slider.
3b. In this case, the previous tasks are not relevant to the current task.
4. Think about the preconditions. Also specify the types of all relevant objects in the preconditions.
4a. In this case, previous tasks are relevant to the current task. We should think about the effects of the previous tasks. For "place_in_slider", the effects of previous tasks include the block is already lifted. So we should specify that the block is lifted in the preconditions for the current task.
4b. In this case, previous tasks are not relevant to the current task.
5. Think about the effects. For "place_in_slider", the effects are that the block is in the slider and the block is not lifted.
6. Write down the mechanism in the format of the example.
**Additional Instructions: **
1. Make sure the generated lifted description starts with <code name="mechanism"> and ends with </code>.
2. Please do not invent any new predicates for the precondition and effect. You can only use the predicates listed above.
3. Consider the physical constraints of the objects. For example, a robot arm can not go through a closed door.
4. For each parameter in :parameters, you should use one of the predicates for specifying the type of the object to indicate its type (e.g., is-drawer, is-block, and etc).

Listing 6: Example Prompt for CALVIN–Task Input.
**Current Task: ** place_in_drawer
**Example Sequences: **
<code name="primitive_sequence">
primitives = [
 {"name": "grasp", "arguments": ["blue_block", "table"]}
 {"name": "move", "arguments": ["blue_block"]}
 {"name": "place", "arguments": ["blue_block", "drawer"]}
 {"name": "move-to", "arguments": [""]}
]
</code>
<code name="primitive_sequence">
primitives = [
 {"name": "grasp", "arguments": ["red_block", "table"]}
 {"name": "move", "arguments": ["red_block"]}
 {"name": "place", "arguments": ["red_block", "drawer"]}
 {"name": "move-to", "arguments": [""]}
]
</code>
<code name="primitive_sequence">
primitives = [
 {"name": "grasp", "arguments": ["pink_block", "table"]}
 {"name": "move", "arguments": ["pink_block"]}
 {"name": "place", "arguments": ["pink_block", "drawer"]}
 {"name": "move-to", "arguments": [""]}
]
</code>
**Previous Tasks: ** push_into_drawer, lift_block_table, lift_block_slider

Listing 7: Example Prompt for Predicate Generation.
You are a helpful agent in helping a robot interpret human demonstrations and discover a generalized high-level routine to accomplish a given task.
**Primitive Actions: **
There are seven primitive actions that the robot can perform. They are:
- (grasp ?x ?y): ?x and ?y are two object variables. ?x is the object that the robot will be grasping, ?y is the object that ?x is currently on or in.
- (place ?x ?y): ?x and ?y are two object variables. ?x is the object that the robot is currently holding, ?y is the object that ?x will be placed on or in.
- (move ?x): ?x is the object that the robot is currently holding and will be moved by the robot.
- (push ?x): ?x is the object that the robot will be pushing.
- (move-to ?x): the robot arm will move without holding any object or pushing any object.
- (open): the robot gripper will open fully.
- (close): the robot gripper will close without grasping any object.
**Task Environment: **
In the environment where the demonstrations are being performed, there are the following objects:
- A table. Objects can be placed on the table.
- A drawer that can be opened. Objects can be placed into the drawer when it is open.
- A slider which is a cabinet with a sliding door. The sliding door can be moved to the left or to the right. Objects can be placed into the slider no matter the position of the sliding door.
- A lightbulb that be can turned on/off with a button.
- A led that can be turned on/off with a button.
- Three blocks that can be rotated, pushed, lifted, and placed.
**Task **
You will help the robot to write PDDL definitions for the following actions:
1. lift_red_block_table
2. lift_red_block_slider
3. lift_red_block_drawer
4. lift_blue_block_table
5. lift_blue_block_slider
6. lift_blue_block_drawer
7. lift_pink_block_table
8. lift_pink_block_slider
9. lift_pink_block_drawer
10. stack_block
11. unstack_block
12. place_in_slider
13. place_in_drawer
14. place_on_table
15. rotate_red_block_right
16. rotate_red_block_left
17. rotate_blue_block_right
18. rotate_blue_block_left
19. rotate_pink_block_right
20. rotate_pink_block_left
21. push_red_block_right
22. push_red_block_left
23. push_blue_block_right
24. push_blue_block_left
25. push_pink_block_right
26. push_pink_block_left
27. move_slider_left
28. move_slider_right
29. open_drawer
30. close_drawer
31. turn_on_lightbulb
32. turn_off_lightbulb
33. turn_on_led
34. turn_off_led
Before writing the operators, define the predicates that should be used to write the preconditions and effects of the operators. Group the predicates into unary predicates that define the states of objects and binary relations that specify relations between two objects. For each predicate, list actions that are relevant.

Listing 8: Prompt for SayCan.
**Objective: **
You are a helpful agent in helping a robot plan a sequence of actions to accomplish a given task. I will first provide context and then provide an example of how to perform the task.
**Task Environment: **
In the robot's environment, there are the following objects:
- A table. Objects can be placed on the table.
- A drawer that can be opened. Objects can be placed into the drawer when it is open.
- A slider which is a cabinet with a sliding door. The sliding door can be moved to the left or to the right. Objects can be placed into the slider no matter the position of the sliding door.
- A lightbulb that be can turned on/off with a button.
- A led that can be turned on/off with a button.
- Three blocks that can be rotated, pushed, lifted, and placed.
**Actions: **
There are the following actions that the robot can perform. They are:
- lift_red_block_table: lift the red block from the table.
- lift_red_block_slider: lift the red block from the slider.
- lift_red_block_drawer: lift the red block from the drawer.
- lift_blue_block_table: lift the blue block from the table.
- lift_blue_block_slider: lift the blue block from the slider.
- lift_blue_block_drawer: lift the blue block from the drawer.
- lift_pink_block_table: lift the pink block from the table.
- lift_pink_block_slider: lift the pink block from the slider.
- lift_pink_block_drawer: lift the pink block from the drawer.
- stack_block: stack the blocks.
- place_in_slider: place the block in the slider.
- place_in_drawer: place the block in the drawer.
- place_on_table: place the block on the table.
- rotate_red_block_right: rotate the red block to the right.
- rotate_red_block_left: rotate the red block to the left.
- rotate_blue_block_right: rotate the blue block to the right.
- rotate_blue_block_left: rotate the blue block to the left.
- rotate_pink_block_right: rotate the pink block to the right.
- rotate_pink_block_left: rotate the pink block to the left.
- push_red_block_right: push the red block to the right.
- push_red_block_left: push the red block to the left.
- push_blue_block_right: push the blue block to the right.
- push_blue_block_left: push the blue block to the left.
- push_pink_block_right: push the pink block to the right.
- push_pink_block_left: push the pink block to the left.
- move_slider_left: move the slider to the left.
- move_slider_right: move the slider to the right.
- open_drawer: open the drawer.
- close_drawer: close the drawer.
- turn_on_lightbulb: turn on the lightbulb.
- turn_off_lightbulb: turn off the lightbulb.
- turn_on_led: turn on the led.
- turn_off_led: turn off the led.
- do_nothing: do nothing.
**Example Task: **
Now, you will help to parse the goal predicate and generate a list of candidate actions the robot can potentially take to accomplish the task.
You should rank the actions in terms of how likely they are to be performed next.
Goal predicate: (is-turned-off led)
Task output:
```python
['turn_off_led', 'do_nothing']
```
In this example above, if the led is on, the robot should turn it off. If the led is already off, the robot should do nothing. When the robot successfully completes an action, the robot will ask for the next action to take. Considering the executed task: turn_off_led
Your output should be:
```python
['do_nothing']
```
Since the led is already off, the robot should do nothing.
**Additional Instructions: **
1. Make sure the generated plan is a list of actions. Place the list between ```python and ends with ```.
2. Think Step-by-Step.
Goal predicate: {Based on the given task}
Current symbolic state: {Based on the simulator state}
Executed actions: {Based on the previously executed actions}

Listing 9: Initial Prompt for Robot-VILA.
You are highly skilled in robotic task planning, breaking down intricate and long-term tasks into distinct primitive actions. If the object is in sight, you need to directly manipulate it. If the object is not in sight, you need to use primitive skills to find the object first. If the target object is blocked by other objects, you need to remove all the blocking objects before picking up the target object. At the same time, you need to ignore distracters that are not related to the task. And remember your last step plan needs to be "done". Consider the following skills a robotic arm can perform.
- lift_red_block_table: lift the red block from the table.
- lift_red_block_slider: lift the red block from the slider.
- lift_red_block_drawer: lift the red block from the drawer.
- lift_blue_block_table: lift the blue block from the table.
- lift_blue_block_slider: lift the blue block from the slider.
- lift_blue_block_drawer: lift the blue block from the drawer.
- lift_pink_block_table: lift the pink block from the table.
- lift_pink_block_slider: lift the pink block from the slider.
- lift_pink_block_drawer: lift the pink block from the drawer.
- stack_block: stack the blocks.
- place_in_slider: place the block in the slider.
- place_in_drawer: place the block in the drawer.
- place_on_table: place the block on the table.
- rotate_red_block_right: rotate the red block to the right.
- rotate_red_block_left: rotate the red block to the left.
- rotate_blue_block_right: rotate the blue block to the right.
- rotate_blue_block_left: rotate the blue block to the left.
- rotate_pink_block_right: rotate the pink block to the right.
- rotate_pink_block_left: rotate the pink block to the left.
- push_red_block_right: push the red block to the right.
- push_red_block_left: push the red block to the left.
- push_blue_block_right: push the blue block to the right.
- push_blue_block_left: push the blue block to the left.
- push_pink_block_right: push the pink block to the right.
- push_pink_block_left: push the pink block to the left.
- move_slider_left: move the slider to the left.
- move_slider_right: move the slider to the right.
- open_drawer: open the drawer.
- close_drawer: close the drawer.
- turn_on_lightbulb: turn on the lightbulb.
- turn_off_lightbulb: turn off the lightbulb.
- turn_on_led: turn on the led.
- turn_off_led: turn off the led.
- done: the goal has reached.
You are only allowed to use the provided skills. You can first itemize the task-related objects to help you plan. For the actions you choose, list them as a list in the following format.
<code>
['turn_off_led', 'open_drawer', 'done']
</code>

Listing 10: Follow-Up Prompt for Robot-VILA.
This image displays a scenario after you have executed some steps from the plan generated earlier. When interacting with people, sometimes the robotic arm needs to wait for the person's action. If you do not find the target object in the current image, you need to continue searching elsewhere. Continue to generate the plan given the updated environment state.

Listing 11: Prompt for Text2Motion.
**Objective: **
You are a helpful agent in helping a robot plan a sequence of actions to accomplish a given task. I will first provide context and then provide an example of how to perform the task.
**Task Environment: **
In the robot's environment, there are the following objects:
- A table. Objects can be placed on the table.
- A drawer that can be opened. Objects can be placed into the drawer when it is open.
- A slider which is a cabinet with a sliding door. The sliding door can be moved to the left or to the right. Objects can be placed into the slider no matter the position of the sliding door.
- A lightbulb that be can turned on/off with a button.
- A led that can be turned on/off with a button.
- Three blocks that can be rotated, pushed, lifted, and placed.
**Predicates for symbolic state: **
The list of all possible predicates for defining the symbolic state are listed below:
- (rotated-left ?x - item): ?x is rotated left. This predicate applies to a block.
- (rotated-right ?x - item): ?x is rotated right. This predicate applies to a block.
- (pushed-left ?x - item): ?x is pushed left. This predicate applies to a block.
- (pushed-right ?x - item): ?x is pushed right. This predicate applies to a block.
- (lifted ?x - item): ?x is lifted in the air. This predicate applies to a block.
- (is-open ?x - item): ?x is open. This predicate applies to a drawer.
- (is-close ?x - item): ?x is close. This predicate applies to a drawer.
- (is-turned-on ?x - item): ?x is turned on. This predicate applies to a lightbulb or a led.
- (is-turned-off ?x - item): ?x is turned off. This predicate applies to a lightbulb or a led.
- (is-slider-left ?x - item): the sliding door of the slider ?x is on the left.
- (is-slider-right ?x - item): the sliding door of the slider ?x is on the right.
- (is-on ?x - item ?y - item): ?x is on top of ?y. This predicate applies when ?x is a block and ?y is a table.
- (is-in ?x - item ?y - item): ?x