arXiv:2505.21966v1 [cs.HC] 28 May 2025
MapStory: LLM-Powered Text-Driven Map Animation Prototyping with Human-in-the-Loop Editing
Aditya Gunturu, University of Calgary, Canada, aditya.gunturu@ucalgary.ca; Ben Pearman, University of Calgary, Canada, ben.pearman@ucalgary.ca; Keiichi Ihara, University of Tsukuba, Japan, kihara@iplab.cs...
https://arxiv.org/abs/2505.21966v1
producing a single video, as we observed in our formative study. In this paper, we ask: what if anyone could immediately create these map-based animations simply by writing a script, and, going beyond that, edit the result at every stage? For example, imagine typing a sentence like "Very few people live in the mainl...
conducted three studies: 1) a usability study (N=12) to measure the tool's expressiveness, creative exploration, and accessibility for novice users; 2) an expert study with professional map animators (N=5) to gather feedback on the tool's workflows and potential for real-world applicability; and 3) a technical evalu...
[24]. For example, Draco [25] let illustrators bring static drawings to life by sketching motion paths and applying kinetic textures to create rich path animations and particle effects. These sketch-based interfaces greatly lowered the barrier through more natural interactions. Augmented Physics [18] introduced a ...
leverages structured scripts and the actor's facial poses to translate talking animations to a virtual character in real time. DrawTalking [47], on the other hand, enables users to add simple motion to sketched objects via speech, and RealityTalk [34] displays relevant graphics based on the user's speech for creati...
workflows, the tools they use, and the challenges or needs they encounter when producing map animations. Although we acknowledge the relatively small number of experts in our study, we intentionally focused on specialized map animators rather than general video creators, which made recruitment challenging. To complemen...
plan [...] It's a huge time saver for planning my camera moves before I open After Effects." Figure 3: Workflow of a map animator. The animator first breaks down their script into manageable chunks and describes the animation they plan to make for this specific item. The animation guides also include research inform...
following design decisions. D1: Script-Driven Authoring. Our system should directly translate scripts to support a script-driven workflow. Moreover, changes to textual instructions should be seamlessly reflected in the animated map scenes, enabling rapid iteration. D2: Research Integration to Extract Map Data. Our system ...
are often used to represent journeys like troop movements and migration paths. They can also show borders, national boundaries, or coordinate references to provide contextual grounding or to show expansion over time. • Point highlights are small markers, symbols, or icons placed at particular coordinates. These highlight...
support accurate and fact-grounded animations, users can pose follow-up queries and request modifications to the system's initial results. MapStory assists animators in resolving vague or ambiguous location references, retrieving precise geospatial data like points, regions, or paths based on the LLM-powered web search, th...
change the map style to match the narrative tone. Finally, users can interactively control the timing of each animation module by specifying when it appears and how long it plays, through the timeline sequencer. This allows for fine-grained control over pacing and sequencing in real time. By iteratively refining timi...
time interval. The polygon data is fetched from the Nominatim API or generated by the LLM. • Line: Uses an array of latitude-longitude coordinates provided by the LLM to render a polyline on the map. • Point: Plots a map marker at the specified coordinates. Users can freely attach text or images to highlights of any ...
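The Polygon, Line, and Point highlight modules described above can be pictured as small structured specs. The sketch below is a minimal Python illustration; every field name is an assumption for exposition, not MapStory's published schema:

```python
# Hypothetical module specs; field names are assumptions, not MapStory's actual schema.
line_module = {
    "type": "Line",
    "coordinates": [[51.5074, -0.1278], [48.8566, 2.3522]],  # lat/lng pairs from the LLM
    "start": 2.0,       # seconds into the timeline
    "duration": 3.5,    # how long the polyline animates
}

point_module = {
    "type": "Point",
    "coordinates": [35.6762, 139.6503],  # a single marker position
    "label": "Tokyo",                    # optional attached text
}

def validate_module(module):
    """Check the minimal invariants each highlight type needs before rendering."""
    if module["type"] == "Line":
        return len(module["coordinates"]) >= 2   # a polyline needs at least two points
    if module["type"] == "Point":
        return len(module["coordinates"]) == 2   # a marker is one lat/lng pair
    return module["type"] == "Polygon"
```

A validation pass like this is where fetched Nominatim polygons or LLM-generated coordinates would be sanity-checked before being attached to the timeline.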
responsible for converting user input into a structured JSON-based scene breakdown. The research agent then examines the relevant modules from that breakdown, performs chain-of-thought reasoning [ 64] to validate their parameters, and finally invokes the appropriate function call with the confirmed arguments. Below, we...
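As a rough illustration of this orchestration step, the sketch below parses a structured scene breakdown and hands each module to a downstream validation stage. The JSON shape is an assumption for illustration; the paper does not publish the orchestrator's exact output format:

```python
import json

# Hypothetical scene-breakdown JSON, mimicking the orchestrator's structured output.
raw = """
[
  {"scene": 1, "script": "Very few people live in the mainland interior.",
   "modules": [{"type": "Polygon", "query": "Australian outback"}]},
  {"scene": 2, "script": "Most people cluster along the coasts.",
   "modules": [{"type": "Point", "query": "Sydney"}]}
]
"""

def breakdown_modules(breakdown_json):
    """Parse the breakdown and yield (scene_number, module) pairs for the research agent."""
    for scene in json.loads(breakdown_json):
        for module in scene["modules"]:
            # in the real system, the research agent would validate each module's
            # parameters here before invoking a function call
            yield scene["scene"], module

pairs = list(breakdown_modules(raw))
```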
research agent can only access one tool: a function call adhering to the OpenAI Function Calling Protocol [43]. This modular structure not only streamlines the system's architecture but also minimizes context length, given the inverse relationship between context size and instruction-following accuracy [36]. We break d...
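A single tool exposed to an agent in the OpenAI function-calling format might look like the sketch below. Only the outer `{"type": "function", ...}` shape follows the documented protocol; the tool name `draw_line` and its parameters are hypothetical:

```python
# Hypothetical tool definition following the OpenAI function-calling schema.
draw_line_tool = {
    "type": "function",
    "function": {
        "name": "draw_line",  # assumed name; MapStory's real function names are not published
        "description": "Render an animated polyline between latitude/longitude coordinates.",
        "parameters": {
            "type": "object",
            "properties": {
                "coordinates": {
                    "type": "array",
                    "description": "List of [lat, lng] pairs.",
                    "items": {"type": "array", "items": {"type": "number"}},
                }
            },
            "required": ["coordinates"],
        },
    },
}
```

Giving each research agent exactly one such tool keeps its prompt short, which is the context-length benefit the text refers to.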
response (which relies on the function calling API) is parsed into a JSON output compatible with the animation functions available in the system. This is then rendered as an editable timeline. Human-in-the-Loop. Once the agent generates an initial scene breakdown, the user can edit, rearrange, or delete any part and ...
system's capabilities. Figure 12 shows the average processing time (in seconds) for each model used in our system. As expected, GPT-4.5 took the longest to produce an output, followed by o1, 4o, and 3.5 respectively. Interestingly, GPT-4.5 took less time to produce an output for the scene breakdown; this is presumably ...
B was found to be 75% accurate (57/76). Failures were almost always a result of the route or the highlight modules having imprecise coordinates, albeit typically within 1 km of the correct result. For example, when a user asks to zoom into "a random seven eleven in japan", the system returned a highlight aroun...
exploration and refinement. Emerging Use Cases. Participants envisioned novel and often informal use scenarios that went beyond our initial expectations, particularly highlighting casual and presentation-oriented applications outside educational or learning video creation. Notably, 8 out of 12 participants proposed u...
which "is effectively the basics of a story" and keeps narrative flow intact, and he can directly control and change the narrative: "The auto-generated 'scene breakdown' feels like way-points in your story that you can tweak in plain language." E5 finds the scene breakdown to be a medium of updating story beats first...
like how you can ask for information and not just animation". E2 contrasts this with his current After Effects practice where, "once you make it, you can't change it... But [with MapStory] I can change it, but it will be time-consuming". In MapStory he simply deletes a label and replaces a location highlight and the ...
of map-centric animation blocks; however, our modular system can be extended with new, custom blocks tailored to animators' niche tastes. Participants envision creating their own stylistic modules, such as filters, transitions, or bespoke map-animation blocks, while some imagine importing a hand-designed basemap an...
evaluations detailed the workflow of map animation creation with a design space of map-animation blocks. MapStory introduces modular animation blocks, an integrated geospatial research agent, and tight coupling between script and animation through a step-by-step scene breakdown. These features were implemented through...
the 2022 CHI Conference on Human Factors in Computing Systems. 1–19. [15] Richard C Davis, Brien Colwell, and James A Landay. 2008. K-sketch: a 'kinetic' sketch pad for novice animators. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 413–422. [16] Tong Gao, Jessica R Hullman, Eytan Adar...
A Stewart Fotheringham, Elizabeth A Mack, Ziqi Li, Mehak Sachdeva, Sarah Bardin, and Ross Maciejewski. 2023. GeoExplainer: A visual analytics framework for spatial modeling contextualization and report generation. IEEE Transactions on Visualization and Computer Graphics 30, 1 (2023), 1391–1401. [32] Wanwan Li, Changy...
virtual globes. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–14. [50] Kadek Ananta Satriadi, Jim Smiley, Barrett Ens, Maxime Cordeil, Tobias Czauderna, Benjamin Lee, Ying Yang, Tim Dwyer, and Bernhard Jenny. 2022. Tangible globes for data visualisation in augmented reality. In P...
Bridging graphics and linguistics. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology. 722–734. [67] Haijun Xia, Tony Wang, Aditya Gunturu, Peiling Jiang, William Duan, and Xiaoshuo Yao. 2023. CrossTalk: Intelligent Substrates for Language-Oriented Interaction in Video-Based Comm...
arXiv:2505.21969v1 [cs.RO] 28 May 2025
DORAEMON: Decentralized Ontology-aware Reliable Agent with Enhanced Memory Oriented Navigation
Tianjun Gu1, Linfeng Li1, Xuhong Wang3, Chenghua Gong1, Jingyu Gong1, Zhizhong Zhang1, Yuan Xie1,3, Lizhuang Ma1, Xin Tan1,2
1East China Normal University, 2Shanghai AI Lab, 3Shanghai Innovation Institut...
https://arxiv.org/abs/2505.21969v1
due to the discrete nature of input image descriptions at each time step, this spatiotemporal discontinuity often makes it difficult for VLMs to understand the relationships between targets and obstacles in complex environments. On the other hand, while many existing navigation systems incorporate some form of memory f...
understanding, increasingly leveraging foundation models like Vision- Language Models (VLMs) and Large Language Models (LLMs). LLMs provide commonsense reasoning via object-room correlation [ 39,41,51], semantic mapping [ 43], and chain-of-thought planning [ 5,6,33,41], while VLMs align visual observations with textual...
trigger occurs when 1) the agent is within a predefined distance threshold d_success of the target object; and 2) the target object is visually confirmed within the agent's current observation I_t. Methods Overview. Our DORAEMON framework achieves End-to-End and zero-shot navigation through the ontology of two decentralized c...
Topological Map nodes v_t ∈ V, our module organizes information of v_t into a hierarchical structure. The nodes h_j on the hierarchical structure are defined as: h_j = (id_j, l_j, P_j, C_j), (2) where id_j, l_j ∈ {L0, L1, L2, L3}, P_j, and C_j correspond to the unique string identifier, the hierarchy level tag, parent node references, and child node re...
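The node tuple h_j = (id_j, l_j, P_j, C_j) of Eq. (2) maps naturally onto a small record type. A minimal sketch, with field names ours rather than the paper's:

```python
from dataclasses import dataclass, field

@dataclass
class HierarchyNode:
    # h_j = (id_j, l_j, P_j, C_j): identifier, level tag, parent refs, child refs
    node_id: str
    level: str                              # one of "L0", "L1", "L2", "L3"
    parents: list = field(default_factory=list)
    children: list = field(default_factory=list)

# Build a two-level fragment: a room node containing a table node.
root = HierarchyNode("room-0", "L0")
child = HierarchyNode("table-3", "L1", parents=["room-0"])
root.children.append(child.node_id)
```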
measuring overlap in area coverage. A high AORI indicates excessive path overlap and inefficient exploration, specifically addressing the limitations of conventional coverage metrics that neglect temporal-spatial redundancy. AORI is formally defined as: AORI = 1.0 − (w_c · (1.0 − r_overlap)² + w_d · (1.0 − d_norm)), (4) where r_overlap...
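Eq. (4) can be computed directly. In the sketch below the weights w_c and w_d are placeholder values (the excerpt does not state them), chosen to sum to 1 so the metric stays in [0, 1]:

```python
def aori(r_overlap, d_norm, w_c=0.5, w_d=0.5):
    """AORI = 1.0 - (w_c * (1 - r_overlap)^2 + w_d * (1 - d_norm)).

    Higher values mean more path overlap / less efficient exploration.
    The weights w_c and w_d are assumed values, not the paper's.
    """
    return 1.0 - (w_c * (1.0 - r_overlap) ** 2 + w_d * (1.0 - d_norm))

# A path with no cell revisits (r_overlap = 0) and ideal distance (d_norm = 0)
# scores the minimum; full overlap and maximal d_norm scores the maximum.
```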
on HM3Dv2 [40], HM3Dv1 [29], and MP3D [8]. Our main comparison focuses on End-to-End Vision-Language Model (VLM) approaches [14, 25]. Beyond these direct End-to-End counterparts, we also consider a broader set of recent non-End-to-End object navigation methods. More baseline details are given in the Appen...
module, we compared three variants (Dorsal Stream, RAG-VLM of Ventral Stream, and Policy-VLM of Ventral Stream) on HM3D v2. Removing the Dorsal Stream and RAG-VLM implies that the model relies solely on the Policy-VLM of the Ventral Stream in decision-making. The results reported for SR, SPL, and AORI, as presented in T...
data in indoor environments, 2017. [9]Devendra Singh Chaplot, Dhiraj Gandhi, Abhinav Gupta, and Ruslan Salakhutdinov. Object goal navigation using goal-oriented semantic exploration, 2020. [10] Junting Chen, Guohao Li, Suryansh Kumar, Bernard Ghanem, and Fisher Yu. How to not train your dragon: Training-free embodied o...
Jitendra Malik, and Kristen Grauman. Poni: Potential functions for objectgoal navigation with interaction-free learning, 2022. [29] Santhosh Kumar Ramakrishnan, Aaron Gokaslan, Erik Wijmans, Austin Clegg, John M Turner, Manolis Savva, Angel X Chang, and Dhruv Batra. Habitat-Matterport 3D Dataset (HM3D): 1000 large-scal...
Zhongyuan Wang, Shanghang Zhang, and Renjing Xu. Mapnav: A novel memory representation via annotated semantic maps for vlm-based vision-and-language navigation, 2025. [46] Mingjie Zhang, Yuheng Du, Chengkai Wu, Jinni Zhou, Zhenchao Qi, Jun Ma, and Boyu Zhou. Apexnav: An adaptive exploration strategy for zero-shot objec...
This formulation enables direct comparison with baseline methods by normalizing both: T_episode = Σ_t N_t ≤ 500, (11) where N_t denotes the converted steps for the action at time step t. During our experiments, one DORAEMON step t was equivalent to about 7–8 N_t. Algorithm 1: Discrete Step Conversion. Require: Polar action (r, θ), ...
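Algorithm 1's conversion of one polar action (r, θ) into an equivalent count of discrete baseline steps can be sketched as below. The turn and forward increments (30° and 0.25 m) are assumptions for illustration, sized so that one long DORAEMON step costs on the order of 7–8 N_t as the text reports:

```python
import math

def discrete_step_count(r, theta_deg, turn_increment_deg=30.0, forward_increment_m=0.25):
    """Convert one polar action (r meters, theta degrees) into a number of
    discrete turn + forward steps. Increment sizes are assumed, not the paper's."""
    turn_steps = math.ceil(abs(theta_deg) / turn_increment_deg)
    forward_steps = math.ceil(r / forward_increment_m)
    return turn_steps + forward_steps

# e.g. a 60-degree turn followed by 1.5 m of travel: 2 turn steps + 6 forward steps.
```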
reliable cue than vague earlier memories, it chooses Action 3 to move toward that table. The agent is in a dining-style room with a central table and clearly visible chairs. To reach them most directly it selects Action 5, which moves straight toward the table and chairs. Visited Memory [left direction, 1.3m, 1 step ag...
connection distance δ_connect = 1.0 m, node update interval S_update = 3 steps, L1 hierarchical clustering weight w = 0.4, AORI grid resolution δ_grid = 0.1 m, minimum obstacle clearance d_min_obs = 0.5 m, and various stuck detection thresholds (e.g., path inefficiency η_path < 0.25, small area coverage δ_area_gain < 0.35 m², high rot...
items are typically located within a home. There are [N] red arrows superimposed onto your observation, which represent potential actions. These are labeled with a number in a white circle, which represents the location you would move to if you took that action. [TURN_INSTRUCTION] Let's solve this navigati...
Learning selects informative targets during training and uses semantic expansion at inference for zero-shot instance navigation. Pixel-Nav [5]: Introduces pixel-guided navigation skills that bridge foundation models and ObjectNav, relying solely on RGB inputs. SGM [47]: "Imagine Before Go" constructs a self-supervis...
arXiv:2505.21972v1 [cs.LG] 28 May 2025
Judging LLMs on a Simplex
Patrick Vossler1, Fan Xia1, Yifan Mai2, Jean Feng1
Abstract: Automated evaluation of free-form outputs from large language models (LLMs) is challenging because many distinct answers can be equally valid. A common practice is to use LLMs themselves as judges, but ...
https://arxiv.org/abs/2505.21972v1
judges score each candidate’s answer according to a rubric. Candidates are ranked based on their judge-assigned scores. Shaded boxes indicate cases where the same LLM serves as both candidate and judge (self-judging). framework, representing judges and candidates as points on a probability simplex. By visualizing the p...
LLM judges when there is no true agreed-upon rating scale, but our work addresses a more fundamental question: even with an agreed-upon rating scale, what theoretical guarantees can we provide about ranking accuracy? Uncertainty Quantification in LLM Evaluation: Current frameworks lack uncertainty quantifica- tion that...
correspond to the M points on the simplex: θ^(j)_{m,k} = ( Pr(Ŝ^(j)_k = 1 | S*_k = m), ···, Pr(Ŝ^(j)_k = M | S*_k = m) ) ∀ m = 1, ···, M. (3.1) Then the distribution of judge-assigned scores for the candidate is the convex mixture γ^(j)_k = Σ_{m=1}^{M} π_{k,m} θ^(j)_{m,k}, where π_{k,m} = Pr(S*_k = m) are the prevalences of the true scores. Prevalences ⃗π_k = (π_k,...
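The convex mixture γ^(j)_k = Σ_m π_{k,m} θ^(j)_{m,k} is a prevalence-weighted average of the judge's per-true-score distributions. A minimal numeric sketch (concrete numbers are made up for illustration):

```python
def judge_score_distribution(prevalences, theta):
    """gamma[s] = sum_m prevalences[m] * theta[m][s], where
    theta[m][s] = Pr(judge assigns score s+1 | true score is m+1)."""
    M = len(prevalences)
    gamma = [0.0] * M
    for m in range(M):
        for s in range(M):
            gamma[s] += prevalences[m] * theta[m][s]
    return gamma

# A perfectly calibrated judge (identity confusion matrix) reproduces the prevalences.
identity = [[1.0, 0.0, 0.0],
            [0.0, 1.0, 0.0],
            [0.0, 0.0, 1.0]]
```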
augmenting each judge vertex with the true score, drawing the triangle between these augmented vertices, and finding the intersection between the triangle and a vertical line at each candidate (Fig 3.2, right). Then the true score of each candidate corresponds to the height of this intersection. That is, comparing perfor...
their relative barycentric coordinates) as long as the judges satisfy a weak monotonicity condition: Assumption 3. The j-th judge's probability of assigning the lowest score when the true score is equal to m decreases with respect to m. Under moderate constancy, we use a similar idea. For candidates that can only be e...
assumptions and any number of judges. While this negative result again may be disappointing, there are various ways to filter down possible rankings as outlined above. These results highlight that ranking uncertainty comes not just from sampling variation (aleatoric uncertainty) but also uncertainty about which assum...
prevalences ⃗π^(j)_k would then replace ⃗π_k in (4.2). The magnitude of REs is thus controlled through the hyperparameter ω ∈ [0, ∞) (ω = 0 implies the constancy assumption holds). (α_{(m1,m2)→(m′1,m′2)})_{(m′1,m′2) s.t. (m1,m2)→(m′1,m′2)} ∼ Dirichlet(⃗β_{(m1,m2)}), θ^(j)_{m′1,m′2} = Σ_{(m1,m2)→(m′1,m′2)} θ^(j)_{(m1,m2)} α_{(m1,m2)→(m′...
evaluate judge adjudication methods, we designed a unified two-stage protocol. First, LLM judges evaluate candidates without access to ground truth, mirroring real-world usage. Second, we generate ground truth scores for each answer by comparing against the correct multiple choice answer on verifiable tasks and obtaini...
be a difficult dataset, so judge quality is lower in this dataset and rank uncertainty is higher. This is in contrast to datasets like MTBench, where the candidates are so easy to distinguish that the choice of method is less important. That said, because there was evidence of self-preference in MTBench (see Appendix),...
on MTBench (left) and GPT-3.5 Turbo as a judge on TLDR (right). Blue triangles are judge configurations sampled from the posterior. Bottom: Posterior distributions for candidate rankings across difficulty levels in GPQA. in different regions of the simplex, leading to much higher ranking uncertainty. These contrasting pat...
in the absence of a gold standard. Stat. Med., 21(18):2653–2669, September 2002. [4] Ralph Allan Bradley and Milton E Terry. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324, December 1952. [5] Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Z...
855–863, September 2010. [20] Jaehun Jung, Faeze Brahman, and Yejin Choi. Trust or escalate: LLM judges with provable guarantees for human agreement. In The Thirteenth International Conference on Learning Representations, 2025. [21] Nimit Kalra and Leonard Tang. VERDICT: A library for compound LLM judge systems. [22] ...
November 2023. [34] Johannes B Reitsma, Anne W S Rutjes, Khalid S Khan, Arri Coomarasamy, and Patrick M Bossuyt. A review of solutions for diagnostic accuracy studies with an imperfect or missing reference standard. J. Clin. Epidemiol., 62(8):797–806, August 2009. [35] Juan Diego Rodriguez, Wenxuan Ding, Katrin Erk, a...
IEEE, June 2010. [48] Minge Xie, Kesar Singh, and Cun-Hui Zhang. Confidence intervals for population ranks in the presence of ties and near ties. J. Am. Stat. Assoc., 104(486):775–788, June 2009. [49] Qiujie Xie, Qingqiu Li, Zhuohao Yu, Yuejie Zhang, Yue Zhang, and Linyi Yang. An empirical analysis of uncertainty i...
dimensions (relevance, consistency, fluency, coherence) on 5-point scales. •Omni-MATH [13]: A benchmark focusing on high-difficulty competition-level problems from International and National Olympiads. These problems present particular challenges for automated evaluation as solutions vary significantly in approach, not...
GPQA, MMLU Pro, Omni-MATH
Meta Llama 3.1 405B | llama-3.1-405b-instruct-turbo | GPQA, MMLU Pro, Omni-MATH
Meta Llama 3.1 70B | llama-3.1-70b-instruct-turbo | GPQA, MMLU Pro, Omni-MATH
Meta Llama 3.1 8B | llama-3.1-8b-instruct-turbo | GPQA, MMLU Pro, Omni-MATH
Meta Llama 4 Maverick 17B | llama-4-maverick-17b-128e-instruct-fp8 | MMLU Pr...
5 = Explanation perfectly aligns with and justifies the selected answer - Accuracy: Did the candidate select the correct answer choice? (-1 = no, 1 = yes, 0 = unsure) * Provide a concise explanation referencing key facts or reasoning that makes the answer correct or incorrect <QUESTION> [[question]] </QUESTION> [[candi...
scale with specific definitions provided to ensure consistent interpretation: Instructions: In this task you will evaluate the quality of summaries written for a news article. You will be shown the original article and [[num_candidates]] candidate summaries. To correctly solve this task, follow these steps: 1. Carefull...
reference answers. This mimics how human experts might evaluate mathematical work without preconceived notions of the "correct" approach. The prompt emphasizes accuracy assessment on a 3-point scale (-1 for incorrect, 0 for partially correct, 1 for correct), with explicit instructions to use the middle category sparing...
- Provide clear justification for each score with specific references to both the candidate answer and ground truth Rate each answer using the following criteria relative to the ground truth reference answer: ### Accuracy Assessment (1 for correct, 0 for partially correct/borderline, -1 for incorrect) Based on the refe...
The GPT-4o mini judge assigned an equal number of perfect '5' scores (50 instances each) to both its own family model, GPT-4, and to Claude v1. While GPT-4o mini gives the same number of top scores to GPT-4 and Claude, the scores from the Claude judge suggest that models tend to receive more favorable evaluations fro...
γ_{k_K}. This implies that θ_0 + π_{k_1}(θ_1 − θ_0) > θ_0 + π_{k_2}(θ_1 − θ_0) > ··· > θ_0 + π_{k_K}(θ_1 − θ_0), which implies that π_{k_1} > π_{k_2} > ··· > π_{k_K}. Proof for Theorem 1(ii). WLOG, let the candidates who are not also judges (referred to as core candidates) have indices k = 3, 4, ···, J. Let the two LLMs who are both judges and candidates be k = j ∈ {1, 2} (...
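The key step, that the mixture θ_0 + π_k(θ_1 − θ_0) orders candidates by their prevalence π_k, can be checked numerically. The vertex vectors below are arbitrary examples (any θ_1 putting more mass on high scores than θ_0 works):

```python
def expected_score(pi, theta0, theta1):
    """Mean judge-assigned score of the mixture gamma = theta0 + pi*(theta1 - theta0),
    with scores 1..M."""
    gamma = [(1.0 - pi) * a + pi * b for a, b in zip(theta0, theta1)]
    return sum((s + 1) * g for s, g in enumerate(gamma))

# Example vertices: theta1 favors high scores, theta0 favors low scores,
# so the expected score is strictly increasing in pi.
theta0 = [0.7, 0.2, 0.1]   # judge's score distribution when the true score is lowest
theta1 = [0.1, 0.2, 0.7]   # judge's score distribution when the true score is highest
scores = [expected_score(pi, theta0, theta1) for pi in (0.2, 0.5, 0.9)]
```

Because the map π ↦ E[score] is affine with positive slope here, candidate rankings by judge-assigned mean score match rankings by π, which is exactly what the displayed chain of inequalities asserts.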
the derivative of the score difference (E.5) is nonzero, i.e., ∇_h [ ((Θ + hΔ)^{−1}(γ_1 − γ_2))^⊤ (0, 1, 2)^⊤ ] ∝ (Θ^{−1}(γ_1 − γ_2))^⊤ (Θ^{−1}Δ)^⊤ (0, 1, 2)^⊤ ≠ 0. (E.6) If this were to hold, then there exists some h > 0 such that the true score rankings between two candidates with marginal score distributions γ_1 and γ_2 from a judge with vertices Θ − hΔ would b...
ℓ_n denote the log likelihood, i.e., ℓ_n(γ_1, ···, γ_K) = (1/n) Σ_{i=1}^{n} Σ_{k=1}^{K} log Pr(Ŝ_{i,k}; γ_k). Because the solution to (E.8) corresponds to the MLE for a multinomial model, and standard regularity conditions apply, the estimator is asymptotically consistent. Formally, this implies that the estimated parameters γ̂_{k,n} satisfy Pr(ℓ...
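For a single candidate, the multinomial log likelihood above is maximized by the empirical score frequencies. A quick numeric check (the score counts are made up for illustration):

```python
import math

def avg_log_likelihood(counts, gamma):
    """(1/n) * sum_i log Pr(S_hat_i; gamma) for one candidate,
    where counts[s] is the number of observations of score s."""
    n = sum(counts)
    return sum(c * math.log(gamma[s]) for s, c in enumerate(counts) if c > 0) / n

counts = [10, 30, 60]                       # hypothetical judge-assigned score counts
mle = [c / sum(counts) for c in counts]     # empirical frequencies = multinomial MLE
uniform = [1 / 3, 1 / 3, 1 / 3]             # any other point on the simplex scores lower
```

By Gibbs' inequality the empirical frequencies beat every other distribution on the simplex, which is the MLE property the consistency argument relies on.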
arXiv:2505.21981v1 [cs.RO] 28 May 2025
Learning Compositional Behaviors from Demonstration and Language
Weiyu Liu1*, Neil Nie1*, Ruohan Zhang1, Jiayuan Mao2†, Jiajun Wu1†
1Stanford University, 2MIT
Abstract: We introduce Behavior from Language and Demonstration (BLADE), a framework for long-horizon robotic manipulation ...
https://arxiv.org/abs/2505.21981v1
[Figure: teaser illustrating four generalization settings on kettle tasks (e.g., "Kettle Filled & On Stove", "Place In Sink", "move faucet head"): Unseen Initial Condition, State Perturbation (kettle moved), Partial Observability, and Geometric Constraints (stove blocked; recovery: move pot to table, after which the stove is not blocked).]
using LLMs to generate planning-compatible action representations [ 59–61]. However, they make assumptions on the availability of state abstractions, while BLADE grounds LLM-generated action definitions without additional labels. Also complementary to methods that leverage these representations for skill learning [ 62,...
BLADE requires humans to additionally provide a list of predicate names in natural language, which we have found to be helpful for LLMs to generate action definitions. We provide additional ablations in Appendix A.2. Based on S, we learn a library of behaviors (a.k.a. abstract actions). Each behavior a ∈ A is a tu...
leverage proprioception (i.e., gripper open state) and object segmentation to automatically segment the continuous trajectories into these basis segments. For example, pushing the faucet head away involves the sequence {close-gripper, push, open-gripper}. This segmentation will be used for LLMs to generate operato...
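The proprioception-based segmentation can be sketched as splitting a trajectory wherever the gripper open/close state flips; the boolean encoding below is our simplification of the actual proprioceptive signal:

```python
def segment_by_gripper(gripper_open):
    """Split timesteps [0, T) into (start, end) index ranges with constant gripper state."""
    segments, start = [], 0
    for t in range(1, len(gripper_open)):
        if gripper_open[t] != gripper_open[t - 1]:
            segments.append((start, t))
            start = t
    if gripper_open:
        segments.append((start, len(gripper_open)))
    return segments

# open -> closed (grasp/push) -> open again, as in the faucet-head example:
trace = [True, True, False, False, True]
```

Each resulting segment corresponds to one basis motion (e.g., close-gripper, push, open-gripper), giving the LLM aligned spans to describe when generating operators.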
directly use the first and last state of state-action segments to train predicate classifiers, our method greatly increases the diversity of training data. After this step, for each predicate p ∈ P, we obtain a dataset of paired observations o and the predicate value of p at the corresponding time step. Classifier learnin...
[Figure: example language goals and their logical forms. Abstract Goal "Put All Blocks Inside Drawer": ∀x. is-block(x) ⇒ in(x, drawer). Partial Observability "Find Block In Slider": is-block(x), is-blue(x), is-table(y), on(x, y). Geometric Constraints "Move Sliding Door Left": is-sliding-door(x), left(x). Initial condition and goal state shown for each; blue block not visib...]
of 7 behaviors. Structured transition models learned by BLADE facilitate long-horizon planning. Both SayCan and T2M-Shooting use learned action feasibility models for planning. As shown in Table 1, learning accurate feasibility models directly from raw demonstration data remains a significant challenge. In our experimen...
38.6% improvement for classifying spatial relations) compared to the baseline model. This also translates into significant improvements in the planning success rate, as shown in Table 2. 5.3 Real World Experiments. Environments. We use a Franka Emika robot arm with a parallel-jaw gripper. The setup includes five RealSen...
Figure 6: Real World Planning and Execution. We show the execution traces from BLADE and Robot-VILA for two generalization tasks: (a) partial observability (cup not visible in the initial condition; it becomes visible once the drawer is open) and (b) geometric constraints (left and right doors blocked by the kettle) and ...
foresight: Planning through what can be done in the future. In ICRA, 2021. [4] H. Shi, H. Xu, Z. Huang, Y. Li, and J. Wu. RoboCraft: Learning to see, simulate, and shape elasto-plastic objects in 3D with graph networks. IJRR, 43(4):533–549, 2024. [5] C. Lynch, M. Khansari, T. Xiao, V. Kumar, J. Tompson, S. Levine, ...
sequential interaction landscapes. In CoRL, 2020. [22] C. Wang, L. Fan, J. Sun, R. Zhang, L. Fei-Fei, D. Xu, Y. Zhu, and A. Anandkumar. MimicPlay: Long-horizon imitation learning by watching human play. In CoRL, 2023. [23] C. Lynch and P. Sermanet. Language conditioned imitation learning over unstructured data. In R...
In NeurIPS, 2023. [40] S. Kambhampati, K. Valmeekam, L. Guan, K. Stechly, M. Verma, S. Bhambri, L. Saldyt, and A. Murthy. LLMs can't plan, but can help planning in LLM-modulo frameworks. arXiv:2402.01817, 2024. [41] Y. Chen, J. Arkin, Y. Zhang, N. Roy, and C. Fan. AutoTAMP: Autoregressive task and motion planni...
J. Ren, R. Abdullah, A. Bhardwaj, A. Chao, K. Y. Chen, N. Chin, P. Dan, X. Fan, et al. MOSAIC: A modular system for assistive and interactive cooking. arXiv preprint arXiv:2402.18796, 2024. [57] Y. Hu, F. Lin, T. Zhang, L. Yi, and Y. Gao. Look before you leap: Unveiling the power of GPT-4V in robotic vision-langu...
L. Manuelli, and D. Fox. Perceiver-Actor: A multi-task transformer for robotic manipulation. In CoRL, 2023. [75] T.-W. Ke, N. Gkanatsios, and K. Fragkiadaki. 3D Diffuser Actor: Policy diffusion with 3D scene representations. arXiv:2402.10885, 2024. [76] Z. Zhang, Y. Li, O. Bastani, A. Gupta, D. Jayaraman, Y. ...
domain. Then, we generate behavior descriptions based on the automatically generated predicates. To generate high-quality predicates and behavior descriptions, we take the following steps. First, the LLM is provided with the list of objects in the scene and the language-paired demonstration sequence and is required to ...
is-blocking(?x, ?y): is-blocking(pot, left-door); is-right-cabinet-door-blocked(?x), mapped to is-blocking(?x, ?y): is-blocking(pot, right-door); is-closed(?x): is-closed(left-door), is-closed(right-door), is-closed(drawer); is-moved-away(?x): is-moved-away(pot). A.3 Temporal Segmentation. Before the generation of behavior descriptions, we ...
meaningful interface between actions and language. A.4 Abstract Verification After the generation of the behavior descriptions, we verify the generated behavior descriptions by performing abstract verification on the demonstration trajectories. Given a segmented sequence of the trajectory where each segment is associat...
the logit for binary classification. The CLIP model is frozen, while all other learnable parameters are trained. In the real-world experiment, we find that, with more limited data than simulation, the pre-trained CLIP model often overfits to spurious relations in the training images (e.g., the state of the faucet is en...
representations, which are complementary to general-purpose VLMs in recognizing geometric and spatial concepts; 3) our method provides a way to learn user-specific predicates (e.g., a predicate that determines whether clean dishes are arranged according to a user’s preferences) from demonstrations. In our preliminary e...
only the first set of preconditions will be added to the subgoal list. After we have finished planning for the first-level preconditions, we consider the second-level precondition for the first behavior in the resulting plan, by possibly moving other obstacles away. As an example, let us consider the skill of opening t...
and the led are off. • Variation: The initial states of the led and the lightbulb are both on and the goal is to turn them off. Task-2 • Task Category: Abstract Goal • Language Instruction: move all blocks to the closed drawer. • Logical Goal: (and (is-in red-block drawer) (is-in blue-block drawer) (is-in pink-block drawer...
the prompts provided in the original paper to the CALVIN environment. The prompts are divided into the initial prompt that is used to generate the task plan given the initial observation (shown in Listing 9) and the follow-up prompt that is used for all subsequent steps (shown in Listing 10). We use gpt-4-turbo-2024-...
for four types of generalization: Unseen Initial Condition, State Perturbation, Partial Observability, and Geometric Constraint. Task-1 • Domain: Boil Water • Task Category: Unseen Initial Condition • Language Instruction: Fill the kettle with water and place it on the stove • Logical Goal: (and (is-filled kettle) (is-plac...
inside the cabinet. The drawer is open with the teabag visible. Task-7 • Domain: Make Tea • Task Category: Partial Observability • Language Instruction: Place the kettle on the stove and place the teabag inside the kettle. • Logical Goal: (and (is-placed-on kettle stove) (is-placed-inside teabag kettle)) • Initial State: Th...
(is-lifted ?block)) :effect (and (is-in ?block ?slider) (not (is-lifted ?block))) :body (then (place ?block ?slider) ) ) ;; place_in_drawer (:action place-in-drawer :parameters (?block - item ?drawer - item) :precondition (and (is-block ?block) (is-drawer ?drawer) (is-lifted ?block) (is-open ?drawer) ) :effect (and (is...
:effect (and (is-turned-off ?led) (not (is-turned-on ?led))) :body (then (close) (push ?led) (open) ) ) ;; push_into_drawer (:action push-into-drawer :parameters (?block - item ?drawer - item) :precondition (and (is-block ?block) (is-drawer ?drawer) (is-open ?drawer)) :effect (and (is-in ?block ?drawer)) :body (then (c...
predicate applies to a drawer. - (is-close ?x - item): ?x is close. This predicate applies to a drawer. - (is-turned-on ?x - item): ?x is turned on. This predicate applies to a lightbulb or a led. - (is-turned-off ?x - item): ?x is turned off. This predicate applies to a lightbulb or a led. - (is-slider-left ?x - item)...
?slider - item) :precondition (and (is-block ?block) (is-slider ?slider) (is-lifted ?block)) :effect (and (is-in ?block ?slider) (not (is-lifted ?block))) :body (then (place ?block ?slider) ) ) </code> Listing 5: Example Prompt for CALVIN–Instructions. **Think Step-by-Step: ** To generate the lifted description, you sh...
the objects. For example, a robot arm cannot go through a closed door. 4. For each parameter in :parameters, you should use one of the predicates for specifying the type of the object to indicate its type (e.g., is-drawer, is-block, etc.). Listing 6: Example Prompt for CALVIN–Task Input. **Current Task:** place_in_...
turn_on_led 34. turn_off_led Before writing the operators, define the predicates that should be used to write the preconditions and effects of the operators. Group the predicates into unary predicates that define the states of objects and binary relations that specify relations between two objects. For each predicate, ...
the robot can potentially take to accomplish the task. You should rank the actions in terms of how likely they are to be performed next. Goal predicate: (is-turned-off led) Task output: ‘‘‘python [’turn_off_led’, ’do_nothing’] ‘‘‘ In this example above, if the led is on, the robot should turn it off. If the led is alre...
lightbulb. - turn_off_lightbulb: turn off the lightbulb. - turn_on_led: turn on the led. - turn_off_led: turn off the led. - done: the goal has been reached. You are only allowed to use the provided skills. You can first itemize the task-related objects to help you plan. For the actions you choose, list them as a list in th...