Title: A Framework for Active Preference Learning Guided by Large Language Models

URL Source: https://arxiv.org/html/2412.07207

Published Time: Mon, 23 Dec 2024 01:15:03 GMT

Markdown Content:

###### Abstract

The advent of large language models (LLMs) has sparked significant interest in using natural language for preference learning. However, existing methods often suffer from high computational burdens, taxing human supervision, and lack of interpretability. To address these issues, we introduce MAPLE, a framework for large language model-guided Bayesian active preference learning. MAPLE leverages LLMs to model the distribution over preference functions, conditioning it on both natural language feedback and conventional preference learning feedback, such as pairwise trajectory rankings. MAPLE also employs active learning to systematically reduce uncertainty in this distribution and incorporates a language-conditioned active query selection mechanism to identify informative and easy-to-answer queries, thus reducing human burden. We evaluate MAPLE's sample efficiency and preference inference quality across two benchmarks, including a real-world vehicle route planning benchmark using OpenStreetMap data. Our results demonstrate that MAPLE accelerates the learning process and effectively improves humans' ability to answer queries.

Introduction
------------

Following significant advancements in artificial intelligence, autonomous agents are increasingly being deployed in real-world applications to tackle complex tasks (Zilberstein [2015](https://arxiv.org/html/2412.07207v2#bib.bib51); Dietterich [2017](https://arxiv.org/html/2412.07207v2#bib.bib16)). A prominent method for efficiently aligning these agents with human preferences is Active Learning from Demonstration (Active LfD) (Biyik [2022](https://arxiv.org/html/2412.07207v2#bib.bib4)). Preference-based Active LfD, a variant of LfD, aims to infer a preference function from human-generated rankings over a set of observed behaviors using a Bayesian active learning approach.

Recent advancements in natural language processing have inspired many researchers to leverage language-based abstraction for learning human preferences (Soni et al. [2022](https://arxiv.org/html/2412.07207v2#bib.bib40); Guan, Sreedharan, and Kambhampati [2022](https://arxiv.org/html/2412.07207v2#bib.bib18)). This approach offers a more flexible and interpretable way to learn preferences compared to conventional methods (Sadigh et al. [2017](https://arxiv.org/html/2412.07207v2#bib.bib38); Brown, Goo, and Niekum [2019](https://arxiv.org/html/2412.07207v2#bib.bib8); Brown et al. [2019](https://arxiv.org/html/2412.07207v2#bib.bib9)). More recent work (Yu et al. [2023](https://arxiv.org/html/2412.07207v2#bib.bib47); Ma et al. [2023](https://arxiv.org/html/2412.07207v2#bib.bib30)) has focused on utilizing large language models (LLMs), such as ChatGPT (Achiam et al. [2023](https://arxiv.org/html/2412.07207v2#bib.bib2)), with prompting-based approaches to learn preferences from natural language instructions. However, these methods often require significant computational resources and taxing human supervision, as they lack a systematic querying approach.

To tackle these challenges, we introduce a novel framework: MAPLE (Model-guided Active Preference Learning). MAPLE begins by interpreting natural language instructions from humans and utilizes large language models (LLMs) to estimate a distribution over preference functions. It then applies an active learning approach to systematically reduce uncertainty about the correct preference function. This is achieved through standard Bayesian posterior updates, conditioned on both conventional preference learning feedback, such as pairwise trajectory rankings, and linguistic feedback, such as clarifications or explanations of the reasoning behind a preference. To further ease human effort, MAPLE incorporates a language-conditioned active query selection mechanism that leverages feedback on the difficulty of previous queries to choose future queries that are both informative and easy to answer. MAPLE represents preference functions as a linear combination of abstract language concepts, providing a modular structure that enables the framework to acquire new concepts over time and enhance sample efficiency for future instructions. Moreover, this interpretable structure allows for human auditing of the learning process, facilitating human-guided validation before applying the preference function to optimize behavior.

In our experiments, we evaluate the efficacy of MAPLE in terms of sample efficiency during learning, as well as the quality of the final preference function. We use an environment based on the popular Minigrid (Chevalier-Boisvert et al. [2023](https://arxiv.org/html/2412.07207v2#bib.bib14)) and introduce a new realistic vehicle routing benchmark based on OpenStreetMap (OpenStreetMap Contributors [2017](https://arxiv.org/html/2412.07207v2#bib.bib33)) data, which includes text descriptions of the road networks of different cities in the USA. Our evaluation shows the effectiveness of MAPLE in preference inference and in improving humans' ability to answer queries. Our contributions are threefold:

* We propose a Bayesian preference learning framework that leverages LLMs and natural language explanations to reduce uncertainty over preference functions.
* We provide a language-conditioned active query selection approach to reduce human burden.
* We conduct extensive evaluations, including the design of a realistic new benchmark that can be used for future research in this area.

Related Work
------------

#### Learning from demonstration

Most Learning from Demonstration (LfD) algorithms learn a reward function using expert trajectories (Ng and Russell [2000](https://arxiv.org/html/2412.07207v2#bib.bib32); Abbeel and Ng [2004](https://arxiv.org/html/2412.07207v2#bib.bib1); Ziebart et al. [2008](https://arxiv.org/html/2412.07207v2#bib.bib50)). Some of these approaches utilize a Bayesian framework to learn the reward or preference function (Ramachandran and Amir [2007](https://arxiv.org/html/2412.07207v2#bib.bib37); Brown et al. [2020](https://arxiv.org/html/2412.07207v2#bib.bib10); Mahmud, Saisubramanian, and Zilberstein [2023](https://arxiv.org/html/2412.07207v2#bib.bib31)), and some pair it with active learning to reduce the number of human queries (Sadigh et al. [2017](https://arxiv.org/html/2412.07207v2#bib.bib38); Basu, Singhal, and Dragan [2018](https://arxiv.org/html/2412.07207v2#bib.bib3); Biyik [2022](https://arxiv.org/html/2412.07207v2#bib.bib4)). However, these methods are unable to utilize natural language abstraction, whereas our method can use both conventional and linguistic feedback. In addition, we employ language-conditioned active learning to reduce user burden, an approach not previously explored in this context.

#### Natural language in intention communication

With the advent of natural language processing, several works have focused on directly communicating abstract concepts to agents (Tevet et al. [2022](https://arxiv.org/html/2412.07207v2#bib.bib42); Guo et al. [2022](https://arxiv.org/html/2412.07207v2#bib.bib21); Wang et al. [2024](https://arxiv.org/html/2412.07207v2#bib.bib45); Sontakke et al. [2024](https://arxiv.org/html/2412.07207v2#bib.bib41); Lin et al. [2022](https://arxiv.org/html/2412.07207v2#bib.bib26); Tien et al. [2024](https://arxiv.org/html/2412.07207v2#bib.bib44); Lou et al. [2024](https://arxiv.org/html/2412.07207v2#bib.bib27)). The key difference is that these works directly condition behavior on natural language, whereas we learn a language-abstracted preference function. This approach offers several advantages, including increased transparency, a more fine-grained trade-off between concepts, and enhanced transferability. The work most closely related to ours is Lin et al. ([2022](https://arxiv.org/html/2412.07207v2#bib.bib26)), which infers rewards from language but restricts them to step-wise decision-making.

Other lines of work (Yu et al. [2023](https://arxiv.org/html/2412.07207v2#bib.bib47); Ma et al. [2023](https://arxiv.org/html/2412.07207v2#bib.bib30)) aim to learn reward functions directly by prompting LLMs. However, these methods are limited by the variables available in the coding space and often struggle with identifying temporally extended abstract behaviors. Further, these approaches cannot utilize conventional preference feedback, whereas MAPLE can utilize both linguistic and conventional feedback. Additionally, they either lack a systematic way of acquiring human feedback or rely on data-hungry evolutionary algorithms. In contrast, our approach employs more efficient Bayesian active learning.

#### Abstraction in reward learning

Several works leverage abstract concepts to learn reward functions (Lyu et al. [2019](https://arxiv.org/html/2412.07207v2#bib.bib29); Illanes et al. [2020](https://arxiv.org/html/2412.07207v2#bib.bib23); Icarte et al. [2022](https://arxiv.org/html/2412.07207v2#bib.bib22); Guan, Valmeekam, and Kambhampati [2022](https://arxiv.org/html/2412.07207v2#bib.bib19); Soni et al. [2022](https://arxiv.org/html/2412.07207v2#bib.bib40); Bobu et al. [2021](https://arxiv.org/html/2412.07207v2#bib.bib6); Guan et al. [2021](https://arxiv.org/html/2412.07207v2#bib.bib20); Guan, Sreedharan, and Kambhampati [2022](https://arxiv.org/html/2412.07207v2#bib.bib18); Silver et al. [2022](https://arxiv.org/html/2412.07207v2#bib.bib39); Zhang et al. [2022](https://arxiv.org/html/2412.07207v2#bib.bib48); Bucker et al. [2023](https://arxiv.org/html/2412.07207v2#bib.bib12); Cui et al. [2023](https://arxiv.org/html/2412.07207v2#bib.bib15)). Two methods closely related to our work are PRESCA (Soni et al. [2022](https://arxiv.org/html/2412.07207v2#bib.bib40)) and RBA (Guan, Valmeekam, and Kambhampati [2022](https://arxiv.org/html/2412.07207v2#bib.bib19)). PRESCA learns state-based abstract concepts to be avoided, while RBA learns temporally extended concepts with two variants: global (eliciting preference weights directly from humans) and local (tuning weights using binary search). Our approach also leverages temporally extended concepts but learns preference functions from natural language feedback using active learning. Unlike RBA, which relies on direct preference weights from humans or binary search, our method uses LLM-guided active learning for more expressive and informative preference elicitation, thereby reducing human effort.

Some works use offline behavior datasets or demonstrations to learn diverse skills (Lee and Popović [2010](https://arxiv.org/html/2412.07207v2#bib.bib24); Wang et al. [2017](https://arxiv.org/html/2412.07207v2#bib.bib46); Zhou and Dragan [2018](https://arxiv.org/html/2412.07207v2#bib.bib49); Peng et al. [2018](https://arxiv.org/html/2412.07207v2#bib.bib34); Luo et al. [2020](https://arxiv.org/html/2412.07207v2#bib.bib28); Chebotar et al. [2021](https://arxiv.org/html/2412.07207v2#bib.bib13); Peng et al. [2021](https://arxiv.org/html/2412.07207v2#bib.bib35)), which complement our approach. While MAPLE can also utilize such datasets in pre-training, its focus is to encode human preferences in terms of these concepts using natural language.

#### Alignment auditing

Alignment auditing ensures that an agent's behavior aligns with human intentions by verifying that the agent has learned the correct preference function. While some works focus on alignment verification with minimal queries (Brown, Schneider, and Niekum [2021](https://arxiv.org/html/2412.07207v2#bib.bib11)), they often rely on function weights, value weights, or trajectory rankings, which are difficult to interpret. In contrast, our approach leverages natural language to communicate with humans, facilitating validation and serving as a stopping criterion for the active learning process. Mahmud, Saisubramanian, and Zilberstein ([2023](https://arxiv.org/html/2412.07207v2#bib.bib31)) present a notable alignment auditing approach related to our method, using explanations to detect misalignment and update distributions over preferences. While they employ a feature attribution method, we use natural language explanations. Additionally, they use human-selected or randomly sampled data points from an offline dataset for auditing, whereas we employ active learning to enhance efficiency.

#### Active learning

Previous works have explored different acquisition functions for active learning, typically focusing on selecting queries that maximize certain uncertainty quantification metrics. These metrics include predictive entropy (Gal and Ghahramani [2016](https://arxiv.org/html/2412.07207v2#bib.bib17)), uncertainty volume reduction (Sadigh et al. [2017](https://arxiv.org/html/2412.07207v2#bib.bib38)), mutual information maximization (Biyik et al. [2019](https://arxiv.org/html/2412.07207v2#bib.bib5)), and variation ratios (Gal and Ghahramani [2016](https://arxiv.org/html/2412.07207v2#bib.bib17)). Our approach complements these methods by integrating language-conditioned query selection to reduce user burden. While any of these methods can be paired with MAPLE, we opt for the variation ratio due to its ease of calculation and high effectiveness.

Figure 1: Application of MAPLE to the Natural Language Vehicle Routing Task.

Background
----------

#### Markov decision process (MDP)

A Markov Decision Process (MDP) $M$ is represented by the tuple $M=(S,A,T,S_0,R,\gamma)$, where $S$ is the set of states, $A$ is the set of actions, $T:S\times A\times S\rightarrow[0,1]$ is the transition function, $S_0$ is the initial state distribution, and $\gamma\in[0,1)$ is the discount factor. A history $h_t$ is a sequence of states up to time $t$, $(s_0,\dots,s_t)$. The reward function $R:H\times A\rightarrow[-R_{\text{max}},R_{\text{max}}]$ maps histories and actions to rewards. For some problems, a goal function $G:H\rightarrow[0,1]$ is provided that maps histories to goal achievements. In such problems, the reward function is typically $R:H\times A\rightarrow[-R_{\text{max}},0]$, and for all $a\in A$, $T(s_g,a,s_g)=1$ and $R(h_t\cup s_g,a)=0$ given the final state $s_g\in h_t$. A policy $\pi:H\times A\rightarrow[0,1]$ is a mapping from histories to a distribution over actions. The policy $\pi$ induces a value function $V^{\pi}:S\rightarrow\mathbb{R}$, which represents the expected cumulative return $V^{\pi}(s)$ that the agent can achieve from state $s$ when following policy $\pi$. An optimal policy $\pi^{*}$ maximizes the expected cumulative return $V^{*}(s)$ from any state $s$, particularly from the initial state $s_0$.
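
To ground this notation, here is a minimal Python sketch (ours, not the paper's) of an MDP with a history-based reward and a discounted trajectory return; all names are illustrative.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

State, Action = str, str
History = Tuple[State, ...]  # a history h_t is a sequence of states (s_0, ..., s_t)

@dataclass
class MDP:
    states: List[State]
    actions: List[Action]
    transition: Dict[Tuple[State, Action, State], float]  # T(s, a, s') in [0, 1]
    initial: Dict[State, float]                           # initial state distribution S_0
    reward: Callable[[History, Action], float]            # R(h_t, a) in [-R_max, R_max]
    gamma: float                                          # discount factor in [0, 1)

def discounted_return(mdp: MDP, trajectory: List[Tuple[State, Action]]) -> float:
    """Cumulative discounted reward of a trajectory of (state, action) pairs."""
    total, history = 0.0, ()
    for t, (s, a) in enumerate(trajectory):
        history = history + (s,)  # grow the history h_t = (s_0, ..., s_t)
        total += (mdp.gamma ** t) * mdp.reward(history, a)
    return total
```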

#### Bayesian preference learning

A preference function $\omega$ maps a trajectory $\tau$ to a real number reflecting the alignment of the trajectory with the human's objective. The goal of preference learning is to infer this function from various types of human feedback. A common approach involves learning this function from a pairwise preference dataset, denoted by $\mathcal{D}=\{(\tau^1_1\succ\tau^2_1),(\tau^1_2\succ\tau^2_2),\ldots,(\tau^1_n\succ\tau^2_n)\}$, where $\tau^1_i$ and $\tau^2_i$ are two different trajectories, and $\tau^1_i\succ\tau^2_i$ indicates that $\tau^1_i$ is preferred to $\tau^2_i$. A Bayesian framework for preference learning, as described in Ramachandran and Amir ([2007](https://arxiv.org/html/2412.07207v2#bib.bib37)), defines a probability distribution over preference functions given a trajectory dataset $\mathcal{D}$ using Bayes' rule: $P(\omega\mid\mathcal{D})\propto P(\mathcal{D}\mid\omega)P(\omega)$. Various algorithms define $P(\mathcal{D}\mid\omega)$ differently, but we adopt the definition from BREX (Brown et al. [2020](https://arxiv.org/html/2412.07207v2#bib.bib10)) using the Bradley-Terry model (Bradley and Terry [1952](https://arxiv.org/html/2412.07207v2#bib.bib7)):

$$P(\mathcal{D}\mid\omega)=\prod_{(\tau^1_i\succ\tau^2_i)\in\mathcal{D}}\frac{e^{\beta\omega(\tau^1_i)}}{e^{\beta\omega(\tau^1_i)}+e^{\beta\omega(\tau^2_i)}} \quad (1)$$

Here, $\beta\in[0,\infty)$ is the inverse-temperature parameter.
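
As an illustration of Equation (1), the sketch below (our own, not the authors' code) computes the Bradley-Terry log-likelihood of a pairwise preference dataset for a candidate preference function; the trajectory representation and the `omega` callable are assumptions.

```python
import math
from typing import Callable, List, Tuple

def bradley_terry_log_likelihood(
    data: List[Tuple[list, list]],   # pairs (tau_preferred, tau_other)
    omega: Callable[[list], float],  # candidate preference function omega(tau)
    beta: float = 1.0,               # inverse-temperature parameter
) -> float:
    """log P(D | omega) under Equation (1), computed stably in log-space."""
    ll = 0.0
    for tau_pref, tau_other in data:
        a, b = beta * omega(tau_pref), beta * omega(tau_other)
        m = max(a, b)  # log(e^a / (e^a + e^b)) = a - logsumexp(a, b)
        ll += a - (m + math.log(math.exp(a - m) + math.exp(b - m)))
    return ll
```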

#### Variance ratio

Given a conditional probability distribution $P(\cdot\mid X)$ over $\{y_i\}_{i=0}^{k}$, the variance ratio of an input $X$ is defined as follows:

$$\text{Variance\_Ratio}(X)=1-\max_{y_i} P(y_i\mid X)$$

For example, if $P(\cdot\mid X)=(0.5, 0.3, 0.2)$, the variance ratio is $1-0.5=0.5$; it is largest when the predictive distribution is most uncertain.

Problem Formulation
-------------------

#### MAPLE

We define a MAPLE problem instance as the tuple $(M_{-R}, C, \Omega, D_\tau, \mathcal{H}, \mathbb{L})$, where:

* $M_{-R}$ is an MDP with an undefined reward function $R$.
* $\mathcal{H}$ is the human interaction function that acts as the interface between the human and the MAPLE framework. Humans provide their feedback, preferences, and explanations in response to natural language queries posed by MAPLE.
* $\mathbb{L}$ is the LLM interaction function that generates natural language queries to the LLM and returns structured output in text files, such as JSON format.
* $C$ is an expanding set of natural language concepts $\{c_1, c_2, \ldots, c_n\}$. We also use $C(\cdot)$ to refer to a mapping model that takes a trajectory embedding $\phi(\tau)$ and a natural language concept embedding $\psi(c_i)$ and maps them to a numeric value indicating the degree to which the trajectory $\tau$ satisfies the concept $c_i$. For non-Markovian concepts, $\phi(\cdot)$ may be a sequence model such as a transformer. For Markovian concepts, we can define $C(\phi(\tau),\psi(c_i))=\sum_{s\in\tau} C(\phi(s),\psi(c_i))$, where $\phi(s)$ is the state embedding.
* $\Omega$ is the space of all preference functions. In MAPLE, the preference function $\omega$ over a trajectory $\tau$ is modeled as a linear combination of the concepts and their associated weights (see the sketch after this list):

  $$\omega(\tau)=\sum_{c_i\in C}\omega_{c_i}\cdot C(\phi(\tau),\psi(c_i)) \quad (2)$$

* $D_\tau$ is a dataset of unlabeled trajectories $\{\tau_1, \tau_2, \ldots, \tau_m\}$.
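
As a concrete illustration of Equation (2), the following sketch (names are ours) computes the preference value of a trajectory from per-concept scores produced by the mapping model $C(\cdot)$, here treated as a black-box callable.

```python
from typing import Callable, Dict

def preference_value(
    trajectory: object,                             # tau, in whatever form C(.) consumes
    weights: Dict[str, float],                      # omega_{c_i} for each concept c_i
    concept_score: Callable[[object, str], float],  # C(phi(tau), psi(c_i)) as a black box
) -> float:
    """omega(tau) = sum_i omega_{c_i} * C(phi(tau), psi(c_i))  (Equation 2)."""
    return sum(w * concept_score(trajectory, c) for c, w in weights.items())
```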

The objective of MAPLE is to model the repeated interaction between a human and an agent, where the human communicates their task objective $\mathcal{A}^{\mathcal{H}}_{\mathcal{T}}$ in natural language, and the agent is responsible for completing the task in alignment with that objective. MAPLE accomplishes this by actively learning a symbolic preference function $\omega$ using large language models (LLMs), enabling the agent to optimize its behavior according to this function to ensure its actions align with human preferences.

#### Motivating example

Consider an intelligent route planning system that takes a source, a destination, and user preferences about the route in natural language, as illustrated in Figure [1](https://arxiv.org/html/2412.07207v2#Sx2.F1). Datasets for several preference-defining concepts, such as speed, safety, battery friendliness, smoothness, autopilot friendliness, and scenic view, can easily be obtained and used to pre-train the concept mapping function $C(\cdot)$. The goal of MAPLE is to take natural language instructions from a human and map them to a preference function $\omega$ interactively, so that a search algorithm can optimize it to find the preferred route. MAPLE incorporates preference feedback on top of natural language feedback to address issues like hallucination and calibration associated with directly using LLMs. Additionally, MAPLE allows the human to skip difficult queries and learns in-context which query to present, making the system more human-friendly. Furthermore, the preference function inference process in MAPLE is fully interpretable, enabling a human to audit the process thoroughly and provide the necessary feedback for improvement. Finally, the interaction with the human is repeated, allowing MAPLE to acquire new concepts over time and become more efficient for future tasks.

Detailed Description of the Proposed Method
-------------------------------------------

A key innovation of MAPLE is the integration of conventional feedback from the preference learning literature with more expressive linguistic feedback, formally captured within a Bayesian framework introduced in REVEALE (Mahmud, Saisubramanian, and Zilberstein [2023](https://arxiv.org/html/2412.07207v2#bib.bib31)):

$$P(\omega\mid F_h, F_l)\propto P(F_h\mid\omega)\,P(F_l\mid\omega)\,P(\omega) \quad (3)$$

Above, $F_h$ represents the set of feedback observed in conventional preference learning algorithms, which in the context of this paper is pairwise trajectory rankings (MAPLE can handle any conventional feedback for which $P(F_h\mid\omega)$ is defined). $F_l$ denotes the set of linguistic feedback. We can rewrite the equation as:

$$P(\omega\mid F_h, F_l)\propto \underbrace{P(F_h\mid\omega)}_{\text{Bradley-Terry model}}\,\underbrace{P(\omega\mid F_l)}_{\text{LLM}}\,\underbrace{P(F_l)}_{\text{Uniform}} \quad (4)$$

$$\propto \underbrace{P(F_h\mid\omega)}_{\text{Bradley-Terry model}}\,\underbrace{P(\omega\mid F_l)}_{\text{LLM}} \quad (5)$$

Here, the likelihood of $F_h$ given $\omega$ is defined using the Bradley-Terry model, and the likelihood of $\omega$ given $F_l$ is estimated using an LLM. Beyond incorporating linguistic feedback via LLMs, MAPLE advances conventional active learning methods. Conventional active learning typically focuses on selecting queries that reduce the maximum uncertainty of the posterior but lacks a flexible mechanism to account for human capability in responding to certain types of queries. MAPLE's oracle-guided active query selection enhances any conventional acquisition function by leveraging linguistic feedback to alleviate the human burden associated with difficult queries. In the rest of this section, we provide more details on MAPLE, particularly Algorithms [1](https://arxiv.org/html/2412.07207v2#alg1) and [2](https://arxiv.org/html/2412.07207v2#alg2).
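
One plausible way to realize the posterior update of Equation (5) in code (line 9 of Algorithm 1) is self-normalized importance resampling over LLM-proposed weight vectors; this sketch is our own reading, not necessarily the authors' implementation.

```python
import math
import random
from typing import Callable, Dict, List

def resample_posterior(
    proposals: List[Dict[str, float]],  # omega candidates drawn from P(omega | F_l)
    log_likelihood: Callable[[Dict[str, float]], float],  # log P(F_h | omega)
    n_samples: int = 100,
) -> List[Dict[str, float]]:
    """Approximate omega ~ P(F_h | omega) P(omega | F_l) by importance-resampling."""
    log_w = [log_likelihood(omega) for omega in proposals]
    m = max(log_w)  # stabilize before exponentiating
    weights = [math.exp(lw - m) for lw in log_w]
    return random.choices(proposals, weights=weights, k=n_samples)
```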

Algorithm 1 MAPLE

```text
Input: human instruction A^H_T, acquisition function A_f, number of LLM queries K

 1: F_h, F_q ← ∅, ∅
 2: F_l ← {A^H_T}
 3: Ω_T ← {ω_i}_{i=0..n} ∼ L(ω | F_l)
 4: while the stopping condition is not met do
 5:     Q ← {(τ_i, τ_j) : τ_i, τ_j ∈ D_τ ∧ (τ_i, τ_j) ∉ F_h}
 6:     q ← QuerySelection(A_f, Q, F_q, Ω_T, L, K)
 7:     (f_h, f_l, f_q) ← H(q)
 8:     F_h, F_l, F_q ← F_h ∪ {f_h}, F_l ∪ {f_l}, F_q ∪ {f_q}
 9:     Ω_T ← {ω_i}_{i=0..n} ∼ P(F_h | ω) · P(ω | F_l)
10: end while
11: return Ω_T
```

### Initialization

MAPLE starts by taking a natural language instruction about the task preference, $\mathcal{A}^{\mathcal{H}}_{\mathcal{T}}$, and initializes the pairwise preference feedback set $F_h$, the linguistic feedback set $F_l$, and the query-difficulty feedback set $F_q$ (lines 1-2, Algorithm [1](https://arxiv.org/html/2412.07207v2#alg1)). After that, the initial set of weights is sampled using the LLM from the distribution $P(\omega\mid F_l)$, since $F_h$ is still empty (line 3, Algorithm [1](https://arxiv.org/html/2412.07207v2#alg1)). To sample $\omega$ from $P(\omega\mid F_l)$, we explore the two sampling strategies described below.

#### Preference weight sampling from LLM

We directly prompt the LLM $\mathbb{L}$ to provide linear weights $\omega$ over the abstract concepts. Specifically, we provide $\mathbb{L}$ with a prompt containing the task description $\mathcal{T}$, a list of known concepts $C$, the human preference $\mathcal{A}^{\mathcal{H}}_{\mathcal{T}}$, and examples of instruction-weight pairs $D_{\mathcal{I}}$, along with additional answer generation instructions $G$ (see Appendix for details). The LLM processes this prompt and returns an answer $\mathcal{A}^{\mathbb{L}}_{\omega_i}$:

$$\mathcal{A}^{\mathbb{L}}_{\omega_i}\leftarrow\mathbb{L}(\text{prompt}(\mathcal{T}, C, \mathcal{A}^{\mathcal{H}}_{\mathcal{T}}, D_{\mathcal{I}}, G))$$

We can take advantage of the text generation temperature to collect a diverse set of samples. We define the set of all generated weights as $\mathcal{A}^{\mathbb{L}}_{\omega}$. Then $P(\omega_j\mid F_l)$ can be modeled for any arbitrary $\omega_j$ as follows:

$$P(\omega_j\mid F_l)=\exp\!\left(-\beta_l\,\mathbb{E}_{\omega_i\in\mathcal{A}^{\mathbb{L}}_{\omega}}\left[\text{Distance}(\omega_i,\omega_j)\right]\right) \quad (6)$$

In this case, Euclidean or cosine distance can be applied.
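
A small sketch of Equation (6) under these choices (Euclidean distance here; the names are ours): the unnormalized density of a candidate weight vector falls off with its average distance to the LLM-generated samples.

```python
import math
from typing import Dict, List

def llm_sample_prior(
    omega_j: Dict[str, float],            # candidate weight vector
    llm_samples: List[Dict[str, float]],  # A^L_omega: weight vectors sampled from the LLM
    beta_l: float = 1.0,
) -> float:
    """Unnormalized P(omega_j | F_l) = exp(-beta_l * mean_i Distance(omega_i, omega_j))."""
    def euclidean(a: Dict[str, float], b: Dict[str, float]) -> float:
        keys = set(a) | set(b)
        return math.sqrt(sum((a.get(k, 0.0) - b.get(k, 0.0)) ** 2 for k in keys))

    mean_dist = sum(euclidean(w, omega_j) for w in llm_samples) / len(llm_samples)
    return math.exp(-beta_l * mean_dist)
```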

#### Distribution weight sampling using LLM

The second approach we explore is distribution modeling using an LLM. Here, we use prompts similar to those of the previous approach; however, we instruct the LLM to generate parameters for $P(\omega\mid F_l)$. For example, for the weight of each concept $\omega_{c_i}\in\omega$, we prompt $\mathbb{L}$ to generate a range $\omega_{c_i}^{\text{range}}=[\omega_{c_i}^{\text{min}},\omega_{c_i}^{\text{max}}]$. Then we can define $P(\omega\mid F_l)$ as follows:

$$P(\omega\mid F_l)=\begin{cases}1, & \text{if } \omega_{c_i}\in\omega_{c_i}^{\text{range}}\ \forall\,\omega_{c_i}\in\omega\\ 0, & \text{otherwise.}\end{cases} \quad (7)$$

We can model this similarly for other forms of distributions, such as the Gaussian distribution. Once the initialization process is complete, MAPLE iteratively reduces its uncertainty using human feedback.
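
For the box prior of Equation (7), a minimal sketch (the per-concept range format is an assumption) both checks membership in the support and draws samples uniformly from it:

```python
import random
from typing import Dict, Tuple

Ranges = Dict[str, Tuple[float, float]]  # concept -> (omega_min, omega_max) from the LLM

def in_support(omega: Dict[str, float], ranges: Ranges) -> bool:
    """P(omega | F_l) = 1 iff every concept weight lies in its LLM-given range."""
    return all(lo <= omega[c] <= hi for c, (lo, hi) in ranges.items())

def sample_omega(ranges: Ranges) -> Dict[str, float]:
    """Draw one weight vector uniformly from the box prior."""
    return {c: random.uniform(lo, hi) for c, (lo, hi) in ranges.items()}
```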

### LLM-Guided Active Preference Learning

After initialization, MAPLE iteratively follows three steps: 1) query selection, 2) human feedback collection, and 3) preference posterior update, discussed below.

#### Oracle-guided active query selection (OAQS)

At the beginning of each iteration, MAPLE selects a query $q$ (a pair of trajectories) from $\mathcal{D}_\tau$ that would reduce uncertainty the most while mitigating query difficulty based on human feedback (lines 5-6, Algorithm [1](https://arxiv.org/html/2412.07207v2#alg1)). The query selection process is described in Algorithm [2](https://arxiv.org/html/2412.07207v2#alg2), which starts by sorting all the queries based on an acquisition function $\mathcal{A}_f$. In this paper, we use the variance ratio for its flexibility and high efficacy. In particular, for trajectory ranking queries, the score for $(\tau_i,\tau_j)$ is calculated as $\mathbb{E}_{\omega\sim\Omega_{\mathcal{T}}}[1-\max(P(\tau_i\succ\tau_j\mid\omega), P(\tau_j\succ\tau_i\mid\omega))]$. Note that other acquisition functions can also be used. Once sorted, OAQS iterates over the top $K$ queries and selects the first query that the oracle (in our case, an LLM) evaluates to be answerable by the human (lines 2-11). Finally, Algorithm [2](https://arxiv.org/html/2412.07207v2#alg2) returns the least difficult query $q$ among the top $K$ queries selected by $\mathcal{A}_f$. We now analyze the performance of OAQS based on the characterization of the oracle (proofs are in the Appendix).
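
A compact sketch of the OAQS loop just described (helper names such as `pref_prob` and `oracle_answerable` are ours, standing in for the Bradley-Terry preference probability and the LLM oracle conditioned on $F_q$):

```python
from typing import Callable, List, Optional, Tuple

Pair = Tuple[object, object]  # a query: a pair of trajectories (tau_i, tau_j)

def oaqs_select(
    queries: List[Pair],
    posterior: List[object],                               # samples omega ~ Omega_T
    pref_prob: Callable[[object, object, object], float],  # P(tau_i > tau_j | omega)
    oracle_answerable: Callable[[Pair], bool],             # LLM judgment of difficulty
    k: int,
) -> Optional[Pair]:
    """Rank queries by variance ratio, then return the first of the top K
    that the oracle deems answerable by the human."""
    def variance_ratio(pair: Pair) -> float:
        ti, tj = pair
        return sum(1.0 - max(pref_prob(ti, tj, w), pref_prob(tj, ti, w))
                   for w in posterior) / len(posterior)

    ranked = sorted(queries, key=variance_ratio, reverse=True)
    for q in ranked[:k]:
        if oracle_answerable(q):
            return q
    return None  # no easy-to-answer query found among the top K
```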

###### Definition 1

Let $Q$ denote the set of all possible queries, and $Q_{\mathcal{A}}\subseteq Q$ represent the subset of queries answerable by $\mathcal{H}$. The Absolute Query Success Rate (AQSR) is defined as the probability that a randomly selected query $q$ belongs to the intersection $Q\cap Q_{\mathcal{A}}$, i.e., $P(q\in Q_{\mathcal{A}})$.

###### Definition 2

The Query Success Rate (QSR) of a query selection strategy is defined as the probability that a query $q$, selected by the strategy, belongs to $Q_{\mathcal{A}}$, i.e., $P(q\in Q_{\mathcal{A}}\mid\text{strategy})$.

###### Proposition 1

Assuming the independence of AQSR from the acquisition function ranking, the QSR of a random query selection strategy is $P(q\in Q_{\mathcal{A}}\mid\text{random})=AQSR$.

###### Proposition 2

Under the same assumption as Proposition 1, the QSR of a top-query selection strategy, which always selects the highest-rated query by $\mathcal{A}_f$, is $P(q\in Q_{\mathcal{A}}\mid\text{top})=AQSR$.

###### Proposition 3

The QSR of the OAQS strategy is given by

$$AQSR\cdot Y_1\cdot\frac{1-\left[AQSR\cdot(1-Y_0-Y_1)+Y_0\right]^K}{1-\left[AQSR\cdot(1-Y_0-Y_1)+Y_0\right]},$$

where $Y_0=P(\mathbb{L}(F_q, q\notin Q_{\mathcal{A}})=\text{False})$ and $Y_1=P(\mathbb{L}(F_q, q\in Q_{\mathcal{A}})=\text{True})$, i.e., the probabilities that the oracle correctly rejects an unanswerable query and correctly approves an answerable one. Here, we assume the independence of AQSR, $Y_0$, and $Y_1$ from the acquisition function ranking.
###### Corollary 1

Based on Proposition [3](https://arxiv.org/html/2412.07207v2#Thmproposition3), OAQS has a higher QSR than both the random and the top-query selection strategies iff $Y_0 + Y_1 > 1$ as $K \rightarrow \infty$.
###### Definition 3

The Optimal Query Success Rate (OQSR) of a strategy is defined as the probability that the strategy returns the query $q^{*}$ with the highest value according to the acquisition function $\mathcal{A}_f$ among all answerable queries, i.e.,

$$P\left(q^{*} = \arg\max_{q \in Q} \mathcal{A}_f(q)\,\mathbb{I}(q \in Q_{\mathcal{A}})\right),$$

where $q^{*}$ is the query returned by the strategy.
###### Proposition 4

Under the same assumption as Proposition 1, the OQSR of a random query selection strategy is equal to $1/|Q|$.

###### Proposition 5

Under the same assumption as Proposition 1, the OQSR of a top-query selection strategy is equal to the AQSR.
###### Proposition 6

Under the same assumption as Proposition 3, the OQSR of the OAQS strategy is given by

$$\text{OQSR} = \mathrm{AQSR} \cdot Y_1 \cdot \frac{1 - \left[(1-\mathrm{AQSR})\,Y_0\right]^{K}}{1 - (1-\mathrm{AQSR})\,Y_0}.$$
###### Corollary 2

Based on Proposition [6](https://arxiv.org/html/2412.07207v2#Sx5.Ex5), the OAQS strategy has a higher OQSR than the top-query selection strategy if $(1-\mathrm{AQSR})\,Y_0 + Y_1 > 1$ as $K \rightarrow \infty$, and a higher OQSR than the random query selection strategy if $\mathrm{AQSR} \cdot Y_1 > \frac{1-(1-\mathrm{AQSR})\,Y_0}{|Q|}$ as $K \rightarrow \infty$.
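The closed forms in Propositions 3 and 6 are straightforward to evaluate numerically. The sketch below plugs in illustrative (not measured) values of AQSR, $Y_0$, and $Y_1$, and checks the thresholds from Corollaries 1 and 2; the same functions accept an empirically observed success rate in place of AQSR.

```python
def qsr_oaqs(aqsr, y0, y1, k):
    # QSR of OAQS (Proposition 3).
    r = aqsr * (1 - y0 - y1) + y0
    return aqsr * y1 * (1 - r ** k) / (1 - r)

def oqsr_oaqs(aqsr, y0, y1, k):
    # OQSR of OAQS (Proposition 6).
    r = (1 - aqsr) * y0
    return aqsr * y1 * (1 - r ** k) / (1 - r)

aqsr, y0, y1 = 0.6, 0.8, 0.9          # assumed oracle characterization
for k in (1, 5, 50):
    print(k, round(qsr_oaqs(aqsr, y0, y1, k), 3),
             round(oqsr_oaqs(aqsr, y0, y1, k), 3))

# Corollary 1: OAQS beats random/top on QSR iff Y0 + Y1 > 1 (K -> infinity).
print(y0 + y1 > 1)
# Corollary 2: OAQS beats top on OQSR iff (1 - AQSR) * Y0 + Y1 > 1 (K -> infinity).
print((1 - aqsr) * y0 + y1 > 1)
```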
#### Human feedback collection
MAPLE queries the human $\mathcal{H}$ with the query $q$ returned by Algorithm [2](https://arxiv.org/html/2412.07207v2#alg2) to collect feedback. For each query $q$, MAPLE presents a pair of trajectories, and $\mathcal{H}$ returns an answer $\mathcal{A}^{\mathcal{H}}_{\tau} = (f_h, f_l, f_q)$, where $f_h$ is binary preference feedback, $f_l$ is an optional natural language explanation associated with that feedback (possibly empty if the human does not provide one), and $f_q$ is an optional natural language comment on the difficulty of the query. Each piece of feedback is then added to the corresponding feedback set (lines 7-8, Algorithm [1](https://arxiv.org/html/2412.07207v2#alg1)).
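As a minimal sketch, the answer tuple might be represented as follows (the field names and types are our assumptions, not fixed by the paper):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HumanAnswer:
    """One answer A^H_tau = (f_h, f_l, f_q) to a trajectory-pair query."""
    f_h: bool                  # binary preference: True if the first trajectory is preferred
    f_l: Optional[str] = None  # optional language explanation for the choice
    f_q: Optional[str] = None  # optional language feedback on query difficulty

answer = HumanAnswer(
    f_h=True,
    f_l="The first route avoids the highway and stays scenic.",
    f_q="These two routes were easy to compare.")
```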
#### LLM-guided posterior update
Once feedback is added to the sets, we update our current weight samples $\Omega_{\mathcal{T}}$ by sampling from $P(F_h \mid \omega)\,P(\omega \mid F_l)$ using MCMC, where the likelihood $P(F_h \mid \omega)$ is given by Equation [1](https://arxiv.org/html/2412.07207v2#Sx3.E1), and $P(\omega \mid F_l)$ is given by Equations [6](https://arxiv.org/html/2412.07207v2#Sx5.E6) and [7](https://arxiv.org/html/2412.07207v2#Sx5.E7).
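A minimal Metropolis–Hastings sketch of this update, with a Bradley–Terry likelihood standing in for Equation 1 and a Gaussian standing in for the LLM-conditioned prior of Equations 6 and 7 (both stand-ins are illustrative assumptions, not the paper's exact forms):

```python
import numpy as np

def log_likelihood(w, prefs):
    # Sum of Bradley-Terry log-likelihoods over ranking feedback F_h.
    # prefs: list of (phi_winner, phi_loser) feature-vector pairs.
    return sum(-np.log1p(np.exp(-(w @ (pw - pl)))) for pw, pl in prefs)

def log_prior(w, mu, sigma):
    # Gaussian stand-in for the LLM-conditioned prior P(w | F_l).
    return -0.5 * np.sum(((w - mu) / sigma) ** 2)

def mh_sample(prefs, mu, sigma, n_samples=1000, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    w = mu.copy()
    lp = log_likelihood(w, prefs) + log_prior(w, mu, sigma)
    samples = []
    for _ in range(n_samples):
        w_new = w + step * rng.normal(size=w.shape)   # random-walk proposal
        lp_new = log_likelihood(w_new, prefs) + log_prior(w_new, mu, sigma)
        if np.log(rng.random()) < lp_new - lp:        # accept/reject step
            w, lp = w_new, lp_new
        samples.append(w.copy())
    return np.array(samples)

# Toy usage: 2 concepts, one ranking, LLM prior centered at (1, 0).
prefs = [(np.array([1.0, 0.0]), np.array([0.0, 1.0]))]
omega_T = mh_sample(prefs, mu=np.array([1.0, 0.0]), sigma=1.0)
```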
#### Stopping criteria
MAPLE can employ various stopping criteria for active query generation, including:
* A fixed-budget approach, where MAPLE operates within a predefined maximum query limit.
* A human-gated stopping criterion, based on the human's assessment of the system's competence. MAPLE's interpretability enhances this process, allowing the inclusion of its current predictions and explanations in each query for human evaluation (line 7, Algorithm [1](https://arxiv.org/html/2412.07207v2#alg1)).
Algorithm 2 Oracle-Guided Query Selection

Input: acquisition function $\mathcal{A}_f$; list of queries $Q$; query preference feedback $F_q$; set of weights from the current posterior $\Omega_{\mathcal{T}}$; oracle $\mathcal{O}$; number of oracle queries $K$

1: $Q_{sort} \leftarrow \text{sort}(Q \mid \mathcal{A}_f, \Omega_{\mathcal{T}})$
2: $Q_{top} \leftarrow Q_{sort}[0:K]$
3: for $q \in Q_{top}$ do
4:   $s_q \leftarrow \mathcal{O}(\text{prompt}(F_q, q))$
5:   if $s_q$ is True then
6:     return $q$
7:   end if
8: end for
9: return $Q_{sort}[0]$
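In Python, Algorithm 2 is a few lines; the sketch below abstracts the oracle as a callable, and the prompt construction and LLM call on line 4 are application-specific assumptions:

```python
def oaqs(queries, acquisition, oracle, feedback_q, k):
    """Oracle-Guided Query Selection (a direct rendering of Algorithm 2).

    queries:     candidate trajectory pairs Q
    acquisition: q -> informativeness score (e.g., the variance ratio)
    oracle:      (F_q, q) -> True if the LLM judges q answerable by the human
    feedback_q:  accumulated query-difficulty feedback F_q
    k:           number of oracle calls allowed
    """
    q_sort = sorted(queries, key=acquisition, reverse=True)   # line 1
    for q in q_sort[:k]:                                      # lines 2-3
        if oracle(feedback_q, q):                             # lines 4-5
            return q                                          # line 6
    return q_sort[0]     # line 9: fall back to the most informative query
```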
#### Handling unknown concepts
It should be noted that humans may provide instructions $\mathcal{A}^{\mathcal{H}}_{\mathcal{T}}$ that cannot be sufficiently captured by the available concepts in the concept maps. While this case is beyond the scope of this paper, several remedies exist in the literature. LLMs can be prompted to add new concepts when generating weights. By leveraging the generalization capability of $C(\cdot)$, we can attempt to apply these new concepts directly. If a new concept is significantly different from those in $C$, few-shot learning techniques can be employed. In particular, during interactions, if a new concept is important, we can use non-parametric few-shot learning from human feedback, such as nearest neighbor search, to improve concept mapping (Tian et al. [2024](https://arxiv.org/html/2412.07207v2#bib.bib43)). Finally, if a new concept arises repeatedly, it can be added to the concept map by retraining $C$ with data collected from multiple interactions through few-shot learning, as considered in (Soni et al. [2022](https://arxiv.org/html/2412.07207v2#bib.bib40)).
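As a sketch of the nearest-neighbor remedy, assuming some sentence encoder supplies embeddings for known concept names and for the human's new phrase (the toy vectors below are stand-ins):

```python
import numpy as np

def nearest_concept(new_vec, concept_vecs):
    # Map an unseen concept phrase onto the closest known concept
    # by cosine similarity between embeddings.
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(concept_vecs, key=lambda name: cos(new_vec, concept_vecs[name]))

# Toy embeddings for two known concepts and one new phrase.
concept_vecs = {"scenic": np.array([0.9, 0.1]), "safety": np.array([0.1, 0.9])}
print(nearest_concept(np.array([0.8, 0.3]), concept_vecs))  # -> "scenic"
```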
### Policy Optimization
The method for utilizing the weights generated by MAPLE to optimize a policy varies based on the trajectory encoding and the chosen policy solver. For example, for Markovian preferences, the weights can be used directly with an MDP solver. In non-Markovian settings, the weights can be used to rank trajectories and directly align the policy with algorithms such as DPO (Rafailov et al. [2024](https://arxiv.org/html/2412.07207v2#bib.bib36)), or to train a dense reward function (Guan, Valmeekam, and Kambhampati [2022](https://arxiv.org/html/2412.07207v2#bib.bib19)) using preference learning algorithms such as T-REX (Brown et al. [2019](https://arxiv.org/html/2412.07207v2#bib.bib9)), and then use that reward function with reinforcement learning algorithms.
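For the Markovian case, the pipeline can be sketched as follows: the inferred weights define a linear reward $r(s,a) = \omega \cdot \phi(s,a)$ over concept features, which any tabular MDP solver can consume. The value-iteration solver and array shapes below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def solve_with_weights(omega, phi, P, gamma=0.95, sweeps=500):
    # phi: (S, A, d) concept features; P: (S, A, S) transition probabilities.
    r = phi @ omega                                 # (S, A) linear reward table
    V = np.zeros(r.shape[0])
    for _ in range(sweeps):                         # value-iteration sweeps
        V = np.max(r + gamma * (P @ V), axis=1)
    return np.argmax(r + gamma * (P @ V), axis=1)   # greedy policy per state
```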
Figure 2: OpenStreetMap Routing
Experiments
-----------
In this section, we describe a comprehensive evaluation of MAPLE within the two environments detailed below. It is important to note that none of the models used in our experiments were fine-tuned; they were used in their publicly available form. We ran the local language model, specifically Mistral-7B-instruct-v0.3 (4-bit quantization), on a computer equipped with 64GB RAM and an Nvidia RTX 4090 24GB graphics card. For larger models, we relied on public API infrastructure. Note that we present results using preference weight sampling, as it outperformed distribution weight sampling in both benchmarks (see the Appendix for details).
#### OpenStreetMap Routing
We use OpenStreetMap to generate routing graphs for different U.S. states. The environment includes a concept mapping function capable of using ten different concepts: 1) Time, 2) Speed, 3) Safety, 4) Scenic, 5) Battery Friendly, 6) Gas Station Nearby, 7) Charging Station Nearby, 8) Human Driving Friendly, 9) Battery ReGen Friendly, and 10) Autopilot Friendly. The goal is to find a route between a given source and destination that aligns with user preferences. To generate $\mathcal{D}_{\tau}$, we used 200 random source-destination pairs with weights randomly sampled from $\Omega$. For modeling human interaction, we used two datasets, each containing 50 human interaction templates. The first dataset, called "Clear," provides clear, knowledgeable instructions. The second dataset, called "Natural," obfuscates the "Clear" dataset with more natural-sounding language typical of everyday conversation and contextual information, for example:
> Clear: "I prefer routes that are safe and scenic, with a moderate focus on speed and low importance on time."
> Natural: "I'm planning a weekend drive to enjoy the countryside, so I'm not in a hurry. I want the route to be as safe as possible because I'll be driving with my family. It would be great if the drive is scenic too, so we could take in the beautiful views along the way. Speed isn't a top concern, and we're really just out to enjoy the journey rather than worry about how long it takes to get there."
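To ground the routing mechanics, here is a minimal sketch of how a sampled weight vector turns per-edge concept costs into a route choice, using networkx; the tiny graph and its costs are invented for illustration.

```python
import networkx as nx
import numpy as np

# Toy routing graph; each edge carries per-concept costs (time, safety, scenic).
G = nx.DiGraph()
G.add_edge("A", "B", costs=np.array([2.0, 0.1, 0.8]))
G.add_edge("B", "C", costs=np.array([1.0, 0.5, 0.2]))
G.add_edge("A", "C", costs=np.array([4.0, 0.0, 1.0]))

omega = np.array([0.2, 0.7, 0.1])   # one sampled preference weight vector

# Collapse the concept costs into a scalar edge weight under omega, then route.
route = nx.shortest_path(G, "A", "C",
                         weight=lambda u, v, d: float(omega @ d["costs"]))
print(route)   # the preferred route under this omega
```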
For modeling $f_l$, the human clarifies the type of car (gas, autonomous, or electric) with a probability of 0.2 per feedback. For $f_q$, the human is unable to answer when the top two highest-weighted (based on ground-truth weights) concepts in the two trajectories are closer than a predefined threshold.
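One possible reading of this simulated answerability rule as code (our interpretation; the threshold value is invented):

```python
import numpy as np

def human_can_answer(phi_i, phi_j, omega_true, threshold=0.1):
    # The simulated human cannot answer when, on the two concepts with the
    # highest ground-truth weights, the trajectories are closer than a threshold.
    top2 = np.argsort(np.abs(omega_true))[-2:]
    return bool(np.any(np.abs(phi_i[top2] - phi_j[top2]) > threshold))
```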
Figure 3: HomeGrid
#### HomeGrid
The HomeGrid environment is a simplified Minigrid (Chevalier-Boisvert et al. [2023](https://arxiv.org/html/2412.07207v2#bib.bib14)) setting designed to simulate a robot performing household tasks (Lin et al. [2023](https://arxiv.org/html/2412.07207v2#bib.bib25)). It features a discrete, finite action space and a partially observable language observation space for a $3\times 3$ grid, detailing the objects and flooring in each grid square, within a truncated $12\times 14$ grid. The initial abstract concepts include: 1) avoiding objects such as tables and chairs, 2) avoiding walls, 3) avoiding placing objects like bottles and plates on the floor, 4) avoiding placing objects on the stove, and 5) avoiding placing objects on the left chairs. A total of 60 trajectories were manually generated to update the posterior distribution of the weights $\omega$ for each method. For modeling $f_l$, the human highlights the concept that was most influential for their preference. The modeling of $f_q$ follows a similar approach to that used in OSM Routing.
### Experimental Results

|
| 382 |
+
|
| 383 |
+
(a) Test accuracy (OSM Routing)
|
| 384 |
+
|
| 385 |
+

|
| 386 |
+
|
| 387 |
+
(b) Cosine distance (OSM Routing)
|
| 388 |
+
|
| 389 |
+

|
| 390 |
+
|
| 391 |
+
(d) Test accuracy (HomeGrid)
|
| 392 |
+
|
| 393 |
+

|
| 394 |
+
|
| 395 |
+
(e) Cosine distance (HomeGrid)
|
| 396 |
+
|
| 397 |
+
Figure 4: Comparison of efficacy of language feedback for preference inference.

|
| 400 |
+
|
| 401 |
+
(a) Test accuracy (OSM Routing)
|
| 402 |
+
|
| 403 |
+

|
| 404 |
+
|
| 405 |
+
(b) Test accuracy (HomeGrid)
|
| 406 |
+
|
| 407 |
+
Figure 5: Efficacy of Oracle-guided Active Query Selection (OAQS).
We use three key metrics for evaluation: 1) the cosine distance between inferred preference weights (MAP of the distribution) and ground-truth preference weights; 2) preference prediction accuracy, which evaluates the model's ability to generalize and accurately predict human preferences from an unseen set of trajectories; and 3) the policy cost difference, which compares the true cost of policies calculated using the ground-truth preference function and the learned preference function.
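The first two metrics are simple to compute from held-out data; a sketch (the feature pairs and labels are placeholders):

```python
import numpy as np

def cosine_distance(w_map, w_true):
    # Metric 1: cosine distance between inferred (MAP) and ground-truth weights.
    return 1.0 - float(w_map @ w_true) / (np.linalg.norm(w_map) * np.linalg.norm(w_true))

def preference_accuracy(w_map, pairs, labels):
    # Metric 2: fraction of unseen pairs (phi_i, phi_j) whose predicted
    # preference sign matches the human label (True means i is preferred).
    preds = [bool(w_map @ (pi - pj) > 0) for pi, pj in pairs]
    return float(np.mean([p == l for p, l in zip(preds, labels)]))
```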
#### Impact of linguistic feedback
Figure 4a-c presents the results of the OSM routing domain experiments. In this experiment, we did not apply OAQS; instead, we selected queries randomly from the dataset to isolate the impact of language. Several noteworthy insights emerge from the results. First, we observe that MAPLE outperforms B-REX on both the natural and clear datasets, demonstrating the effectiveness of integrating complex language feedback with conventional feedback. Additionally, as feedback increases, B-REX's accuracy begins to approach that of MAPLE. This suggests that MAPLE is particularly advantageous when feedback is limited, such as in online settings where the agent must quickly infer rewards.
Examining the cosine distance offers further insight. Language alone appears almost sufficient to align the reward angle, as the cosine distance remains static despite the increasing number of queries. This suggests that preference feedback is more effective for calibrating the magnitude of the preference vector rather than its direction. In contrast, while B-REX achieves good accuracy with large amounts of feedback, it seems to exhibit significant misalignment, which could suggest overfitting and potential failure in out-of-distribution scenarios. Lastly, we evaluated existing publicly available models and found that both GPT-4o and GPT-4o-mini outperformed other models. However, the small local model (Mistral-7B Instruct) proved to be competitive, so we used it to generate all the results shown in Figures 4a, 4b, 4d, and 4e.
Figure 4d-f shows the results of the HomeGrid experiments. In this environment, we observe that natural instructions do affect performance, but MAPLE still significantly outperforms B-REX on both datasets. Notably, the Mistral-Large-2 model surpassed B-REX by a wide margin, achieving nearly one-third of the cost difference. Surprisingly, GPT-4o-mini performed poorly, with a worse cost difference than B-REX; this is due to its inference of highly misaligned preference weights for certain instructions. In this environment, we also see that most of the angle alignment was achieved using the language feedback, and that B-REX remains highly misaligned even after 30 pieces of feedback.
#### Impact of OAQS
The results of the Oracle-Guided Active Query Selection (OAQS) using an LLM as the oracle are shown in Figure 5. In the routing environment, the Absolute Query Success Rate (AQSR) is approximately 0.64, while in the HomeGrid environment it is 0.46. We first evaluated the capability of various models for in-context query selection (Figure 5c) using a dataset of 500 queries. The Mistral-7B model, used in the previous experiment, failed to meet the condition $Y_0 + Y_1 > 1$ in both environments. The Gemini-1.5-Pro model showed the best overall performance among publicly available models and was used to generate Figures 5a and 5b.
Figures 5a and 5b compare the test accuracy of Active B-REX with MAPLE, both with and without OAQS. In both environments, MAPLE with OAQS achieved the highest performance, with a significant margin in the OSM routing environment. We also calculated the Query Success Rate (QSR) for all three algorithms in the routing domain: 0.43 for Active B-REX, 0.43 for MAPLE without OAQS, and 0.58 for MAPLE with OAQS. The QSR was lower than the AQSR for the top-query selection strategy due to a violation of the independence assumption, suggesting that the variance ratio is more likely to select more challenging queries. We refer to this experimental metric as the Effective Query Success Rate (EQSR). Based on Proposition 3, the QSR for MAPLE with OAQS should be 0.77, but it was observed to be lower for the same reason. Replacing AQSR with EQSR in Proposition 3 gives a value of 0.59, which closely matches the experimental value. We therefore conclude that EQSR is a more practical metric for estimating a model's success based on $Y_0$ and $Y_1$. This phenomenon is also observed in HomeGrid. Finally, in the HomeGrid environment, the overall EQSR was low (around 0.2); even with OAQS, we gained only 2-3 additional feedback signals after 30 queries, which was not enough to create a large margin, so we see only a modest difference between MAPLE with and without OAQS.
Conclusions and Future Works
----------------------------
We introduced MAPLE, a framework for active preference learning guided by large language models (LLMs). Our experiments in the OpenStreetMap Routing and HomeGrid environments demonstrated that incorporating language descriptions and explanations significantly improves preference alignment, and that LLM-guided active query selection enhances sample efficiency while reducing the burden on users. Future work could extend MAPLE to more complex environments and tasks, explore different types of linguistic feedback, and conduct user studies to evaluate its usability and effectiveness in real-world applications.
Acknowledgments
---------------
This research was supported in part by the U.S. Army DEVCOM Analysis Center (DAC) under contract number W911QX23D0009, and by the National Science Foundation under grants 2321786, 2326054, and 2416459.
References
----------
* Abbeel and Ng (2004) Abbeel, P.; and Ng, A. Y. 2004. Apprenticeship learning via inverse reinforcement learning. In _Proceedings of the 21st International Conference on Machine Learning_.
* Achiam et al. (2023) Achiam, J.; Adler, S.; Agarwal, S.; Ahmad, L.; Akkaya, I.; Aleman, F. L.; Almeida, D.; Altenschmidt, J.; Altman, S.; Anadkat, S.; et al. 2023. GPT-4 technical report. _arXiv preprint arXiv:2303.08774_.
* Basu, Singhal, and Dragan (2018) Basu, C.; Singhal, M.; and Dragan, A. D. 2018. Learning from richer human guidance: Augmenting comparison-based learning with feature queries. In _13th International Conference on Human-Robot Interaction_, 132–140.
* Biyik (2022) Biyik, E. 2022. _Learning preferences for interactive autonomy_. Ph.D. thesis, Stanford University.
* Biyik et al. (2019) Biyik, E.; Palan, M.; Landolfi, N. C.; Losey, D. P.; and Sadigh, D. 2019. Asking easy questions: A user-friendly approach to active reward learning. In _Proceedings of the 3rd Annual Conference on Robot Learning_, 1177–1190.
* Bobu et al. (2021) Bobu, A.; Paxton, C.; Yang, W.; Sundaralingam, B.; Chao, Y.-W.; Cakmak, M.; and Fox, D. 2021. Learning perceptual concepts by bootstrapping from human queries. _arXiv preprint arXiv:2111.05251_.
* Bradley and Terry (1952) Bradley, R. A.; and Terry, M. E. 1952. Rank Analysis of Incomplete Block Designs: I. The Method of Paired Comparisons. _Biometrika_, 39: 324.
* Brown, Goo, and Niekum (2019) Brown, D. S.; Goo, W.; and Niekum, S. 2019. Better-than-demonstrator imitation learning via automatically-ranked demonstrations. In _3rd Annual Conference on Robot Learning_, 330–359.
* Brown et al. (2019) Brown, D. S.; Goo, W.; Prabhat, N.; and Niekum, S. 2019. Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations. In _36th International Conference on Machine Learning_, 783–792.
* Brown et al. (2020) Brown, D. S.; Niekum, S.; Coleman, R.; and Srinivasan, R. 2020. Safe imitation learning via fast Bayesian reward inference from preferences. In _37th International Conference on Machine Learning_, 1165–1177.
* Brown, Schneider, and Niekum (2021) Brown, D. S.; Schneider, J. J.; and Niekum, S. 2021. Value alignment verification. In _38th International Conference on Machine Learning_, 1105–1115.
* Bucker et al. (2023) Bucker, A.; Figueredo, L. F. C.; Haddadin, S.; Kapoor, A.; Ma, S.; Vemprala, S.; and Bonatti, R. 2023. LATTE: LAnguage Trajectory TransformEr. In _IEEE International Conference on Robotics and Automation_, 7287–7294.
* Chebotar et al. (2021) Chebotar, Y.; Hausman, K.; Lu, Y.; Xiao, T.; Kalashnikov, D.; Varley, J.; Irpan, A.; Eysenbach, B.; Julian, R.; Finn, C.; et al. 2021. Actionable models: Unsupervised offline reinforcement learning of robotic skills. _arXiv preprint arXiv:2104.07749_.
* Chevalier-Boisvert et al. (2023) Chevalier-Boisvert, M.; Dai, B.; Towers, M.; Perez-Vicente, R.; Willems, L.; Lahlou, S.; Pal, S.; Castro, P. S.; and Terry, J. 2023. Minigrid & Miniworld: Modular & customizable reinforcement learning environments for goal-oriented tasks. In _Advances in Neural Information Processing Systems 36_.
* Cui et al. (2023) Cui, Y.; Karamcheti, S.; Palleti, R.; Shivakumar, N.; Liang, P.; and Sadigh, D. 2023. "No, to the Right" – Online language corrections for robotic manipulation via shared autonomy. _arXiv preprint arXiv:2301.02555_.
* Dietterich (2017) Dietterich, T. G. 2017. Steps toward robust artificial intelligence. _AI Magazine_, 38(3): 3–24.
* Gal and Ghahramani (2016) Gal, Y.; and Ghahramani, Z. 2016. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In _33rd International Conference on Machine Learning_, 1050–1059.
* Guan, Sreedharan, and Kambhampati (2022) Guan, L.; Sreedharan, S.; and Kambhampati, S. 2022. Leveraging approximate symbolic models for reinforcement learning via skill diversity. _arXiv preprint arXiv:2202.02886_.
* Guan, Valmeekam, and Kambhampati (2022) Guan, L.; Valmeekam, K.; and Kambhampati, S. 2022. Relative behavioral attributes: Filling the gap between symbolic goal specification and reward learning from human preferences. _arXiv preprint arXiv:2210.15906_.
* Guan et al. (2021) Guan, L.; Verma, M.; Guo, S. S.; Zhang, R.; and Kambhampati, S. 2021. Widening the pipeline in human-guided reinforcement learning with explanation and context-aware data augmentation. _Advances in Neural Information Processing Systems_, 34: 21885–21897.
* Guo et al. (2022) Guo, C.; Zou, S.; Zuo, X.; Wang, S.; Ji, W.; Li, X.; and Cheng, L. 2022. Generating diverse and natural 3D human motions from text. In _IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 5152–5161.
* Icarte et al. (2022) Icarte, R. T.; Klassen, T. Q.; Valenzano, R.; and McIlraith, S. A. 2022. Reward machines: Exploiting reward function structure in reinforcement learning. _Journal of Artificial Intelligence Research_, 73: 173–208.
* Illanes et al. (2020) Illanes, L.; Yan, X.; Icarte, R. T.; and McIlraith, S. A. 2020. Symbolic plans as high-level instructions for reinforcement learning. In _30th International Conference on Automated Planning and Scheduling_, 540–550.
* Lee and Popović (2010) Lee, S. J.; and Popović, Z. 2010. Learning behavior styles with inverse reinforcement learning. _ACM Transactions on Graphics_, 29(4): 1–7.
* Lin et al. (2023) Lin, J.; Du, Y.; Watkins, O.; Hafner, D.; Abbeel, P.; Klein, D.; and Dragan, A. 2023. Learning to model the world with language. _arXiv preprint arXiv:2308.01399_.
* Lin et al. (2022) Lin, J.; Fried, D.; Klein, D.; and Dragan, A. 2022. Inferring rewards from language in context. _arXiv preprint arXiv:2204.02515_.
* Lou et al. (2024) Lou, X.; Zhang, J.; Wang, Z.; Huang, K.; and Du, Y. 2024. Safe reinforcement learning with free-form natural language constraints and pre-trained language models. _arXiv preprint arXiv:2401.07553_.
* Luo et al. (2020) Luo, Y.-S.; Soeseno, J. H.; Chen, T. P.-C.; and Chen, W.-C. 2020. CARL: Controllable agent with reinforcement learning for quadruped locomotion. _ACM Transactions on Graphics_, 39(4): 38–1.
* Lyu et al. (2019) Lyu, D.; Yang, F.; Liu, B.; and Gustafson, S. 2019. SDRL: Interpretable and data-efficient deep reinforcement learning leveraging symbolic planning. In _33rd AAAI Conference on Artificial Intelligence_, 2970–2977.
* Ma et al. (2023) Ma, Y. J.; Liang, W.; Wang, G.; Huang, D.-A.; Bastani, O.; Jayaraman, D.; Zhu, Y.; Fan, L.; and Anandkumar, A. 2023. Eureka: Human-level reward design via coding large language models. _arXiv preprint arXiv:2310.12931_.
* Mahmud, Saisubramanian, and Zilberstein (2023) Mahmud, S.; Saisubramanian, S.; and Zilberstein, S. 2023. Explanation-guided reward alignment. In _32nd International Joint Conference on Artificial Intelligence_, 473–482.
* Ng and Russell (2000) Ng, A. Y.; and Russell, S. J. 2000. Algorithms for inverse reinforcement learning. In _17th International Conference on Machine Learning_, 663–670.
* OpenStreetMap Contributors (2017) OpenStreetMap Contributors. 2017. Planet dump retrieved from https://planet.osm.org. https://www.openstreetmap.org.
* Peng et al. (2018) Peng, X. B.; Kanazawa, A.; Malik, J.; Abbeel, P.; and Levine, S. 2018. SFV: Reinforcement learning of physical skills from videos. _ACM Transactions on Graphics_, 37(6): 178:1–178:14.
* Peng et al. (2021) Peng, X. B.; Ma, Z.; Abbeel, P.; Levine, S.; and Kanazawa, A. 2021. AMP: Adversarial motion priors for stylized physics-based character control. _ACM Transactions on Graphics_, 40(4): 144:1–144:20.
* Rafailov et al. (2024) Rafailov, R.; Sharma, A.; Mitchell, E.; Manning, C. D.; Ermon, S.; and Finn, C. 2024. Direct preference optimization: Your language model is secretly a reward model. _Advances in Neural Information Processing Systems_, 36.
* Ramachandran and Amir (2007) Ramachandran, D.; and Amir, E. 2007. Bayesian inverse reinforcement learning. In _20th International Joint Conference on Artificial Intelligence_, 2586–2591.
* Sadigh et al. (2017) Sadigh, D.; Dragan, A. D.; Sastry, S. S.; and Seshia, S. A. 2017. Active preference-based learning of reward functions. In _Robotics: Science and Systems XIII_.
* Silver et al. (2022) Silver, T.; Athalye, A.; Tenenbaum, J. B.; Lozano-Perez, T.; and Kaelbling, L. P. 2022. Learning neuro-symbolic skills for bilevel planning. _arXiv preprint arXiv:2206.10680_.
* Soni et al. (2022) Soni, U.; Thakur, N.; Sreedharan, S.; Guan, L.; Verma, M.; Marquez, M.; and Kambhampati, S. 2022. Towards customizable reinforcement learning agents: Enabling preference specification through online vocabulary expansion. _arXiv preprint arXiv:2210.15096_.
* Sontakke et al. (2024) Sontakke, S.; Zhang, J.; Arnold, S.; Pertsch, K.; Biyik, E.; Sadigh, D.; Finn, C.; and Itti, L. 2024. RoboCLIP: One demonstration is enough to learn robot policies. _Advances in Neural Information Processing Systems_, 36.
* Tevet et al. (2022) Tevet, G.; Raab, S.; Gordon, B.; Shafir, Y.; Cohen-Or, D.; and Bermano, A. H. 2022. Human motion diffusion model. _arXiv preprint arXiv:2209.14916_.
* Tian et al. (2024) Tian, S.; Li, L.; Li, W.; Ran, H.; Ning, X.; and Tiwari, P. 2024. A survey on few-shot class-incremental learning. _Neural Networks_, 169: 307–324.
* Tien et al. (2024) Tien, J.; Yang, Z.; Jun, M.; Russell, S. J.; Dragan, A.; and Biyik, E. 2024. Optimizing robot behavior via comparative language feedback. In _3rd HRI Workshop on Human-Interactive Robot Learning_.
* Wang et al. (2024) Wang, Y.; Sun, Z.; Zhang, J.; Xian, Z.; Biyik, E.; Held, D.; and Erickson, Z. 2024. RL-VLM-F: Reinforcement learning from vision language foundation model feedback. _arXiv preprint arXiv:2402.03681_.
* Wang et al. (2017) Wang, Z.; Merel, J. S.; Reed, S. E.; de Freitas, N.; Wayne, G.; and Heess, N. 2017. Robust imitation of diverse behaviors. _Advances in Neural Information Processing Systems_, 30.
* Yu et al. (2023) Yu, W.; Gileadi, N.; Fu, C.; Kirmani, S.; Lee, K.-H.; Arenas, M. G.; Chiang, H.-T. L.; Erez, T.; Hasenclever, L.; Humplik, J.; et al. 2023. Language to rewards for robotic skill synthesis. _arXiv preprint arXiv:2306.08647_.
* Zhang et al. (2022) Zhang, R.; Bansal, D.; Hao, Y.; Hiranaka, A.; Gao, J.; Wang, C.; Martín-Martín, R.; Fei-Fei, L.; and Wu, J. 2022. A dual representation framework for robot learning with human guidance. In _6th Annual Conference on Robot Learning_, 738–750.
* Zhou and Dragan (2018) Zhou, A.; and Dragan, A. D. 2018. Cost functions for robot motion style. In _2018 IEEE/RSJ International Conference on Intelligent Robots and Systems_, 3632–3639. IEEE.
* Ziebart et al. (2008) Ziebart, B. D.; Maas, A. L.; Bagnell, J. A.; and Dey, A. K. 2008. Maximum entropy inverse reinforcement learning. In _Proceedings of the 23rd AAAI Conference on Artificial Intelligence_, 1433–1438.
* Zilberstein (2015) Zilberstein, S. 2015. Building strong semi-autonomous systems. In _29th AAAI Conference on Artificial Intelligence_, 4088–4092.