| { |
| "File Number": "101", |
| "Title": "A Consciousness-Inspired Planning Agent for Model-Based Reinforcement Learning", |
| "Limitation": "One limitation of our work is the experimental focus on only Minigrid environments, due to the need to validate carefully our approach. For future works, we would also like to extend these ideas to temporally extended models, which could simplify the planning task, and are also better suited as a conceptual model of C1. Finally, we note that the architectures we use are involved and can require careful tuning for new types of environments.", |
| "Reviewer Comment": "Reviewer_1: I want to thank the authors for their work. The paper is rather clear, well structured and easy to follow. However, there are many abbreviations in the experimental part that I believe may be unusual for most readers, which makes it hard to follow. I think that this research domain is very promising. Indeed, MBRL methods recently demonstrated impressive results and studying how to better structure latent spaces is an exciting research direction. However, this paper exhibits many limitations:\nThis method falls in the category of deep MBRL methods, however it lacks many references to modern state-of-the-art methods in the field. Notably, the authors mention Line 30 that except for MuZero, MBRL methods show poor performance. They also argue that only Predictron and MuZero plan over a latent space while the others methods construct a model over the true state space. I think that the authors missed many recent and important references. Notably, methods such as PETS, MBPO, Dreamer or LEAP demonstrated impressive results in terms of both sample efficiency and asymptotic performance on challenging benchmarks. Among these methods both LEAP and Dreamer plan over a latent space. Furthermore, Dreamer also introduced set-based representation. Thus, it would be key in this paper to cite Dreamer and to carefully position the proposed method with respect to it.\nAs this paper does not provide theoretical results or analysis, the experimental section is key to demonstrate the usefulness of the method. However, this section is in my sense too weak to achieve this goal. Indeed, the authors considered a single toy environment that does not exhibit any of the challenges that the deep RL and deep MBRL have been designed to address. I would highly recommend the authors to assess the performance of their method on standard benchmarks such as the Atari suite or the Gym Mujoco environments. In addition, the authors do not compare their methods to classical SOTA methods such as the one aforementioned. The authors should compare their algorithm to at least Dreamer.\nRegarding the paragraph between lines 72 and 83, the authors missed the reference to Hamrick et al. (on the role of planning in model-based deep reinforcement learning) which I believe is important to mention here.\nThe conclusions in Section 5.5 are too broad and in my opinion, not enough justified. For instance, the authors claim that \"Model-free methods face difficulties in OOD generalization;\" where Model-free methods refer to double DQN and this method has been tested on a single toy environment. There have been tremendous efforts to study and improve the generalization of model free methods and I would recommend the authors either moderate their conclusions or to better inform them with more experiments and relevant citations.\nRegarding the notations, it is misleading to use\nS\nfor the latent state space and\ns\n^\nfor latent states. Most of the time,\ns\n^\nis used for reconstructed states. I would recommend another notation such as\nz\nfor latent variables. Also, when the authors introduce the action space line 44, I would recommend to mention whether these actions are assumed to be discrete or continuous. To conclude, I think that this research direction is promising, however, this paper lacks many important references and exhibits too weak an experimental study. 
I would recommend the authors strengthen their positioning with respect to the existing literature and either bring theoretical elements or consider more challenging environments and baselines in their study.\nLimitations And Societal Impact:\nThis paper does not add limitations or potential negative societal impact to existing reinforcement learning methods.\nNeeds Ethics Review: No\nTime Spent Reviewing: 4\n\nReviewer_2: This paper proposes the use of an attention-based bottleneck to improve the performance of model-based RL (MBRL) agents, which is intuitive and well explained. This mechanism seems similar to the top-k attention mechanism presented with BRIMs [1], also targeting OOD generalisation. But to the best of my knowledge, this is the first work that assesses the impact of this mechanism in MBRL. The authors also study the impact of using a vector embedding vs an array of vectors (set embedding), which is also interesting. I think these results would be of interest to the RL community.\nClarity and quality are good in general; the authors should be commended for the abundant graphs and visual explanations. My comments on this side focus on a few easily fixable points that I raise below. My biggest concern, and the reason why I think the paper cannot be accepted in its current form, is the claims the authors make about the conclusions extracted from their work, which are too generic and broad for the work presented and do not correctly reflect what has been presented:\n*Line 304 says that the authors have drawn the conclusion that set-based representations are better in multitask environments. I think that if the authors want to state this they should use more than one benchmark; currently all the experiments concern the same task, which is \"reach green cell while avoiding lava cells\".\n*Line 307: \"model-free methods face difficulties in OOD generalisation\". Again too broad when tested on a grid world with a single type of task.\n*Line 309's statement that \"online joint learning is good for RL\" has the same problem and, moreover, is a controversial statement since there is evidence in the opposite direction (e.g., [2]); again, I think the authors cannot make such broad claims with the experiments included.\nTo clarify, I am not asking the authors to do more experiments for me to consider accepting this paper; I believe the proposed approach and the experiments included are enough to show the potential of the proposed mechanisms. I am OK if the authors just adjust their claims to make more realistic statements according to what they present.\nAdditional comments:\nChecklist: multiple answers should be expanded; the answers should include brief explanations, but most are only \"yes\".\nFigure 2 caption: it would be helpful to say which appendix section to go to.\nThe FC downscale operation is barely explained: how much does it downscale? *The FC layer in the permutation-invariant architecture receives as input an array of vectors, but the FC layer must operate on vectors; I understand that each vector is passed through the FC separately, isn't it?\nSections 5.3 and 5.4 should be merged. Section 5.3 is supposed to be about in-distribution evaluation but the last lines are about OOD evaluation, which is Section 5.4. I think it is good to have the text as it is now, thus I would just put everything in the same subsection.\nLines 111-115: you cite \"[10]\" twice, but it seems they should be different references (one to refer to what you are following and the other to refer to other families of approaches). 
*Caption of Figure 10: you say \"Note that the concatenation does not happen outside the residual pass in case the dimensions do not match\", but from the picture it looks like the residual connection is where there is no concatenation of actions.\n---After Rebuttal--- The authors have correctly addressed my concerns about the claims and updated them accordingly. This, together with the additional experiments and updates, makes me confident in raising my score and recommending the acceptance of this work.\n[1] Mittal, Sarthak, et al. \"Learning to combine top-down and bottom-up signals in recurrent neural networks with attention over modules.\" International Conference on Machine Learning. PMLR, 2020. [2] Lehuger, Auguste, and Matthew Crosby. \"Fixed β-VAE Encoding for Curious Exploration in Complex 3D Environments.\" arXiv preprint arXiv:2105.08568 (2021).\nLimitations And Societal Impact:\nThe authors state the limitations of their study, but make claims that are too broad for the experiments included, as explained above.\nNeeds Ethics Review: No\nTime Spent Reviewing: 7\n\nReviewer_3: Originality\nThe ideas in this paper are novel and interesting. The contrast to related work in the background section is one of the best I've ever read. It is immediately clear what novel aspects are part of this work and the motivation behind them.\nQuality\nThe submission is technically sound, and the core hypothesis is adequately tested in the evaluation environment. The set of baselines is well thought out, and the evaluation seems complete.\nThe authors are quite honest and clear about the limitations, including the complexity of the hyperparameter space (which is a concern for models such as this).\nOne concern I have is in the presentation of some of the results. For Fig 7, it looks like the World Models approach is still warming up -- further training would quite possibly change the \"best approach\". What happens when you allow for a longer budget? I understand that there is a claim to data efficiency, but that is a weaker result than what is currently presented.\nClarity\nI found the paper to be extremely clear and filled with interesting insights -- both on the analysis in the evaluations and on the design decisions compared to related work. Nothing to suggest.\nSignificance\nThis is perhaps the biggest weakness of the paper. I find the ideas compelling, and the paper rides on that + the clarity alone, but the approach is tested in an arguably very simple setting. It is designed to exhibit the phenomena of interest, but there is little indication that the results will generalize to more complex settings. From the difficulty of jointly learning the representation+selection in the bottleneck, to the argument for ignoring reconstruction, there are many decisions that may only prove useful in this one particular environment.\nQuestions\nI am left with a few open questions after reading the work at various levels:\nThe bottleneck still looks at everything in the memory in order to get the compressed latent state representation. This is arguably different from how we might do it cognitively (going over every fact to find those we might want to change). How might this be improved? It's not only about reasoning on a small set of facts, but avoiding the full O(n) computation of what those m << n facts would be.\nI am interested in the claim included in the first Background & Context section as to why reconstruction isn't useful. 
Would this still be the case if given the opportunity for unsupervised pre-training for a decent reconstruction?\nThe \"integration\" phase appears to not be an inversion of the \"selection\" phase? If so, why? If they are indeed a mirror (i.e., the integration places the new representation back in the appropriate position), then the paper should clarify this.\nLimitations And Societal Impact:\nThe biggest limitation is the scope of the evaluation. The authors have identified this, along with other key limitations of the approach. Generally, the limitations of the work are carefully considered and discussed.\nEthical Concerns:\nI see no major ethical concerns with this body of work.\nNeeds Ethics Review: No\nTime Spent Reviewing: 5\n\nReviewer_4: After reading the authors' responses, the new results, and the other reviews, I believe that my concerns have been mostly addressed. I am raising my score from a 5 to a 7. (I am tempted to give an 8 or 9, as this paper has some fascinating and exceptionally well-motivated ideas. However, I cannot give an extremely high score to an empirical-only paper which tests only on gridworlds, despite the fact that there are good reasons why more complex environments are beyond the scope of this work and the fact that the authors did add a new gridworld variation.) I hope the authors follow up this work by showing how to apply similar ideas to more complex environments.\nSections 1-4 (relatively small concerns):\nThe encoder description in Section 3 (“We use the features...dynamics model, discussed below”) was insufficient and vague; I read it several times, but have no real idea how the encoder works. A more precise definition of the encoder would strengthen this work (in the appendix if there’s no room in the main paper). Edit: while re-reading, I noticed that Figure 1 is describing this. I think simply adding a reference to Figure 1 in the text (I don’t think there is any existing reference to Figure 1) might be sufficient to address most of this concern. However, I’m still confused about how “This approach is different from the practice of adding positional information onto the features”, so clarifying some more in the appendix might still be helpful.\nTD loss: “In experiments, a distributional output is used for both value and reward estimation, making this loss a KL-divergence.” This is a little vague and confusing; I’d suggest defining it more precisely.\nSection 5 (significant concerns):\n-Low number of trials (5 runs): This is too low to make the claims that are made in Section 5. RL algorithms are notoriously unstable between trials, and the differences shown could easily not appear in the true curves (that is, given an infinite number of trials). I suggest 1) running MANY more trials, and plotting the standard error instead of standard deviation, to show that the difference is significant, and/or better yet, 2) do proper statistical hypothesis testing to show your conclusions are valid given the data. For either approach, you will certainly need many more than 5 trials. This concern is somewhat mitigated in 5.4.1 by having effectively 20 trials. Still, it would be nice to see the 4 5-trial plots combined as a single plot with 20 trials and standard error bars, and/or see some statistical hypothesis testing (and far more than 20 trials would be better).\n-Gridworld: for an entirely empirical paper, only running gridworld experiments is not ideal. 
The claim is “CP allows better generalization”, but at best (ignoring the concern above) what is shown is “CP allows for better generalization on 8x8 Gridworlds”, which is not a huge contribution. More environments (including more difficult/complex ones) would help alleviate this problem.\nMinor edits:\n-Figure 1 caption, add an article adjective before CNN: “e.g., a CNN”\n-Too informal: “Luckily, our feature-position...”\nStrengths:\n-The plots of different bottleneck sizes are a great idea. However, I’m not 100% satisfied; more sizes and more trials (runs) per size would be nice.\n-Good limitations and failed experiments sections in the appendix.\n-Many of the hyperparameter selections are justified in the appendix via references. This is great and more papers should do this (otherwise the conclusions drawn may be based on the effects of the hyperparameter search results rather than the effects of the differences between the algorithm and baseline).\nQuestion/suggestion: Are the losses weighted when summed to produce the total loss? If so, this information should be included (that is to say, the total loss equation is wrong). If not, this is surprising; I’d suggest giving some intuition about why these losses are simply summed without any weighting and why that works. Perhaps some empirical evidence in the ablation showing that the summed losses all tend to be a similar order of magnitude, and all help achieve good performance. In other words, if you remove any of the losses, do the results get significantly worse? Or are there one or two losses that are not affecting much?\nSummary:\nWhile the ideas the authors propose are fascinating and well-motivated, this paper has no theoretical contribution (which is fine for a strong empirical paper), and a somewhat weak empirical contribution. Nonetheless, it is an interesting paper and has its strengths. I recommend continuing to refine this work by strengthening the experimental results, as discussed in more detail above.\nLimitations And Societal Impact:\nYes.\nNeeds Ethics Review: No\nTime Spent Reviewing: ~4.5", |
| "6 Conclusion & Limitations": "We introduced a conscious bottleneck mechanism into MBRL, facilitated by set-based representations, end-to-end learning and tree search MPC. In the non-static RL settings, the bottleneck allows selecting the relevant objects for planning and hence enables significant OOD performance. One limitation of our work is the experimental focus on only Minigrid environments, due to the need to validate carefully our approach. For future works, we would also like to extend these ideas to temporally extended models, which could simplify the planning task, and are also better suited as a conceptual model of C1. Finally, we note that the architectures we use are involved and can require careful tuning for new types of environments.", |
| "abstractText": "We present an end-to-end, model-based deep reinforcement learning agent which dynamically attends to relevant parts of its state during planning. The agent uses a bottleneck mechanism over a set-based representation to force the number of entities towhich the agent attends duringplanning to be small. In experiments, we investigate the bottleneck mechanism with several sets of customized environments featuring different challenges. We consistently observe that the design allows theplanning agents to generalize their learned task-solving abilities in compatible unseen environments by attending to the relevant objects, leading to better out-of-distribution generalization performance. Check project page https://github.com/PwnerHarry/CP.", |
| "1 Introduction": "Whether when planning our paths home from the office or from a hotel to an airport in an unfamiliar city, we typically focus on a small subset of relevant variables, e.g. the change in position or the presence of traffic. An interesting hypothesis of how this path planning skill generalizes across scenarios is that it is due to computation associated with the conscious processing of information [2, 3, 14]. Conscious attention focuses on a few necessary environment elements, with the help of an internal abstract representation of the world [43, 14]. This pattern, also known as consciousness in the first sense (C1) [14], has been theorized to enable humans’ exceptional adaptability and learning efficiency [2, 3, 14, 43, 7, 15]. A central characterization of conscious processing is that it involves a bottleneck, which forces one to handle dependencies between very few environmental characteristics at a time [14, 7, 15]. Though focusing on a subset of the available information may seem limiting, it facilitates Out-Of-Distribution (OOD) and systematic generalization to other situations where the ignored variables are different and yet still irrelevant [7, 15]. In this paper, we encode some of these ideas into reinforcement learning agents. Reinforcement learning (RL) is an approach for learning behaviors from agent-environment interactions [41]. However, most of the big successes of RL have been obtained by deep, model-free agents [30, 37, 38]. While Model-Based RL (MBRL) has generated significant research due to the potentials of using an extra model [31], its empirical performance has typically lagged behind, with some recent notable exceptions [36, 24, 17]. Our proposal is to take inspiration from human consciousness to build an architecture which learns a useful state space and in which attention can be focused on a small set of variables at any time, where the aspect of “partial planning”1 is enabled by modern deep\n1Partial planning is interpreted in different ways. For example, concurrent work [26] focuses on modelling “affordable” temporally extended actions, s.t. an “intent” could be achievedmore efficiently.\n35th Conference on Neural Information Processing Systems (NeurIPS 2021).\nRL techniques [42, 26]. Specifically, we propose an end-to-end latent-space MBRL agent which does not require reconstructing the observations, as in most existing works, and uses Model Predictive Control (MPC) framework for decision-time planning [34, 35]. From an observation, the agent encodes a set of objects as a state, with a selective attention bottleneck mechanism to plan over selected subsets of the state (Sec. 4). Our experiments show that the inductive biases improve a specific form of OOD generalization, where consistent dynamics are preserved across seemingly different environment settings (Sec. 5).", |
| "2 Background & Context": "We consider an agent interacting with its environment at discrete timesteps. At time C, the agent receives observation >C and takes action 0C , receiving a reward AC+1 and new observation >C+1. The interaction is episodic. The agent is also building a latent-space transition model,ℳ, which can be used to sample a next state, B̂C+1, a reward ÂC+1 and a binary signal $̂C+1 which indicates if the model predicts termination after the transition. We will now compare and contrast our approach with some existing methods from the MBRL literature, explaining the rationale for our design choices. Observation Level Planning and Reconstruction vs Latent Space Planning Many MBRL methods plan in the observation space or rely on reconstruction-based losses to obtain state representations [24, 36, 17, 48]. Appropriate as these methods may be for some robotic tasks with few sensory inputs, e.g. continuous control with joint states, they are arguably difficult with high-dimensional inputs like images, since they may focus on predictable yet useless aspects of the raw observations [31]. Besides suffering from the need to reconstruct noise or irrelevant parts of the signal, it is not clear if representations built by a reconstruction loss (e.g. !2 in the observation space) are effective for an MBRL agent to plan or predict the desired signals [39, 17, 18], e.g. values (in the RL sense), rewards, etc.. In this work, we use an approach similar to those in [39, 36, 17], building a latent space representation that is jointly shaped by all the relevant RL signals (to serve value estimation and planning) without using reconstruction. Staged Training vs End-to-End Training Some MBRL agents based on a world model [16, 24, 31] use two explicit stages of training: (1) an inner representation of the world is trained using exploration (usually with random trajectories); (2) the representation is fixed and used for planning and MBRL. Despite the advantages of being more stable and easier to train, this procedure relies on having an environment where the initial exploration provides transitions that are sufficiently similar to those observed under improved policies, which is not the case in many environments. Furthermore, the learned representation may not be effective for value estimation, if these transitions do not contain reward information that can be used to update the input-torepresentation encoder. End-to-end MBRL agents, e.g. [39, 36], are able to learn the representation online, simultaneously with the value function, hence adapting better to non-stationarity in the transition distribution and rewards. Type of planning MBRL agents can use the model in different ways. Dyna [40] learns a model to generate “imaginary” transitions, which contribute to the training of the value estimator [40], in addition to the real observations, thus boosting sample efficiency. However, if the model is inaccurate, the transitions it generates may be “delusional\", which may alter the value estimator and negatively impact performance. Moreover, Dyna is typically used to generate extra transitions from the states visited in a trajectory, and updates the model based on the observed transitions as well. This means Dyna is focused on the data distribution encountered by the agent and may have trouble generalizing OOD. 
In contrast, simulation-based model-predictive control (MPC) and its variants [34, 35, 18] only update the value estimator based on real data, using the model simply to perform lookahead at decision time. Hence, model inaccuracies have less impact, leading to more favorable OOD generalization capabilities; this is why MPC is adopted in our approach. Vectorized vs Set Representations for RL Most Deep Reinforcement Learning (DRL) works focus on learning vectorized state representations, where the agents’ observation is transformed into a feature vector of fixed dimensionality [30, 19]. Instead, set-based encoders, a.k.a. object-oriented architectures, are designed to extract a set of unordered vectors from which to predict the desired signals via permutation-invariant computations [50], as illustrated in Fig. 1. Recent works in RL have shown the promise of set-based representations in capturing environmental states, in terms of generalization, as well as their similarities to human perception [13, 47, 32, 46, 29]. Additionally, in this work, we utilize the compositionality of set representations to enable the discovery of sparse interactions among objects, i.e. the underlying dynamics, as well as to facilitate the bottleneck mechanism, analogous to C1 selection. The set-based representation coupled with the bottleneck provides an inductive bias consistent with selecting only the relevant aspects of a situation on-the-fly through an attention mechanism. The small size of the working memory bottleneck also enforces sparsity of the dependencies [7, 15] captured by the learned dynamics model: each transition can only relate a few objects together, no more than the size of the bottleneck.", |
| "3 MBRL with Set Representations": "We present an end-to-end baseline MBRL agent that uses a set-based representation and carries out latent space planning, but without a consciousness-inspired small bottleneck. This agent serves as a baseline to investigate the OOD generalization capabilities brought by the bottleneck, which is to be introduced later in Sec. 4. The mapping from observations to values is a combination of an encoder and a value estimator. The encoder maps an observation vector to a set of objects, which constitutes the latent state. The value estimator is a permutation-invariant set-to-vector architecture that maps the latent state to a value estimate. Note that the same state set is used for all the agents’ predictions, including future states, rewards etc., as we will discuss later. Encoder. For image-based observations, we use the features at each position of the CNN output feature map to characterize the feature of an object, similar to [9], as shown in Figure 1. To recover positional information lost during the process, we concatenate each object feature vector with a positional embedding to form a complete object embedding. Such approach is different from the common practice of mixing positional information by addition [45]. This is for the compatibility with our dynamics model training procedure, discussed below. (State-Action) Value Estimator takes the form & : S → R|A| , where S is the learned state space by the set-based encoder (hoping to capture the real underlying state space of the MDP) and A is a discrete action set. We use an improved architecture upon DeepSets [50], depicted in Figure 2. The architecture performs reasoning on a set of encoded objects, resembling pervasive usage in natural language processing, where the objects are typically word tokens [33].\nTransition Model. The transition model maps from BC , 0C to B̂C+1, ÂC and $̂C+1. We separate this into: 1) the dynamics model, in charge of simulating how the state would change with the input of 0C and 2) the reward-termination estimator which maps BC , 0C to ÂC and $̂C+1. While designing reward-termination estimator is straightforward (a two-headed augmented architecture similar to the value estimator), the dynamics model requires regression on unordered sets of objects (set-to-set). A common approach is to use matching methods, e.g. Chamfer matching or Hausdorff distance, However, they are computationally demanding and subject to local optima [5, 8, 28]. Targeting this, our feature-position separated set encoding not only makes the permutation-invariant computations position-aware, but also allows simple end-to-end training over the dynamics. By forcing the positional tails to be immutable during the computational pass, we can use them to solve the matching trivially: objects “labeled” with the same positional tail in the prediction B̂C+1 (output of the dynamics model) and the training sample BC+1 (state obtained from the next observation) are aligned, forming pairs of objects with changes only in the feature, as shown in Figure 3. Tree Search MPC. The agent employs a tree-search based behavior policy (with &-greedy exploration). During planning, each tree search call maintains a priority queue of branches to simulate with the model. When a designated budget (e.g. number of steps of simulation) is spent, the agent greedily picks the immediate action that leads to the most promising path. We present the pseudocode of the Q-value based prioritized tree-search MPC in Appendix. 
An equivalence could be drawn between this planning approach and Monte-Carlo Tree Search (MCTS) [37, 38], though our method is far simpler and requires fewer simulations for each planning call (see the example in the Appendix). Training. The proposed agent is trained from sampled transitions with the following losses:\n• Temporal Difference (TD) ℒTD: regresses the current value estimate to the update target, e.g. calculated according to DQN or Double DQN (DDQN) [30, 44]. In experiments, a distributional output is used for both value and reward estimation, making this loss a KL-divergence [6].\n• Dynamics Consistency ℒdyn: an ℓ2 penalty between the aligned ŝ_{t+1} and s_{t+1}, where ŝ_{t+1} is the imagined next (latent) state given o_t, a_t and s_{t+1} is the true next (latent) state encoded from o_{t+1}.\n• Reward Estimation ℒr: the KL-divergence between the imagined reward r̂_{t+1} predicted by the model and the true reward r_{t+1} of the observed transition.\n• Termination Estimation ℒω: the binary cross-entropy loss from the imagined termination ω̂_{t+1} to the ground truth ω_{t+1}, obtained from environment feedback.\nThe resulting total loss for end-to-end training of this set-based MBRL agent is thus:\nℒ = ℒTD + ℒdyn + ℒr + ℒω\nIn our experiments, no re-weighting is used for the terms of the total loss; this is possible because they are of similar magnitudes. In our experimental implementation, no recurrent mechanism is used; however, the same training procedure is naturally extendable. Jointly shaping the states avoids the representation collapsing to trivial solutions and makes the representation useful for all signal predictions of interest.", |
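| "Code sketch (Sec. 3 training losses)": "A minimal, hypothetical PyTorch-style sketch of the unweighted sum of the four training losses described in Sec. 3, assuming distributional (KL-based) value and reward heads, an ℓ2 dynamics-consistency term on the aligned latent sets, and a binary termination head. The dictionary keys, tensor shapes and the function name total_loss are illustrative assumptions, not the authors' released implementation.
```python
import torch.nn.functional as F

def total_loss(pred, target):
    # pred / target: dicts of tensors for one batch of sampled transitions.
    # TD loss: distributional value head, so a KL-divergence to the update target.
    l_td = F.kl_div(F.log_softmax(pred['value_logits'], dim=-1),
                    target['value_dist'], reduction='batchmean')
    # Dynamics consistency: l2 penalty between aligned predicted and true next state sets.
    l_dyn = F.mse_loss(pred['next_state_set'], target['next_state_set'])
    # Reward estimation: KL-divergence for the distributional reward head.
    l_r = F.kl_div(F.log_softmax(pred['reward_logits'], dim=-1),
                   target['reward_dist'], reduction='batchmean')
    # Termination estimation: binary cross-entropy on the termination logit
    # (target['termination'] is a float tensor of 0/1 ground-truth signals).
    l_term = F.binary_cross_entropy_with_logits(pred['termination_logit'],
                                                target['termination'])
    # The four terms are summed without re-weighting, as stated in the paper.
    return l_td + l_dyn + l_r + l_term
```
Keeping all four terms jointly shapes the latent state, which (per Sec. 3) is what prevents the representation from collapsing to a trivial solution that serves only one of the prediction targets.", |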
| "4 Consciousness-Inspired Bottleneck": "In this section, we introduce an inductive bias which facilitates C1-capable planning. In a nutshell, the planning is expected to focus on the parts of the world that matter for the plan. Simulations and predictions are all expected to be performed on a (small) bottleneck set, which contains all the important transition-related information. As illustrated in Figure 4, the model performs 1) selection of the bottleneck set from the full state-set, 2) dynamics simulation on the bottleneck set and 3) integration of predicted bottleneck set to form the predicted next state. Conditional State Selection We select a bottleneck set 2C of = objects from the potentially large state set BC of < = objects. Then we only model the transition for the selected objects in 2C . To make this selection, we use a key-query-value attention mechanism, where the key and the value for each object in BC are obtained from that object, and the query is a function of some learned dedicated set of vectors and of the action considered (see Appendix for details). Inspired by the work on self-attention for memory access [25], we use a semi-hard top-: attention mechanism to facilitate the selection of the bottleneck set. That is, after the query, the top-: attention weights are kept, all others are set to 0, and then the attention weights are renormalized. This semi-hard attention technique limits the influence of the ill-matched objects on the bottleneck set 2C while allowing for a gradient to propagate on the assignment of relative weight to different objects. With purely soft attention, weights for irrelevant objects are never 0 and learning to disentangle objects may be more difficult. Dynamics / Reward-Termination Prediction on Bottleneck Sets. We use the same architecture as described in Sec. 3, but taking the bottleneck objects as input rather than the full state set. Details of the architecture are in the Appendix. Change Integration. An integration operation, intuitively the inverse operation of selection, is implemented to ‘soft paste-back’ the changes of the bottleneck state onto the state set BC ,\n2In our experiments, no re-weighting is used for each term of the total loss. This is possible for the fact that they are in similar magnitudes. In our experimental implementation, no recurrent mechanism is used however the same training procedure is naturally extendable.\nyielding the imagined next state set B̂C+1. This is also achieved by attention operations, more specifically querying 2̂C+1 with BC , conditioned on the action 0C . Please check the Appendix for more details. Discussion. The bottleneck described in this section is a natural complement to the MBRL model with set representations discussed previously. In particular, planning and training are carried out the same way as discussed in Sec. 3. 
We expect the Conscious Planning (CP) agent to demonstrate the following advantages:\n• Higher Quality Representation: the interplay between the set representation and the selection / integration forces the representation to be more disentangled and more capable of capturing the locally sparse dynamics.\n• More Effective Generalization: only the objects essential for the purpose of planning participate in the transition, thus generalization should be improved both in-distribution and OOD, because the transition does not depend on the parts of the state ignored by the bottleneck.\n• Lower Computational Complexity: directly employing transformers to simulate the full state dynamics results in a complexity of O(|s_t|^2 d), where d is the length of the object embeddings, due to the use of Self-Attention (SA), while the bottleneck lowers it to O(|s_t| |c_t| d).\nFigure 5: Non-Static RL Setting, with in-distribution and OOD tasks. Panels: (a) In-dist, diff 0.35; (b) OOD, diff 0.25; (c) OOD, diff 0.35; (d) OOD, diff 0.45; (e) OOD, diff 0.55. (a) is an example of the training environments; (b-e) are examples of OOD environments (rotated 90 degrees, changing the distribution of grid elements). For OOD testing, we evaluate different levels of difficulty (b-e). The agent (red triangle) points in the forward movement direction. The goal is marked in green. For each episode (training or OOD), we randomly generate a new world from a sampling distribution. Note that the training environments and the OOD testing environments have no intersecting observations.", |
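| "Code sketch (Sec. 4 semi-hard top-k selection)": "A minimal sketch of the semi-hard top-k attention used to select the bottleneck set c_t from the state set s_t, assuming standard scaled dot-product attention with identity key/value maps for brevity; the tensor shapes and the function name select_bottleneck are hypothetical and not taken from the authors' implementation.
```python
import torch
import torch.nn.functional as F

def select_bottleneck(state_set, queries, k):
    # state_set: (batch, m, d) object embeddings; queries: (batch, n, d)
    # action-conditioned query vectors; k: number of objects kept per query.
    keys = values = state_set  # identity key/value maps, for brevity
    scores = queries @ keys.transpose(-2, -1) / keys.shape[-1] ** 0.5  # (batch, n, m)
    # Semi-hard top-k: keep only the k largest attention logits per query,
    # mask out the rest, then renormalize with a softmax.
    topk_idx = scores.topk(k, dim=-1).indices
    mask = torch.full_like(scores, float('-inf')).scatter(-1, topk_idx, 0.0)
    weights = F.softmax(scores + mask, dim=-1)  # zero weight outside the top-k
    return weights @ values                     # (batch, n, d) bottleneck set
```
Because only the kept logits receive non-zero weight after the masked softmax, gradients still flow through the selected objects while ill-matched objects are fully excluded, matching the motivation given in Sec. 4.", |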
| "5 Experiments": "We present our experimental settings and ablation studies of our CP agent against baselines to investigate the OOD generalization capabilities enabled by the C1-inspired bottleneck mechanism. To clarify, the OOD generalization we refer to specifically is the agents’ ability to generalize its learned task skills across seemingly different tasks with common underlying dynamics. Take the set of experiments in this section for example, we want the agent to be able to generalize its navigation skills in unseen environments.", |
| "5.1 Environment / Task Description": "We use environments based on the MiniGrid-BabyAI framework [11, 10, 21], which can be customized for generating OOD generalization tests with varying difficulties. To make sure we assess the agents as clearly as possible, the customized environments feature clear object definitions, with well-understood underlying dynamics based on object interactions. Furthermore, the environments are solvable by Dynamic Programming (DP) and can be easily tuned to generate OOD evaluation tasks. These characteristics are crucial for the experimental insights we are seeking. In this section, the experiments are carried out on 8 × 8 gridworlds3, as shown in Figure 5. The agent (red triangle) needs to navigate (by turning left, right or stepping forward) to the goal while dodging the lava cells along the way4. If the agent steps into lava (orange square), the episode terminates immediately with no reward. If the agent successfully reaches the goal (green square), it receives a reward of +1 and the episode terminates. For better generalization, the agent needs to understand how to avoid lava in general (and not at specific locations, since their placement changes) and to reach the goal as quickly as possible5. The environments provide grid-based observations that are ready to be interpreted as set representations: each cell of the observation array is an object, thus resulting in a set of 64 objects in BC for each observation. For the agent to be able to understand the environment dynamics instead of memorizing specific task layouts, we generate a new environment for each training or evaluation episode. In each training episode, the agent starts at a random position on the leftmost or rightmost edge and the goal is placed randomly somewhere along the opposite edge. In between the two edges, the lava cells are randomly generated according to a difficulty parameter which controls the probability of placing a lava cell at each valid position. The difficulty parameter controls partially how seemingly different the OOD evaluation tasks are to the in-distribution training tasks, though we know the underlying dynamics of all these tasks are the same. For training episodes, the difficulty is fixed to 0.35. We note that most usual RL benchmarks contain fixed environments, where the agent is expected to acquire a specific optimal policy. These environments are ill-suited for our purpose. For OOD evaluation, the agent is expected to adapt in new tasks with the same underlying dynamics in a 0-shot fashion, i.e. with the agent’s parameters fixed. The OOD tasks are crafted to include changes both in the support (orientation) and in the distribution (difficulty): the agent is deployed in transposed layouts6 with varying levels of difficulty ({0.25, 0.35, 0.45, 0.55}). The differences of in-distribution (training) and OOD (evaluation) environments are illustrated in Figure 5.", |
| "5.2 Agent Setting": "We build all the set-based MBRL agents included in the evaluation on a common model-free baseline: a set-based variant of Double-DQN (DDQN) [44] with prioritized replay and distributional outputs. For more details, please check the Appendix.\n3We provide additional results for world sizes ranging from 6 × 6 to 10 × 10 in the Appendix. 8 × 8 is chosen as the demonstrative case.\n4In the Appendix, we provide additional test settings with different dynamics, which also demonstrates the agents’ ability to work well despite cluttering distractions.\n5Please check the Appendix for extra sets of tasks with different agent actions and task objectives. 6The agent starts at the top or bottom edge and the goal is respectively on the bottom or top edge,\nwhereas a training environment has the agent and goal on the left or right edges\nWe compare the proposed approach, labelled CP in the figures (for Conscious Planning) against the following methods:\n• UP (for Unconscious Planning): the agent proposed in Section 3, lacking the bottleneck. • model-free: the model-free set-based agent is the basis for the set-based model-based\nagents. It consists of only the encoder and the value estimator, sharing their architectures with CP and UP.\n• Dyna: the set-based MBRL agent which includes a model-free agent and an observationlevel transition model, i.e. a transition generator. For the model, we use the CP transition model (with the same hyperparameters as the best performing CP agent) on the original environment features without an encoder. We also use the same hyperparameters as in the CP model training. The agent essentially doubles the batch size of the model-free baseline by augmenting training batches with an equal number of generated transitions.\n• Dyna*: A Dyna baseline that uses the true environment model for transition generation. This is expected to demonstrate Dyna’s performance limit.\n• WM-CP: A world model CP variant that differs by following a 2-stage training procedure [16]. First, the model (together with the encoder) is trained with 106 random transitions. After this, the encoder and the model are fixed and RL begins.\n• NOSET: A UP-counterpart with vectorized representations and no bottleneck mechanism. Particularly, for CP and UP agents, we also test the following variants:\n• CP-noplan: A CP agent that trains normally but does not plan in OOD evaluations, i.e. carrying out model-free behavior. This baseline aims to demonstrate the impact of planning in the training process on the OOD capability of the value estimator.\n• UP-noplan: UP counterpart of CP-noplan. Note that the compared methods share architectures as much as possible to ensure fair comparisons. Details of the compared methods, their design and hyperparameters are provided in the Appendix.", |
| "5.3.1 In-Distribution": "In Figure 6, we present the in-distribution evaluation curves for the different agents. For UP, CP and the corresponding model-free baselines, the performance curves show no significant difference, which demonstrates that these agents are effective in learning to solve the in-distribution tasks. During the “warm-up” period of the WM baseline, the model learns a representation that captures the underlying dynamics. After the warm-up, the encoder and the model parameters are fixed and only the value estimator learns to predict the state-action values based on the given representation. The increase in performance is not only delayed due to the warm-up phase (during which rewards are not taken into account) but also harmed, presumably because the value estimator has no ability to shape the representation to better suit its needs. The Dyna baseline performs badly while the Dyna* baselines perform relatively well. This is likely due to the delusional transitions generated by the model at the early stages of training, from which the value estimator never recovers. However, the Dyna* baseline does not achieve satisfactory OOD performance (Figure 7), presumably because its planning only focuses on observed data, and hence only improves the in-distribution performance, due to insufficiently strong generalization. The NOSET baseline performs very badly even in-distribution, per Figure 6. In the Appendix, we show that the NOSET baseline seems only able to perform well in a more classical, static RL setting, which may indicate that it relies on memorization. We provide more results regarding the model accuracy in the Appendix.", |
| "5.3.2 OOD Task-Solving Performance": "The OOD evaluation focuses on testing the agents’ performance in a set of environments forming a gradient of task difficulty. In Figure 7, we present the performance error bars of\n0 0.5 1 1.5 2 2.5\nagent-env interactions 106\n0\n0.2\n0.4\n0.6\n0.8\n1\nsu cc\nes s\nra te\nCP(8) UP modelfree WM Dyna Dyna* NOSET\nthe compared methods under different OOD difficulty levels. CP(8), CP with bottleneck size = = 8, shows a clear performance advantage over UP, validating the OOD generalization capability. The Dyna* baseline, essentially the performance upper bound of Dyna-based planning methods, shows no significant performance gain in OOD tests compared to modelfree methods. WMmay have the potential to reach similar performance as CP, yet it needs to warm up the encoder with a large portion of the agent-environment interaction budget, if no free unsupervised phase is provided. We dive into this matter in the Appendix.", |
| "5.3.3 Ablation": "We validate design choices with ablation. Figure 8 visualizes two of these experiments. For more ablation results, which include validation of the effectiveness of different model choices, and further quantitative measurements, e.g. of OOD ability as a function of behavior optimality and model accuracy, please check the Appendix.", |
| "5.4 Summary of Experimental Results": "With the scope limited to our experiments, the results allow us to draw these conclusions:\n• Set-based representations enable at least in-distribution generalization across different environment instances in our non-static setting, where the agents are forced to discover dynamics that are preserved across environments;\n• Model-free methods seem to face more difficulties in solving our OOD evaluation tasks which preserved the same environment dynamics to the corresponding in-distribution training settings;\n• MPC exhibits better performance than Dyna in the tested OOD generalization settings; • Online joint training of the representation with all the relevant signals could bring benefits\nto RL, as suggested in [22]. Please check Appendix E for more discussions of this matter; • In accordancewith our intuition, transitionmodelswith bottlenecks tend to learndynamics\nbetter in our tests. This is likely for they prioritize learning the relevant aspects, while models without bottleneck may have to waste capacity on irrelevance;\n0 0.5 1 1.5 2 2.5\nagent-env interactions 106\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\nsu cc\nes s\nra te\nCP(8)\nnoplan(8)\nUP\nnoplan(UP)\nmodelfree\n(a) Bottleneck benefits OOD capability: noplan(8) andnoplan(UP) correspond to theCP(8) andUP variantswith planning disabled during OOD tests. Comparing noplan against modelfree, we see that planning during training is beneficial for both value estimation and representation learning.\n0 0.5 1 1.5 2 2.5\nagent-env interactions 106\n0\n0.2\n0.4\n0.6\nsu cc\nes s\nra te\nCP(16)-best\nCP(8)-best\nCP(4)-best\nCP(16)\nCP(8)\nCP(4)\n(b)Value estimators do not generalizewell in our OOD tests: random heuristic significantly outperforms best-first heuristic OOD.\nFigure 8: Key ablation results: With diff 0.35, each error bar is obtained from 20 independent runs.\n• From further experiments provided in the Appendix E, we observe that bottleneckequipped agents may also be less affected by larger environmental scales, possibly due to their prioritized learning of interesting entities.", |
| "Acknowledgements": "Mingde is grateful for the financial support from the Fonds de Recherche du Québec - Nature et Technologies (FRQNT). Yoshua acknowledges the financial support from Samsung Electronics and IBM. We acknowledge the computational power provided by Compute Canada. We are also thankful for the helpful discussions with Xiru Zhu (about the design of the environment generation procedure), David Yu-Tung Hui (about the bag-of-word representations, insights on BabyAI as well as about the writing of the introduction section), Min Lin (about the design of the dynamics model as well as the early stage brainstorming) and Ian Porada (for consistently supporting the student authors).", |
| "Reviewer Summary": "Reviewer_1: The authors introduce a model-based deep reinforcement learning algorithm that uses tree search based MPC over a learned latent space. In this work, the latent space does not use a vectorized representation but rather a set-based representation with a small bottleneck. The authors then compare their methods to baselines and ablations on a toy environment.\n\nReviewer_2: This work presents a model-based reinforcement learning agent that makes use of a bottleneck attention mechanism for the planning module.\n\nReviewer_3: The authors introduce an architecture for focusing agents in model-based reinforcement learning on just the relevant subset of the environment representation for making decisions (reward, termination, and successor state prediction). The approach leans heavily on recent advances in set-based representations and the approach is tested in the MiniGrid-BabyAI framework. Results show promising behaviour, particularly in the ability to generalize across different settings.\n\nReviewer_4: The authors take inspiration from theories of human consciousness to construct a model-based architecture for RL, with the goal of generalizing more effectively. They conduct experiments on a gridworld to show that their proposed architecture generalizes more effectively." |
| } |