# Introduction
A Relational Markov Decision Process (RMDP) [\(Boutilier](#page-9-0) [et al.,](#page-9-0) [2001\)](#page-9-0) is a first-order, predicate calculus-based representation for expressing instances of a probabilistic planning domain with a possibly unbounded number of objects. An RMDP *domain* has object types, relational state predicate and action symbols that are applied over objects, first-order transition templates that specify probabilistic effects associated with action symbols, and a first-order reward structure. A domain *instance* additionally specifies a set of objects and a start state, thus defining a ground MDP with a known start state [\(Kolobov et al.,](#page-9-0) [2012\)](#page-9-0). *Relational* planners aim to produce a single *generalized* policy that can yield a ground policy for *all* instances of the domain, with little instance-specific computation. *Domain-independent* planners are representation-specific, but domain-agnostic, making them applicable to all domains expressible in the language. In this paper, we design a domain-independent relational planner.
RMDP planners, in their vision, aspire to scale to very large problem sizes by exploiting the first-order structure of a domain, thereby reducing the curse of dimensionality. Traditional RMDP planners attempted to find a generalized *first-order* value function or policy using symbolic dynamic programming [\(Boutilier et al.,](#page-9-0) [2001\)](#page-9-0), or by approximating them via a function over first-order basis functions (e.g., [\(Guestrin et al.,](#page-9-0) [2003;](#page-9-0) [Sanner & Boutilier,](#page-10-0) [2009\)](#page-10-0)). Unfortunately, these methods met with rather limited success; e.g., no relational planner participated in the International Probabilistic Planning Competition (IPPC)<sup>1</sup> after 2006, even though all competition domains were relational. We believe that this lack of success may be due to the inherent limitations in the representational power of a basis function-based representation. Through this work, we wish to revive the research thread on RMDPs and explore whether neural models can be effective in representing these first-order functions.
We present Symbolic NetWork (SYMNET), the first domain-independent neural relational planner that computes generalized policies for RMDPs expressed in the symbolic representation language of RDDL [\(Sanner,](#page-10-0) [2010\)](#page-10-0). SYMNET outputs its generalized policy via a neural model all of whose parameters are specific to a domain, but shared among all instances of that domain. So, on a new test instance, the policy can be applied out of the box using pre-trained parameters, i.e., without any retraining on the test instance. SYMNET is domain-independent because it converts an RDDL domain file (and instance files) completely automatically into neural architectures, without any human intervention.

<sup>1</sup> Indian Institute of Technology Delhi. Correspondence to: Sankalp Garg <sankalp2621998@gmail.com>, Aniket Bajpai <quantum.computing96@gmail.com>, Mausam <mausam@cse.iitd.ac.in>.

<sup>1</sup>[http://www.icaps-conference.org/index.php/Main/Competitions](http://www.icaps-conference.org/index.php/Main/Competitions)
The SYMNET architecture uses two key ideas. First, it views each state of each domain instance as a graph, where nodes represent the *object tuples* that are valid arguments to some relational predicate. An edge between two nodes indicates that an action causes predicates over these two nodes to interact in the instance. The values of predicates in a state act as features for the corresponding nodes. SYMNET then learns node (and state) embeddings for these graphs using graph neural networks. Second, SYMNET learns a neural network to represent the policy and value function over this graph-structured state. To learn these in an instance-independent way, we recognize that most ground actions are a first-order action symbol applied over some object tuple. SYMNET scores such ground actions as a function over the action symbol and the relevant embeddings of object tuples. After training all model parameters using reinforcement learning over training instances of a domain, the SYMNET architecture can be applied to any new (possibly larger) test problem without any further retraining.
We perform experiments on nine RDDL domains from IPPC 2014 [\(Grzes et al.,](#page-9-0) [2014\)](#page-9-0). Since no existing planner can produce a policy without instance-specific computation, we compare SYMNET to random policies (a lower bound) and to policies trained from scratch on the test instance. We find that SYMNET obtains vastly higher rewards than random, and comes quite close to the policies trained from scratch; it even outperforms them on 28% of the instances. Overall, we believe that our work is a step forward for the difficult problem of domain-independent RMDP planning. We release the code of SYMNET for future research.<sup>2</sup>
# Method
We now present SYMNET's architecture for training a generalized policy for a given RMDP domain. Following existing research, we hypothesize that for any instance of a domain, we can learn a representation of the current state in a latent space and then output a policy in that latent space, which is decoded into a ground action. To achieve this, SYMNET uses three modules: (1) problem representation, which constructs an instance graph for every problem instance, (2) representation learning, which learns embeddings for every node in the instance graph, and for the state, and (3) policy decoding, which computes a value for every ground action, outputting a mixed policy for a given state. All parameters of the representation learning and policy decoding modules are shared across all instances of a domain. SYMNET's full architecture is shown in Figure [2.](#page-4-0)
We follow TRAPSNET in the general idea of converting an instance into an instance graph and then learning a graph encoder to handle different-sized domains. However, the main challenge for a general RMDP, one that does not satisfy the restricted assumptions of TRAPSNET, is in defining a coherent graph structure for an instance. The first key question is what should be a node in the instance graph. TRAPSNET's approach was to use single objects as nodes, since all fluents (and actions) in its domains took single objects as arguments. This may not work for a general RMDP, since its fluents and actions may take several objects as arguments. Second, how should edges be defined? Edges represent the interaction between nodes. TRAPSNET defined them based on the one binary non-fluent in its domain. A general RMDP may not have any non-fluent symbol, or may have many (possibly higher-order) non-fluents.
Figure 1. DBN for a modified wildfire problem.
Last but not least, real domain-independence for SYMNET can be achieved only when it parses an RDDL domain file without any human intervention. This leads to a novel challenge of reconciling the multiple different ways RDDL offers to express the same domain. In our running example, the connectivity structure between cells may be defined using non-fluents y-neighbour(y, y'), x-neighbour(x, x'), or using a quaternary non-fluent neighbour(x, y, x', y'). Since both representations describe the same problem, an ideal desideratum is that the graph construction algorithm leads to the same instance graph in both cases. But this is a challenge, since the corresponding RDDL domains may look very different. While this problem is too hard to solve in general, since it amounts to judging the logical equivalence of two domains, SYMNET achieves the same instance graphs when the equivalence lies within non-fluents.
To solve these problems, we make the observation that the dynamics of an RDDL instance ultimately compile to a ground DBN, with nodes for state variables (fluent symbols applied to object tuples) and actions (action symbols applied to object tuples).<sup>3</sup> The DBN exposes a connectivity structure that determines which state variables and actions directly affect another state variable. It additionally has conditional probability tables (CPTs) for each transition. Figure 1 shows a DBN for our running example instance. Here, the left column is for the current time step, and the right for the next one. The edges represent which state and action variables affect each next-state variable. We note that the ground DBN does not expose non-fluents, since their values are fixed and their dependence can be compiled directly into the CPTs.
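Concretely, the ground DBN can be viewed as a map from each next-state variable to the current-state variables and ground actions its CPT depends on. Below is a minimal hand-coded sketch for the 2 × 1 wildfire instance; the variable names and dependency sets are illustrative, not rddlsim's actual output format:

```python
# Hypothetical ground DBN for the 2x1 wildfire instance: each next-state
# variable maps to the current-state variables ("parents") and ground
# actions its CPT depends on. Primes mark next-step variables.
dbn = {
    "burning'(x1,y1)": {
        "parents": ["burning(x1,y1)", "burning(x2,y1)", "out-of-fuel(x1,y1)"],
        "actions": ["put-out(x1,y1)", "cut-out(x1,y1)"],
    },
    "burning'(x2,y1)": {
        "parents": ["burning(x2,y1)", "burning(x1,y1)", "out-of-fuel(x2,y1)"],
        "actions": ["put-out(x2,y1)", "cut-out(x2,y1)"],
    },
    "out-of-fuel'(x1,y1)": {
        "parents": ["out-of-fuel(x1,y1)", "burning(x1,y1)"],
        "actions": ["cut-out(x1,y1)"],
    },
    "out-of-fuel'(x2,y1)": {
        "parents": ["out-of-fuel(x2,y1)", "burning(x2,y1)"],
        "actions": ["cut-out(x2,y1)"],
    },
}

def influences(dbn, f_cur, g_next):
    """True if current-state variable f_cur is a parent of next-state g_next."""
    return f_cur in dbn[g_next]["parents"]
```

Notice that no non-fluent (e.g., neighbour) appears in this structure: its effect is already folded into which parents each CPT has, which is exactly why different non-fluent encodings can yield the same ground DBN.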
SYMNET converts the ground DBN to an instance graph. It constructs a node for every unique *object tuple* that appears as an argument in any state variable in the DBN. Moreover, two nodes are connected if the state variables associated with the two nodes influence each other in the DBN through some action. This satisfies all our challenges. First, it goes beyond an object as a node, but only defines those nodes that are likely important in the instance. Second, it defines a clear semantics of edges, while maintaining the intuition of "directly influences." Finally, it handles some variety of non-fluent representations for the same domain: since the DBN compiles non-fluents away and does not expose them, the same instance encoded with different non-fluent representations often yields the same ground DBN, and thus the same instance graph.

<sup>3</sup> done automatically using code from [https://github.com/ssanner/rddlsim](https://github.com/ssanner/rddlsim)

<span id="page-4-0"></span>

Figure 2. Policy network for SYMNET demonstrated on a 2 × 1 wildfire domain. A fully connected network is used in the action decoder.
Construction of Instance Graph: We now formally describe the conversion of a DBN into a directed instance graph G=(V,E), where V is the set of vertices and E is the set of edges. G is composed of $\mathcal{K}=|\mathcal{A}|+1$ disjoint subgraphs $G_j=(V_j,E_j)$ . Intuitively, each graph $G_j$ carries information about the influence of an individual action symbol $\mathbf{a}_j \in \mathcal{A}$ , while $G_{\mathcal{K}}$ represents the influence of the full set $\mathcal{A}$ as well as the natural dynamics. In our example, $\mathcal{K}=4$ , since we have three action symbols: put-out, cut-out and finisher.
To describe the formal process, we define three analogous sets: $O_f$ , $O_{nf}$ and $O_a$ . $O_f$ is the set of all object tuples that act as a valid argument for any fluent symbol; $O_{nf}$ and $O_a$ are the analogous sets for non-fluent and action symbols. In our running example, $O_f = \{(x1,y1),(x2,y1)\}$ , $O_{nf} = \{(x1,y1),(x2,y1),(x1,y1,x2,y1),(x2,y1,x1,y1)\}$ , and $O_a = \{(x1,y1),(x2,y1)\}$ . Nodes in the instance graph are associated with object tuples; we use $o_v$ to denote the object tuple associated with node v. SYMNET converts a DBN into an instance graph as follows:
1. The distinct object tuples in fluents form the nodes of the graph, i.e., $V_j = \{v \mid o_v \in O_f\}, \forall j$ . For the example, each $V_j$ is a separate copy of $\{(x1,y1),(x2,y1)\}$ .

2. We add an edge between two nodes in $G_j$ if some state variables corresponding to them are connected in the DBN through $\mathbf{a}_j$ . Formally, $E_j(u,v)=1$ if $\exists f,g\in F, \exists o_a\in O_a, j\in \{1,\dots,|\mathcal{A}|\}$ s.t. the transition dynamics $(T^f)$ for state variable $g'(o_v)$ and action $\mathbf{a}_j(o_a)$ depend on state variable $f(o_u)$ or $f'(o_u)$ . For the running example, there is no edge between (x1,y1) and (x2,y1), since the effects of cut-out, put-out, or finisher on one cell do not depend on any other cell.

3. We add an edge between two nodes in $G_{\mathcal{K}}$ if some state variables corresponding to them are connected in the DBN (possibly through natural dynamics). I.e., $E_{\mathcal{K}}(u,v)=1$ if $\exists f,g\in F$ s.t. there is an edge from $f(o_u)$ (or $f'(o_u)$ ) to $g'(o_v)$ in the DBN. For the example, $E_4((x1,y1),(x2,y1))=1$ , as there is an edge between burning(x1,y1) and burning'(x2,y1), since fire propagates to neighboring cells through natural dynamics. Similarly, $E_4((x2,y1),(x1,y1))=1$ .

4. As every node influences itself, self loops are added on each node: $E(v, v) = 1, \forall v \in V$ .
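The four steps above can be sketched for the 2 × 1 wildfire example as follows. The DBN-derived relations `dep` (tuple pairs interacting via each action symbol) and `nat` (pairs connected through any DBN edge, including natural dynamics) are assumed inputs, hand-coded here for illustration:

```python
O_f = [("x1", "y1"), ("x2", "y1")]            # object tuples appearing in fluents
action_symbols = ["put-out", "cut-out", "finisher"]
K = len(action_symbols) + 1                   # one subgraph per action symbol, plus G_K

# dep[j]: tuple pairs whose state variables interact via action symbol j
# (empty here: no cross-cell action effects in this domain).
# nat: pairs connected through any DBN edge, incl. natural fire spread.
dep = {0: set(), 1: set(), 2: set()}
nat = {(("x1", "y1"), ("x2", "y1")), (("x2", "y1"), ("x1", "y1"))}

def build_instance_graph():
    graphs = []
    for j in range(K):
        V = set(O_f)                                   # step 1: nodes from O_f
        E = set(dep[j]) if j < K - 1 else set(nat)     # steps 2-3: action / natural edges
        E |= {(v, v) for v in V}                       # step 4: self loops
        graphs.append((V, E))
    return graphs

graphs = build_instance_graph()
```

The first three subgraphs end up with only self loops, while the fourth carries the fire-propagation edges, matching the worked example above.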
For each node $v \in V$ , we additionally construct a feature vector $h(v)$ , which consists of a fluent feature vector $h^f(v)$ and a non-fluent feature vector $h^{nf}(v)$ , such that $h = concat(h^f, h^{nf})$ . All nodes associated with the same object tuple share the same feature vector. The feature vector is constructed as follows:
1. The fluent features for each node are obtained from the state of the problem instance. The values of the state variables corresponding to a node are added as features to that node. Whenever a fluent symbol cannot take a node as an argument, we add zero as the feature for it. Formally, $h^f(v)_i = g_i(o_v)$ if $g_i \in \mathcal{F}$ , $v \in V$ , and $o_v$ is an argument of $g_i$ ; otherwise, $h^f(v)_i = 0$ , $\forall i = 1 \dots |\mathcal{F}|$ . For the running example, we have two state-fluents. Hence, $h^f((x1, y1)) = [\text{burning}(x1, y1), \text{out-of-fuel}(x1, y1)]$ .
2. The non-fluent feature vector for each node is obtained from the RDDL file. The values of non-fluents defined on the node, and additionally any unary non-fluents whose argument intersects the node, are added as features for the node. The default value is obtained from the domain file, while the specific value (if available) is obtained from the instance file. Formally, $h^{nf}(v)_i = g_i(o_{nf})$ if $g_i \in \mathcal{NF}$ , $v \in V$ , $o_{nf} \in O_{nf}$ , and $(o_v = o_{nf}) \vee (|o_{nf}| = 1 \wedge o_{nf} \subset o_v)$ ; otherwise, $h^{nf}(v)_i = 0$ , $\forall i = 1 \dots |\mathcal{NF}|$ . In our example, $h^{nf}((x1, y1)) = [\text{target}(x1, y1)]$ .
We note that the size of the feature vector on each node depends on the domain, but is independent of the number of objects in the instance: there is a constant number of feature values per state predicate symbol. This allows variable-sized instances of the same domain to use the same representation.
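As a concrete illustration, the feature construction above can be sketched for cell (x1,y1) of the running example; the symbol names follow the paper, but the state values and the omission of the unary-intersection case are simplifications:

```python
# Node feature sketch for the running wildfire example. Fluent values come
# from the current state; non-fluent values from the RDDL files. Missing
# entries default to 0.0, as in the formal definition.
fluent_symbols = ["burning", "out-of-fuel"]
nonfluent_symbols = ["target"]

state = {("burning", ("x1", "y1")): 1.0, ("out-of-fuel", ("x1", "y1")): 0.0}
nonfluents = {("target", ("x1", "y1")): 1.0}

def node_features(o_v):
    # fluent part: value of each state variable defined on this tuple, else 0
    h_f = [state.get((g, o_v), 0.0) for g in fluent_symbols]
    # non-fluent part: non-fluents defined on this tuple, else 0
    # (unary non-fluents intersecting the tuple are omitted for brevity)
    h_nf = [nonfluents.get((g, o_v), 0.0) for g in nonfluent_symbols]
    return h_f + h_nf  # h = concat(h^f, h^nf)

h = node_features(("x1", "y1"))
```

Every node gets a vector of length |F| + |NF| (here 3), regardless of how many cells the instance has.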
SYMNET runs a graph attention network (GAT) on the instance graph to obtain a node embedding $\mathbf{v}$ for each node $v \in V$ . It then constructs a tuple embedding for each object tuple by concatenating the node embeddings of all associated nodes. Formally, let $O_V = \{o_v \mid v \in V\}$ . For $o \in O_V$ , the tuple embedding $\mathbf{o} = concat(\mathbf{v})$ , over all $v$ s.t. $o_v = o$ . SYMNET also computes a state embedding $\mathbf{s}$ by taking a dimension-wise max over all tuple embeddings, i.e., $\mathbf{s} = MaxPool_{o \in O_V}(\mathbf{o})$ .
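The concatenation and pooling steps can be sketched as follows; random vectors stand in for the GAT outputs, and the subgraph count and embedding size are illustrative:

```python
import numpy as np

# Each object tuple has one node (hence one GAT embedding) per subgraph copy.
# The tuple embedding concatenates them; the state embedding max-pools
# dimension-wise over all tuple embeddings.
rng = np.random.default_rng(0)
K, d = 4, 8                              # subgraph copies, node embedding size
node_emb = {
    ("x1", "y1"): [rng.normal(size=d) for _ in range(K)],
    ("x2", "y1"): [rng.normal(size=d) for _ in range(K)],
}

tuple_emb = {o: np.concatenate(vs) for o, vs in node_emb.items()}  # o = concat(v)
state_emb = np.max(np.stack(list(tuple_emb.values())), axis=0)     # dimension-wise max
```

The state embedding has the same size (K * d) for every instance of the domain, which is what lets the downstream decoders be instance-independent.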
SYMNET maps the latent representations $\mathbf{o}$ and $\mathbf{s}$ into a state value $V(s)$ (long-term expected discounted reward starting in state $s$ ) and a mixed policy $\pi(s)$ (probability distribution over all ground actions). This is done using a value decoder and a policy decoder, respectively.
There are several challenges in designing a (generalized) policy decoder. First, the action symbols may take multiple objects as arguments. Second, and more importantly, action symbols may even take as arguments object tuples that do not correspond to any node in the instance graph. This happens if an object tuple (in $O_a$ ) is not an argument to any fluent symbol, i.e., $\exists o_a$ s.t. $o_a \in O_a \wedge o_a \notin O_f$ . We note that adding these object tuples as nodes in the instance graph may not work, since we would not have any natural features for those nodes.
In response, we design a novel framework for the policy and value decoders. The decoders consist of fully connected layers, whose input is a subset of the tuple embeddings $\mathbf{o}$ . SYMNET uses the following rules to construct the decoders:
1. The number of decoders is constant for a given domain and is equal to the number of distinct action symbols (|A|). For the running example, three different decoders for each of policy and value decoding are constructed, namely cut-out, put-out and finisher.

2. The input to a decoder is the state embedding $\mathbf{s}$ concatenated with the embeddings of the object tuples corresponding to the state variables affected by the action in the DBN. In the running example, the put-out(x1, y1) action takes only the tuple embedding of (x1, y1) as input. However, the number of state variables affected by a ground action might vary across instances of the same domain. For example, the finisher action affects all cells. To alleviate this, we use a size-independent max pool aggregation over the embeddings of all affected tuples to create a fixed-size input.

3. Decoder parameters are specific to action symbols and not to ground actions. In the running example, put-out(x1, y1) will be scored using the embedding of (x1, y1); similarly for (x2, y1). But both scorings will use a single parameter set specific to put-out.

4. The policy decoder computes scores of all ground actions, which are normalized using a softmax to output the final policy in a state. At test time, the highest-probability action is selected as the final action.

5. All value outputs are summed to give the final value for that state. This modeling choice reflects the additive reward structure of many RDDL domains.
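Rules 1-4 can be sketched as follows for the running example; the weight vectors are random stand-ins for learned decoder parameters, and the single linear layer is a simplification of the fully connected decoders:

```python
import numpy as np

# One parameter set per action symbol (rule 3), shared across its ground
# actions. Each ground action is scored from [state ; max-pooled affected
# tuple embeddings] (rule 2), then a softmax over all scores (rule 4).
rng = np.random.default_rng(1)
d = 8                                    # tuple/state embedding size
s = rng.normal(size=d)                   # state embedding
tuples = {("x1", "y1"): rng.normal(size=d), ("x2", "y1"): rng.normal(size=d)}

W = {a: rng.normal(size=2 * d) for a in ["put-out", "cut-out", "finisher"]}

def score(symbol, affected):
    pooled = np.max(np.stack(affected), axis=0)   # size-independent max pool
    return float(W[symbol] @ np.concatenate([s, pooled]))

ground = {
    ("put-out", ("x1", "y1")): score("put-out", [tuples[("x1", "y1")]]),
    ("put-out", ("x2", "y1")): score("put-out", [tuples[("x2", "y1")]]),
    ("cut-out", ("x1", "y1")): score("cut-out", [tuples[("x1", "y1")]]),
    ("cut-out", ("x2", "y1")): score("cut-out", [tuples[("x2", "y1")]]),
    ("finisher", None): score("finisher", list(tuples.values())),  # affects all cells
}
logits = np.array(list(ground.values()))
policy = np.exp(logits - logits.max()); policy /= policy.sum()     # softmax
best = list(ground)[int(np.argmax(policy))]                         # greedy at test time
```

Note how finisher, which affects a variable number of cells across instances, still produces a fixed-size decoder input via the max pool.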
While the construction of the SYMNET architecture depends heavily on the RDDL domain and instance files, actual training is done via the model-free reinforcement learning approach of A3C [\(Mnih et al.,](#page-9-0) [2016\)](#page-9-0). RL learns from interactions with the environment; SYMNET simulates the environment using the RDDL-specified dynamics. The use of model-based planning algorithms for this purpose is left as future work. We formulate the training of SYMNET as a multi-task learning problem (see Section [3\)](#page-3-0), so that it generalizes well and does not overfit to any one problem instance. The parameters for the state encoder, policy decoder, and value decoder are learned using updates similar to those in A3C. SYMNET's loss function for the policy and value networks is the same as that in the A3C paper (summed over the multi-task problem instances).
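For reference, the per-step A3C-style loss (policy gradient weighted by the advantage, value regression, and an entropy bonus) can be sketched on toy numbers; the coefficients here are common defaults, not values from the paper:

```python
# A3C-style per-step loss, sketched on scalar inputs. logp_a is the log
# probability of the taken action, ret the (n-step) return estimate.
def a3c_loss(logp_a, value, ret, entropy, value_coef=0.5, entropy_coef=0.01):
    advantage = ret - value
    policy_loss = -logp_a * advantage      # advantage treated as a constant
    value_loss = advantage ** 2            # squared TD/return error
    return policy_loss + value_coef * value_loss - entropy_coef * entropy

loss = a3c_loss(logp_a=-1.2, value=0.3, ret=1.0, entropy=0.8)
```

In SYMNET this loss is summed over the training instances of the domain, so one gradient update serves all tasks simultaneously.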
As constructed, SYMNET's number of parameters is independent of the size of the problem instance. Hence, the same network can be used for problem instances of any size. After the learning is completed, the network represents a generalized policy (or value), since it can be directly used on a new problem instance to compute the policy in a single forward pass.
<span id="page-6-0"></span>
| Domain / Instance | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|
| AA | $\textbf{0.93} \pm \textbf{0.01}$ | $\textbf{0.94} \pm \textbf{0.01}$ | $\textbf{0.94} \pm \textbf{0.01}$ | $\textbf{0.92} \pm \textbf{0.02}$ | $\textbf{0.95} \pm \textbf{0.03}$ | $\textbf{0.91} \pm \textbf{0.05}$ |
| CT | $0.87 \pm 0.16$ | $0.78 \pm 0.14$ | $\textbf{1.00} \pm \textbf{0.07}$ | $\textbf{0.98} \pm \textbf{0.13}$ | $\textbf{0.99} \pm \textbf{0.04}$ | $\textbf{1.00} \pm \textbf{0.05}$ |
| GOL | $\textbf{0.96} \pm \textbf{0.06}$ | $\textbf{1.00} \pm \textbf{0.05}$ | $0.65 \pm 0.05$ | $0.83 \pm 0.03$ | $\textbf{0.95} \pm \textbf{0.04}$ | $0.64 \pm 0.08$ |
| Nav | $\textbf{0.99} \pm \textbf{0.01}$ | $\textbf{1.00} \pm \textbf{0.01}$ | $\textbf{0.99} \pm \textbf{0.01}$ | $\textbf{1.00} \pm \textbf{0.01}$ | $\textbf{1.00} \pm \textbf{0.02}$ | $\textbf{1.00} \pm \textbf{0.02}$ |
| ST | $\textbf{0.91} \pm \textbf{0.05}$ | $0.84 \pm 0.02$ | $0.86 \pm 0.05$ | $0.85 \pm 0.05$ | $0.81 \pm 0.02$ | $0.89 \pm 0.03$ |
| Sys | $\textbf{0.96} \pm \textbf{0.03}$ | $\textbf{0.98} \pm \textbf{0.02}$ | $\textbf{0.98} \pm \textbf{0.02}$ | $\textbf{0.97} \pm \textbf{0.02}$ | $\textbf{0.99} \pm \textbf{0.01}$ | $\textbf{0.96} \pm \textbf{0.03}$ |
| Tam | $\textbf{0.92} \pm \textbf{0.07}$ | $\textbf{1.00} \pm \textbf{0.12}$ | $\textbf{0.98} \pm \textbf{0.06}$ | $\textbf{1.00} \pm \textbf{0.12}$ | $\textbf{1.00} \pm \textbf{0.12}$ | $\textbf{0.95} \pm \textbf{0.06}$ |
| Tra | $0.85 \pm 0.18$ | $\textbf{0.93} \pm \textbf{0.06}$ | $0.88 \pm 0.21$ | $0.74 \pm 0.17$ | $\textbf{0.94} \pm \textbf{0.12}$ | $0.87 \pm 0.13$ |
| Wild | $\textbf{0.99} \pm \textbf{0.01}$ | $\textbf{1.00} \pm \textbf{0.00}$ | $\textbf{1.00} \pm \textbf{0.00}$ | $\textbf{1.00} \pm \textbf{0.00}$ | $\textbf{1.00} \pm \textbf{0.01}$ | $\textbf{1.00} \pm \textbf{0.01}$ |
Table 1. $\alpha_{symnet}(0)$ values of SYMNET. Bold values represent scores over 90% of the max performance.