# Method

We are interested in solving problems specified through a Markov Decision Process (MDP), which consists of states $s \in S$, actions $a \in A$, rewards $r(s, s') \in R$, and a transition model $T(s, a, s')$ that indicates the probability of transitioning to a specific next state given a current state and action, $P(s' | s, a)$ [@sutton1998reinforcement][^1]. For simplicity, we refer to all rewards $r(s,s')$ as $r$ for the remainder of the paper. Importantly, we assume that the reward function does not depend on actions, which allows us to formulate QSS values without any dependency on actions. Reinforcement learning aims to find a policy $\pi(a|s)$ that represents the probability of taking action $a$ in state $s$. We are typically interested in policies that maximize the long-term discounted return $R=\sum_{k=t}^H \gamma^{k-t} r_k$, where $\gamma$ is a discount factor that specifies the importance of long-term rewards and $H$ is the terminal step.

Optimal QSA values express the expected return for taking action $a$ in state $s$ and acting optimally thereafter: $$\begin{align*} Q^*(s,a) = \mathbb{E}[r + \gamma \max_{a'} Q^*(s',a')|s,a]. \end{align*}$$ These values can be approximated using an approach known as Q-learning [@watkins1992q]: $$\begin{align*} Q(s,a) \leftarrow Q(s,a) + \alpha [r + \gamma \max_{a'} Q(s',a') - Q(s,a)]. \end{align*}$$ Finally, QSA learned policies can be formulated as: $$\begin{align*} \pi(s) = \mathop{\mathrm{arg\,max}}_a Q(s,a). \end{align*}$$

We propose an alternative paradigm for defining optimal values, $Q^*(s, s')$, or the value of transitioning from state $s$ to state $s'$ and acting optimally thereafter. By analogy with the standard QSA formulation, we express this quantity as: $$\begin{equation} Q^*(s,s') = r + \gamma \max_{s'' \in N(s')} Q^*(s', s''). \end{equation}$$ Although this equation may be applied to any environment, for it to be a useful formulation, the *environment must be deterministic*.
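The tabular Q-learning update above can be sketched in a few lines. This is a generic illustration (the state and action labels, step size, and discount are arbitrary choices, not our experimental setup):

```python
from collections import defaultdict

# Minimal sketch of the tabular Q-learning update (Watkins).
GAMMA, ALPHA = 0.9, 0.5
Q = defaultdict(lambda: defaultdict(float))   # Q[s][a], initialized to 0

def q_learning_update(s, a, r, s_next, actions, terminal=False):
    """Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]."""
    bootstrap = 0.0 if terminal else max(Q[s_next][a2] for a2 in actions)
    Q[s][a] += ALPHA * (r + GAMMA * bootstrap - Q[s][a])

# A single transition on a hypothetical two-state chain:
q_learning_update("s0", "right", -1.0, "s1", ["left", "right"])
```

With all values initialized to zero, this one update moves $Q(s_0, \text{right})$ halfway toward the one-step target $-1$.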
To see why, note that in QSA-learning the max is over actions, which the agent has perfect control over, and any uncertainty in the environment is integrated out by the expectation. In QSS-learning the max is over next states, which in stochastic environments are not perfectly predictable. In such environments the above equation does faithfully track a certain value, but it is best understood as a "best possible scenario value" --- the value of a current and subsequent state assuming that any stochasticity the agent experiences turns out as well as possible for the agent. Concretely, this means we assume that the agent can transition reliably (with probability 1) to any state $s'$ that it is possible (with probability $>0$) to reach from state $s$. Of course, this will not hold for stochastic domains in general, in which case QSS-learning does not track an actionable value. While this limitation may seem severe, we will demonstrate that the QSS formulation affords us a powerful tool for use in deterministic environments, which we develop in the remainder of this article. Henceforth we assume that the transition function is deterministic, and the empirical results that follow show our approach succeeding across a wide range of tasks.

We first consider the simple setting where we have access to an inverse dynamics model $I(s,s') \rightarrow a$ that returns an action $a$ that takes the agent from state $s$ to $s'$. We also assume access to a function $N(s)$ that outputs the neighbors of $s$. We use this as an illustrative example and will later formulate the problem without these assumptions. We define the Bellman update for QSS-learning as: $$\begin{align} Q(s,s') \leftarrow Q(s,s') + \alpha [r + \gamma \max_{s'' \in N(s')} Q(s',s'') - Q(s,s')]. \label{eqn:qss_learning} \end{align}$$ Note that $Q(s,s')$ is undefined when $s'$ is not a neighbor of $s$.
In order to obtain a policy, we define $\tau(s)$ as a function that selects a neighboring state of $s$ that maximizes QSS: $$\begin{equation} \tau(s) = \mathop{\mathrm{arg\,max}}_{s' \in N(s)} Q(s,s'). \end{equation}$$ In words, $\tau(s)$ selects states that have large value, and acts similarly to a policy over states. To obtain the policy over actions, we use the inverse dynamics model: $$\begin{equation} \pi(s) = I(s, \tau(s)). \label{eqn:inverse_dynamics} \end{equation}$$ This approach first finds the state $s'$ that maximizes $Q(s,s')$, and then uses $I(s,s')$ to determine the action that will take the agent there. We can rewrite Equation [\[eqn:qss_learning\]](#eqn:qss_learning){reference-type="ref" reference="eqn:qss_learning"} as: $$\begin{equation} Q(s,s') \leftarrow Q(s,s') + \alpha [r + \gamma Q(s', \tau(s')) - Q(s,s')]. \end{equation}$$
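The QSS update and $\tau(s)$ can be illustrated with a small tabular sketch. The code below runs QSS-learning with random exploration on a hypothetical 4x4 deterministic gridworld (a toy stand-in, not our 11x11 experimental setup); `neighbors` plays the role of $N(s)$:

```python
import random

# Toy deterministic gridworld: reward -1 per step until the goal is reached.
SIZE, GOAL, GAMMA, ALPHA = 4, (3, 3), 0.9, 0.5

def neighbors(s):
    """N(s): states reachable in one cardinal move, staying in bounds."""
    x, y = s
    cand = [(x + dx, y + dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]]
    return [(cx, cy) for cx, cy in cand if 0 <= cx < SIZE and 0 <= cy < SIZE]

Q = {}  # Q(s, s'), defined only for neighboring state pairs

def qss_update(s, s_next, r):
    """QSS Bellman update: bootstrap from the best successor of s'."""
    bootstrap = 0.0 if s_next == GOAL else max(
        Q.get((s_next, s2), 0.0) for s2 in neighbors(s_next))
    old = Q.get((s, s_next), 0.0)
    Q[(s, s_next)] = old + ALPHA * (r + GAMMA * bootstrap - old)

def tau(s):
    """Greedy successor: arg max over neighboring states of Q(s, .)."""
    return max(neighbors(s), key=lambda s2: Q.get((s, s2), 0.0))

random.seed(0)
for _ in range(500):              # episodes of uniformly random exploration
    s = (0, 0)
    for _ in range(50):
        s_next = random.choice(neighbors(s))
        qss_update(s, s_next, -1.0)
        if s_next == GOAL:
            break
        s = s_next
```

After training, `tau` proposes the goal as the next state from any adjacent cell, and the policy over actions would follow by applying an inverse dynamics model to that proposal.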
Learned values for tabular Q-learning in an 11x11 gridworld. The first two panels ($\max\limits_a Q(s,a)$ and $\max\limits_{s'} Q(s,s')$) show heatmaps of Q-values for QSA and QSS. The final panel shows the fractional difference $\frac{QSS - QSA}{|QSS|}$ between the learned values.
Let us now investigate the relation between the values learned by QSA and QSS.

::: theorem
**Theorem 1**. *QSA and QSS learn equivalent values in the deterministic setting.*
:::

::: proof
*Proof.* Consider an MDP with a deterministic state transition function and inverse dynamics function $I(s, s')$. QSS can be viewed as using QSA to solve the sub-MDP containing, in every state $s$, only the actions returned by $I(s, s')$: $$\begin{equation} Q(s, s') = Q(s, I(s, s')) \nonumber \end{equation}$$ Because the MDP solved by QSS is a sub-MDP of the one solved by QSA, there must always be at least one action $a$ for which $Q(s, a) \ge \max_{s'} Q(s, s')$. The original MDP may contain additional actions not returned by $I(s, s')$, but by our assumptions their return must be less than or equal to that of the action $I(s, s')$ leading to the same next state. Since this also holds in every state following $s$, we have: $$\begin{equation} Q(s, a) \le \max_{s'} Q(s, I(s, s'))\quad\text{for all }a \nonumber \end{equation}$$ Thus we obtain the following equivalence between QSA and QSS in deterministic environments: $$\begin{equation} \max_{s'} Q(s, s') = \max_a Q(s, a) \nonumber \end{equation}$$ This equivalence allows us to learn accurate action-values without dependence on the action space. ◻
:::
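Theorem 1 can also be checked numerically. The sketch below runs exact value iteration for both QSA and QSS on a small deterministic chain MDP (a hypothetical example; rewards depend only on the next state, per our assumption) and confirms $\max_a Q(s,a) = \max_{s'} Q(s,s')$ in every state:

```python
# Verify Theorem 1 with value iteration on a 5-state deterministic chain.
GAMMA = 0.9
N_STATES = 5                    # states 0..4; state 4 is terminal (the goal)
ACTIONS = {0: -1, 1: +1}        # left / right

def step(s, a):
    """Deterministic transition; reward depends only on the next state."""
    s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
    r = 0.0 if s2 == N_STATES - 1 else -1.0
    return s2, r

# QSA value iteration over (state, action) pairs.
qsa = {(s, a): 0.0 for s in range(N_STATES - 1) for a in ACTIONS}
for _ in range(200):
    for (s, a) in qsa:
        s2, r = step(s, a)
        v2 = 0.0 if s2 == N_STATES - 1 else max(qsa[(s2, b)] for b in ACTIONS)
        qsa[(s, a)] = r + GAMMA * v2

# QSS value iteration over neighboring (state, next state) pairs.
pairs = {(s, step(s, a)[0]) for s in range(N_STATES - 1) for a in ACTIONS}
qss = {p: 0.0 for p in pairs}
for _ in range(200):
    for (s, s2) in qss:
        r = 0.0 if s2 == N_STATES - 1 else -1.0
        v2 = 0.0 if s2 == N_STATES - 1 else max(
            qss[(s2, s3)] for (x, s3) in qss if x == s2)
        qss[(s, s2)] = r + GAMMA * v2

# Theorem 1: the maxima agree in every non-terminal state.
for s in range(N_STATES - 1):
    vmax_a = max(qsa[(s, a)] for a in ACTIONS)
    vmax_s = max(qss[(s, s2)] for (x, s2) in qss if x == s)
    assert abs(vmax_a - vmax_s) < 1e-9
```

The chain needs four steps from state 0 to the goal, so both formulations converge to $-1 - 0.9 - 0.81 = -2.71$ there.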
Learned values for tabular Q-learning in an 11x11 gridworld with stochastic transitions. The first two panels ($\max\limits_a Q(s,a)$ and $\max\limits_{s'} Q(s,s')$) show heatmaps of Q-values for QSA and QSS in a gridworld with 100% slippage. The final panel shows the Euclidean distance between the learned QSA and QSS values as the transitions become more stochastic (averaged over 10 seeds with 95% confidence intervals).
Tabular experiments in an 11x11 gridworld. The first three panels (QSA, QSS, and QSS + inverse dynamics) demonstrate the effect of redundant actions. The final panel shows how well QSS and QSA transfer to a gridworld with permuted actions. All experiments shown were averaged over 50 random seeds with 95% confidence intervals.
In simple settings where the state space is discrete, $Q(s,s')$ can be represented by a table. We use this setting to highlight some of the properties of QSS. In each experiment, we evaluate within a simple 11x11 gridworld where an agent, initialized at $\langle 0, 0 \rangle$, navigates in the cardinal directions and receives a reward of $-1$ until it reaches the goal. We first examine the values learned by QSS (Figure [3](#fig:heatmap){reference-type="ref" reference="fig:heatmap"}). The output of QSS increases as the agent approaches the goal, which indicates that QSS learns meaningful values for this task. Additionally, the difference between $\max_a Q(s, a)$ and $\max_{s'} Q(s, s')$ approaches zero as the values of QSS and QSA converge. Hence, QSS learns similar values to QSA in this deterministic setting. The next experiment measures the impact of stochastic transitions on learned QSS values. To investigate this property, we add a probability of slipping to each transition, where the agent takes a random action (i.e., slips into an unintended next state) some percentage of the time. First, we notice that the values learned by QSS when transitions have 100% slippage (completely random actions) are quite different from those learned by QSA (Figure [7](#fig:stochastic_values){reference-type="ref" reference="fig:stochastic_values"}). In fact, the values learned by QSS are similar to those of the previous experiment, where there was no stochasticity in the environment (Figure [2](#fig:model_q){reference-type="ref" reference="fig:model_q"}). As the transitions become more stochastic, the distance between the values learned by QSA and QSS grows substantially (Figure [6](#fig:euclidean_distance){reference-type="ref" reference="fig:euclidean_distance"}). This provides evidence that the formulation of QSS assumes the best possible transition will occur, causing values to be overestimated in stochastic settings.
We include further experiments in the appendix that measure how stochastic transitions affect the average episodic return. One benefit of training QSS is that transitions observed under one action can be used to learn values for another. Consider the setting where two actions in a given state transition to the same next state. QSA would need to make updates for both actions in order to learn their values, whereas QSS updates a single value for the transition, ignoring any redundancy in the action space. We further investigate this property in a gridworld with redundant actions. Suppose an agent has four underlying actions, up, down, left, and right, but each action is duplicated a number of times. As the number of redundant actions increases, the performance of QSA deteriorates, whereas QSS remains unaffected (Figure [12](#fig:redundant_actions){reference-type="ref" reference="fig:redundant_actions"}). We also evaluate how QSS is impacted when the inverse dynamics model $I$ is learned rather than given (Figure [12](#fig:redundant_actions){reference-type="ref" reference="fig:redundant_actions"}). We instantiate $I(s,s')$ as a set that is updated whenever a transition from $s$ to $s'$ under some action $a$ is observed. We sample from this set anytime $I$ is called, and return a uniformly random action over all redundant actions if $I(s,s')=\emptyset$. Even in this setting, QSS performs well because it only needs to learn about a single action that transitions from $s$ to $s'$. The final experiment in the tabular setting considers transfer to an environment where the meaning of actions has changed. We imagine this could be useful in environments where the physics are similar but the actions have been labeled differently. In this case, the QSS values should transfer directly, but not the inverse dynamics model, which would need to be retrained from scratch.
We trained QSA and QSS in an environment where the actions were labeled 0, 1, 2, and 3, then transferred the learned values to an environment where the labels were shuffled. We found that QSS learned much more quickly in the transferred environment than QSA (Figure [12](#fig:redundant_actions){reference-type="ref" reference="fig:redundant_actions"}). That is, we were able to retrain the inverse dynamics model more quickly than the values for QSA. Interestingly, QSA also learns quickly with the transferred values. This is likely because the transferred Q-table is initialized closer to the true values than a uniformly initialized one. We include an additional experiment in the appendix where taking the incorrect action has a larger impact on the return.

:::: algorithm
::: algorithmic
**Inputs:** Demonstrations or replay buffer $D$
Randomly initialize $Q_{\theta_1}, Q_{\theta_2}, \tau_\psi, I_\omega, f_\phi$
Initialize target networks $\theta'_1 \leftarrow \theta_1$, $\theta'_2 \leftarrow \theta_2$, $\psi' \leftarrow \psi$
**If** learning from demonstrations: sample from demonstration buffer $s, r, s' \sim D$
**Else:** take action $a \sim I(s, \tau(s)) + \epsilon$; observe reward and next state; store experience in $D$; sample from replay buffer $s, a, r, s' \sim D$
Compute $y = r + \gamma \min\limits_{i=1,2} Q_{\theta'_i} (s', C(s', \tau_{\psi'}(s')))$
// Update critic parameters: minimize $\mathcal{L}_\theta=\sum_i \Vert y - Q_{\theta_i}(s,s') \Vert$
// Update model parameters: compute $s'_f = C(s, \tau_\psi(s))$; minimize $\mathcal{L}_\psi = -Q_{\theta_1}(s, s'_f) + \beta \Vert \tau_\psi(s) - s'_f \Vert$
// Update target networks: $\theta' \leftarrow \eta \theta + (1-\eta)\theta'$; $\psi' \leftarrow \eta \psi + (1-\eta)\psi'$
// Update forward dynamics parameters:
**If** learning from demonstrations: minimize $\mathcal{L}_\phi = \Vert f_\phi(s,Q_{\theta'_1}(s,s')) - s' \Vert$
**Else:** minimize $\mathcal{L}_\phi = \Vert f_\phi(s,a) - s' \Vert$
// Update inverse dynamics parameters: minimize $\mathcal{L}_\omega = \Vert I_\omega(s,s') - a\Vert$
:::
[]{#ref:alg_d3g label="ref:alg_d3g"}
::::

:::: algorithm
::: algorithmic
**If** learning from demonstrations: $q = Q_\theta(s, s'_\tau)$; $s'_f = f_\phi(s, q)$
**Else:** $a = I_\omega(s, s'_\tau)$; $s'_f = f_\phi(s, a)$
:::
[]{#ref:alg_cycle label="ref:alg_cycle"}
::::

In contrast to domains where the state space is discrete and both QSA and QSS can be represented in a table, in continuous settings or environments with large state spaces we must approximate values with function approximation. One such approach is Deep Q-learning, which uses a deep neural network to approximate QSA [@mnih-2013-arXiv-playing-atari-with; @mnih2015human]. The loss is formulated as $\mathcal{L}_\theta=\Vert y - Q_\theta(s,a) \Vert$, where $y = r + \gamma \max_{a'} Q_{\theta'}(s',a')$. Here, $\theta'$ is a target network that stabilizes training. Training is further improved by sampling experience from a replay buffer, $s,a,r,s' \sim D$, to decorrelate the sequential data observed in an episode. Deep Deterministic Policy Gradient (DDPG) [@lillicrap2015continuous] applies Deep Q-learning to problems with continuous actions. Instead of computing a max over actions for the target $y$, it uses the output of a policy that is trained to maximize a critic $Q$: $y = r + \gamma Q_{\theta'}(s', \pi_{\psi'}(s'))$. Here, $\pi_\psi(s)$ is known as the actor and is trained using the following loss: $$\begin{align*} \mathcal{L}_\psi = -Q_\theta(s, \pi_\psi(s)). \end{align*}$$ This approach uses a target network $\theta'$ that is moved slowly towards $\theta$ by updating the parameters as $\theta' \leftarrow \eta \theta + (1-\eta)\theta'$, where $\eta$ determines how smoothly the parameters are updated. A target policy network $\psi'$ is also used when training $Q$, and is updated similarly to $\theta'$. Twin Delayed DDPG (TD3) is a more stable variant of DDPG [@fujimoto2018addressing].
One improvement is to delay the updates of the target networks and actor so that they are slower than the critic updates, by a delay parameter $d$. Additionally, TD3 utilizes Double Q-learning [@hasselt2010double] to reduce overestimation bias in the critic updates. Instead of training a single critic, this approach trains two and uses the minimum of the two when computing the target $y$: $$\begin{align*} y = r + \gamma \min_{i=1,2} Q_{\theta'_i} (s', \pi_{\psi'}(s')). \end{align*}$$ The loss for the critics becomes: $$\begin{align*} \mathcal{L}_\theta = \sum_i \Vert y - Q_{\theta_i}(s,a) \Vert. \end{align*}$$ Finally, Gaussian noise $\epsilon \sim \mathcal{N}(0,0.1)$ is added to the policy when sampling actions. We use each of these techniques in our own approach. A clear difficulty with training QSS in continuous settings is that it is not possible to iterate over an infinite state space to find a maximizing neighboring state. Instead, we propose training a model to directly output the state that maximizes QSS. We introduce an approach analogous to TD3 for training QSS, Deep Deterministic Dynamics Gradients (D3G). Like the deterministic policy gradient formulation $Q(s,\pi_\psi(s))$, D3G learns a model $\tau_\psi(s) \rightarrow s'$ whose predictions maximize $Q(s,\tau_\psi(s))$. To train the critic, we specify the loss as: $$\begin{align} \mathcal{L}_\theta=\sum_i \Vert y - Q_{\theta_i}(s,s') \Vert. \end{align}$$ Here, the target $y$ is specified as: $$\begin{align} y = r + \gamma \min_{i=1,2} Q_{\theta_i'}(s', {\tau}_{\psi'}(s')). \end{align}$$ Similar to TD3, we utilize two critics and a target network for $Q$ to stabilize training.
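The shape of the clipped double-Q target is easy to show in isolation. In this sketch the two target critics are stubbed out as simple fixed functions (stand-ins for learned networks; the second is deliberately biased upward to show why the minimum is taken):

```python
import numpy as np

gamma = 0.99

def critic_1(s, a):   # stand-in for Q_{theta'_1}: a fixed, known function
    return np.sum(s * a, axis=-1)

def critic_2(s, a):   # stand-in for Q_{theta'_2}, deliberately biased upward
    return np.sum(s * a, axis=-1) + 0.1

def clipped_double_q_target(r, s_next, a_next):
    # y = r + gamma * min_{i=1,2} Q_{theta'_i}(s', pi_{psi'}(s'))
    q_min = np.minimum(critic_1(s_next, a_next), critic_2(s_next, a_next))
    return r + gamma * q_min

r = np.array([1.0])
s_next = np.ones((1, 2))
a_next = np.ones((1, 2))       # stands in for the target policy's action
y = clipped_double_q_target(r, s_next, a_next)   # 1 + 0.99 * min(2.0, 2.1)
```

For D3G the same computation applies with the target policy's action replaced by the state proposed by $\tau_{\psi'}(s')$.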
We train $\tau$ to maximize the expected return, $J$, starting from any state $s$: $$\begin{align} \nabla_\psi J &= \mathbb{E}[\nabla_\psi Q(s, s')\vert_{s' = \tau_\psi(s)}] \\ &= \mathbb{E}[\nabla_{s'} Q(s, s')\, \nabla_\psi \tau_\psi(s)] && \text{[using chain rule]} \nonumber \end{align}$$ This can be accomplished by minimizing the following loss: $$\begin{align*} \mathcal{L}_\psi = -Q_\theta(s, \tau_\psi(s)). \end{align*}$$ We discuss in the next section how this formulation alone may be problematic. We additionally use a target network for $\tau$, which is updated as $\psi' \leftarrow \eta \psi + (1-\eta)\psi'$ for stability. As in the tabular case, $\tau_{\psi}(s)$ acts as a policy over states that aims to maximize $Q$, except now it is trained to do so by gradient descent. To obtain the necessary action, we apply an inverse dynamics model $I$ as before: $$\begin{equation} \pi(s) = I_\omega(s,\tau_\psi(s)). \end{equation}$$ Now, $I$ is trained as a neural network on data $\langle s,a,s' \rangle \sim D$. The loss is: $$\begin{equation} \mathcal{L}_\omega = \Vert I_\omega(s,s') - a\Vert. \end{equation}$$
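The dynamics gradient can be illustrated without neural networks: for a known critic, descending $\mathcal{L}_\psi$ moves $\tau_\psi(s)$ toward the critic-maximizing next state. Everything in this sketch (the quadratic critic, the goal point $g$, the linear model $\tau_\psi(s) = s + \psi$) is a hypothetical stand-in:

```python
import numpy as np

g = np.array([1.0, -2.0])           # hypothetical critic-maximizing next state

def Q(s, s_next):                   # known quadratic critic (a stand-in)
    return -np.sum((s_next - g) ** 2)

s = np.zeros(2)
psi = np.zeros(2)                   # tau_psi(s) = s + psi, a trivial "model"
lr = 0.1
for _ in range(200):
    s_next = s + psi
    grad_s_next = -2.0 * (s_next - g)     # nabla_{s'} Q(s, s')
    # chain rule: nabla_psi Q = nabla_{s'} Q * nabla_psi tau_psi,
    # and nabla_psi tau_psi = I for this linear model
    psi += lr * grad_s_next               # descend L_psi = -Q(s, tau_psi(s))
```

With a learned critic this same chain-rule gradient is what backpropagation computes through $Q_\theta(s, \tau_\psi(s))$.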
Illustration of the cycle consistency used for training D3G. Given a state $s$, $\tau(s)$ predicts the next state $s'_\tau$ (black arrow). The inverse dynamics model $I(s, s'_\tau)$ predicts the action that would yield this transition (blue arrows). A forward dynamics model $f_\phi(s, a)$ then takes the action and the current state to produce the next state, $s'_f$ (green arrows).
DDPG has been shown to overestimate the values of the critic, resulting in a policy that exploits this bias [@fujimoto2018addressing]. Similarly, with the current formulation of the D3G loss, $\tau(s)$ can suggest non-neighboring states for which the critic has overestimated the value. To overcome this, we regularize $\tau$ by ensuring the proposed states are reachable in a single step. In particular, we introduce an additional function for ensuring cycle consistency, $C(s, \tau_\psi(s))$ (see Algorithm [\[ref:alg_cycle\]](#ref:alg_cycle){reference-type="ref" reference="ref:alg_cycle"}), and use its output in place of $\tau$'s raw prediction during training. As shown in Figure [13](#fig:cycle){reference-type="ref" reference="fig:cycle"}, given a state $s$, we use $\tau(s)$ to predict the value-maximizing next state $s'_\tau$. We use the inverse dynamics model $I(s, s'_\tau)$ to determine the action $a$ that would yield this transition. We then plug that action into a forward dynamics model $f(s,a)$ to obtain the final next state, $s'_f$. In other words, we regularize $\tau$ to make predictions that are consistent with the inverse and forward dynamics models. To train the forward dynamics model, we minimize: $$\begin{equation} \mathcal{L}_\phi = \Vert f_\phi(s,a) - s' \Vert. \end{equation}$$ We can then compute the cycle loss for $\tau_\psi$: $$\begin{align} \mathcal{L}_\psi = -Q_\theta(s, C(s, \tau_\psi(s))) + \beta \Vert \tau_\psi(s) - C(s, \tau_\psi(s)) \Vert. \end{align}$$ The second regularization term further encourages prediction of neighbors. The final target for training $Q$ becomes: $$\begin{align} y = r + \gamma \min_{i=1,2} Q_{\theta_i'}(s', C(s', \tau_{\psi'}(s'))). \end{align}$$ We train each of these models concurrently. The full training procedure is described in Algorithm [\[ref:alg_d3g\]](#ref:alg_d3g){reference-type="ref" reference="ref:alg_d3g"}.
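A minimal sketch of the cycle $C(s, \tau(s))$, with $\tau$, $I$, and $f$ stubbed as simple deterministic functions (all hypothetical stand-ins): the proposed state is mapped by inverse dynamics onto the nearest valid action, and the forward model then re-projects it onto a reachable neighbor:

```python
import numpy as np

def tau(s):
    """Stand-in tau: proposes a (possibly unreachable) next state."""
    return s + np.array([0.7, 1.3])

def inverse_dynamics(s, s_next):
    """Stand-in I(s, s'): snap the proposed step to the nearest unit move."""
    delta = s_next - s
    actions = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
    return actions[np.argmin(np.linalg.norm(actions - delta, axis=1))]

def forward_dynamics(s, a):
    """Stand-in f(s, a): deterministic one-step dynamics."""
    return s + a

def C(s, s_tau):
    """Cycle s -> tau -> I -> f: returns a next state the dynamics can reach."""
    a = inverse_dynamics(s, s_tau)
    return forward_dynamics(s, a)

s = np.zeros(2)
s_proposed = tau(s)            # [0.7, 1.3]: not one step away from s
s_cycled = C(s, s_proposed)    # projected onto a reachable neighbor
```

The regularization term $\beta \Vert \tau_\psi(s) - C(s, \tau_\psi(s)) \Vert$ then penalizes exactly the gap between `s_proposed` and `s_cycled`.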
We found it useful to train the models $\tau_\psi$ and $f_\phi$ to predict the difference between states, $\Delta = s' - s$, rather than the next state itself, as has been done in several other works [@nagabandi2018neural; @goyal2018recall; @edwards2018forward]. As such, we compute $s'_\tau = s + \tau(s)$ to obtain the next-state prediction from $\tau(s)$, and $s'_f = s + f(s,a)$ to obtain the next-state prediction from $f(s,a)$. We note this implementation detail here for clarity.