\n",
"\n",
"This notebook describes a utility function included in the MuJoCo Python library performing box-bounded nonlinear least squares optimization. We provide some theoretical background, describe our implementation and show example usage.\n",
"\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "zhVv8-0Tvlrl"
},
"source": [
"# All imports"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"id": "zT4UpIXCvsia"
},
"outputs": [],
"source": [
"# Install mujoco.\n",
"!pip install mujoco\n",
"from google.colab import files\n",
"import distutils.util\n",
"import os\n",
"import subprocess\n",
"if subprocess.run('nvidia-smi').returncode:\n",
" raise RuntimeError(\n",
" 'Cannot communicate with GPU. '\n",
" 'Make sure you are using a GPU Colab runtime. '\n",
" 'Go to the Runtime menu and select Choose runtime type.')\n",
"\n",
"# Add an ICD config so that glvnd can pick up the Nvidia EGL driver.\n",
"# This is usually installed as part of an Nvidia driver package, but the Colab\n",
"# kernel doesn't install its driver via APT, and as a result the ICD is missing.\n",
"# (https://github.com/NVIDIA/libglvnd/blob/master/src/EGL/icd_enumeration.md)\n",
"NVIDIA_ICD_CONFIG_PATH = '/usr/share/glvnd/egl_vendor.d/10_nvidia.json'\n",
"if not os.path.exists(NVIDIA_ICD_CONFIG_PATH):\n",
" with open(NVIDIA_ICD_CONFIG_PATH, 'w') as f:\n",
" f.write(\"\"\"{\n",
" \"file_format_version\" : \"1.0.0\",\n",
" \"ICD\" : {\n",
" \"library_path\" : \"libEGL_nvidia.so.0\"\n",
" }\n",
"}\n",
"\"\"\")\n",
"#\n",
"# Configure MuJoCo to use the EGL rendering backend (requires GPU)\n",
"print('Setting environment variable to use GPU rendering:')\n",
"%env MUJOCO_GL=egl\n",
"\n",
"try:\n",
" print('Checking that the installation succeeded:')\n",
" import mujoco\n",
" from mujoco import minimize\n",
" from mujoco import rollout\n",
" mujoco.MjModel.from_xml_string('')\n",
"except Exception as e:\n",
" raise e from RuntimeError(\n",
" 'Something went wrong during installation. Check the shell output above '\n",
" 'for more information.\\n'\n",
" 'If using a hosted Colab runtime, make sure you enable GPU acceleration '\n",
" 'by going to the Runtime menu and selecting \"Choose runtime type\".')\n",
"print('MuJoCo installation successful.')\n",
"\n",
"print('Installing mediapy:')\n",
"!command -v ffmpeg >/dev/null || (apt update && apt install -y ffmpeg)\n",
"!pip install -q mediapy\n",
"#\n",
"from IPython.display import clear_output\n",
"clear_output()\n",
"\n",
"# Other imports.\n",
"import mediapy as media\n",
"import time\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"from matplotlib.patches import Rectangle\n",
"from typing import Tuple, Optional, Union\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "N94RGEvkL4z6"
},
"source": [
"# Background\n",
"\n",
"We begin with a quick primer. If you know this stuff, feel free to skip this section."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FcrX_sv6kjlh"
},
"source": [
"## Newton's method\n",
"Nonlinear optimization (unconstrained, for now) means finding the value of a vector **decision variable** $x \\in \\mathbb R^n$, which minimizes\n",
"the **objective** $f(x): \\mathbb R^n \\rightarrow \\mathbb R$, here assumed to be a smooth function. This is often written as $$x^* = \\arg \\min_x f(x).$$\n",
"\n",
"Second-order optimization, a.k.a Newton's method, is an iterative procedure that finds a sequence of candidates $x_k$ which reduce the value of $f(x_k)$, until convergence to the minimum. At every iteration we measure the local 1st and 2nd derivatives of $f(\\cdot)$, called the **gradient** vector $g = \\nabla f(x_k)$ and the **Hessian** matrix $H = \\nabla^2 f(x_k)$. With these we have a local quadratic approximation of $f(x_k)$:\n",
"$$\n",
"f(x_k + \\delta x) \\approx f(x_k) + \\delta x^T g + \\frac{1}{2} \\delta x^T H\\delta x\n",
"$$\n",
"By differentiating with respect to $\\delta x$, equating to 0 and solving, we hope to jump to the minimium of the quadratic, giving us the next candidate:\n",
"$$\n",
"x_{k+1} = x_k + \\delta x = x_k - H^{-1}g\n",
"$$"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JU7bHvtJkuKP"
},
"source": [
"## Levenberg-Marquardt regularization\n",
"Raw Newton's method, in its form above, is not a good idea. There is no guarantee that the candidates $x_{k}$ will converge or indeed that the objective will be reduced at all. This because\n",
"1. The local quadratic approximation can be very bad (valid only for a small neighborhood).\n",
"2. Even if the approximation is good, $H$ could be negative-definite (or indefinite) rather than positive-definite. In this case we would be jumping to the *maximum* (or *saddle-point*) rather than the *minimum* of the quadratic, taking the situation from bad to worse.\n",
"\n",
"Levenberg-Marquardt (LM) regularization solves both these problems by introducing a regularization parameter $\\mu \\ge 0$:\n",
"$$\n",
"\\delta x_{LM} = - (H + \\mu I)^{-1}g\n",
"$$\n",
"For $\\mu=0$ we get the classic Newton step. For $\\mu$ so large that $\\mu I \\gg H$, we get $\\delta x \\approx -\\tfrac{1}{\\mu}g$: a small gradient step, which is guaranteed to reduce the objective for a large enough $\\mu$. Run the cell below for a visualization of LM regularization. For the simple 2D quadratic and the initial guess\n",
"$$\n",
"f(x) = \\frac{1}{2}x^T \\begin{pmatrix} 2 & 0.9\\\\ 0.9 & 1 \\end{pmatrix} x,\n",
"\\qquad x_0 = \\begin{pmatrix} -0.9\\\\ 0.5 \\end{pmatrix},\n",
"$$\n",
"We plot the trajectory of $x_1(\\mu)$, varying the parameter logarithmically: $\\mu = 10^{-4} \\ldots 10^2$"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"cellView": "form",
"id": "S59y7eGbH8sv"
},
"outputs": [],
"source": [
"#@title LM curve visualization\n",
"\n",
"# Problem setup.\n",
"H = np.array([[2, 0.9], [0.9, 1]])\n",
"x0 = np.array((-0.9, 0.5))\n",
"\n",
"# Logarithmically spaced mu regularizer values.\n",
"lm_values = 10**np.linspace(-4, 2, 40)\n",
"\n",
"# Sequence of next candidates x1(mu).\n",
"g0 = H.dot(x0)\n",
"dx = np.asarray([-np.linalg.inv(H + mu*np.eye(2)).dot(g0) for mu in lm_values])\n",
"x1 = x0 + dx\n",
"\n",
"# Grid of function values, contour plot.\n",
"res = 400\n",
"X,Y = np.meshgrid(np.linspace(-1, 1, res),\n",
" np.linspace(-1, 1, res))\n",
"Z = 0.5 * (H[0,0]*X**2 + H[1,1]*Y**2 + 2*H[1,0]*X*Y)\n",
"fig = plt.figure(figsize=(10,10))\n",
"contours = np.linspace(0, np.sqrt(Z.max()), 20)**2\n",
"plt.contour(X, Y, Z, contours, linewidths=0.5)\n",
"\n",
"# Draw x0, actual minimum, x1(mu).\n",
"point_spec = {'markersize':10, 'markeredgewidth':2, 'fillstyle':'none',\n",
" 'marker':'o', 'linestyle':'none'}\n",
"curve_spec = {'markersize':1.5, 'linewidth':1, 'color':'red'}\n",
"plt.plot(x0[0], x0[1], color='grey', **point_spec)\n",
"plt.plot(0, 0, color='pink', **point_spec)\n",
"plt.plot(x1[:,0], x1[:,1], '-o', **curve_spec)\n",
"\n",
"# Finalize figure.\n",
"plt.title('Levenberg–Marquardt curve: $x_1(\\mu),\\; \\mu = 10^{-4} \\ldots 10^2$')\n",
"plt.xlabel('x')\n",
"plt.ylabel('y')\n",
"plt.legend(['$x_0$', 'minimum', 'LM trajectory'])\n",
"plt.rc('font', family='serif')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qYlpqaquCNh0"
},
"source": [
"In the plot above we can see the \"LM curve\", starting at the minimum of the quadratic for small $\\mu$ and then, as $\\mu$ grows, curving towards $x_0$, approaching it along the gradient. Powell's dog-leg trajectory can be considered a piecewise-linear approximation of the LM curve.\n",
"\n",
"Importantly, note how the points clump at the ends of the curve. The range of $\\mu$ values where the curve is \"valuable\", enabling meaningful search, is rather narrow, especially considering that we are varying it by 6 orders of magnitude. This implies that careful control of $\\mu$ is critical for efficiency of convergence."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "NFa0PO1bJg3h"
},
"source": [
"## Box constraints\n",
"Additional stabillity and robustness can be achieved by limiting the search space. An important type of constraint is the box constraint:\n",
"$$\n",
"\\begin{aligned}\n",
"x^* &= \\arg \\min_x f(x)\\\\\n",
"\\textrm{s.t.} &\\quad l \\preccurlyeq x \\preccurlyeq u\n",
"\\end{aligned}\n",
"$$\n",
"Where $l$ and $u$ are respectively lower and upper bound vectors and the inequalities $l \\preccurlyeq x \\preccurlyeq u$ are read elementwise. Solving for the minimizing $\\delta x$ is now a bit more involved than solving a linear system, and the optimization method is now referred to as [Sequential Quadratic Programming](https://en.wikipedia.org/wiki/Sequential_quadratic_programming) (SQP). Fortunately MuJoCo has an efficient box-constrained QP solver: the [mju_boxQP](https://mujoco.readthedocs.io/en/latest/APIreference/APIfunctions.html#mju-boxqp) function."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Ag4L4GeX4iMV"
},
"source": [
"## Least Squares and the Gauss-Newton Hessian\n",
"\n",
"Additional simplification is achieved by a structured objective: $f(x) = \\frac{1}{2} r(x)^T\\cdot r(x)$, which is called the *Least Squares* objective. The vector function $r(x): \\mathbb R^n \\rightarrow \\mathbb R^m$, is called the **residual**, and the optimization problem is referred to as *Nonlinear Least Squares*. For readers with a statistics background, the Least Squares problem lends itself to reinterpertation as **Maximum Likelihood Estimation** of a Gaussian posterior $x^* = \\arg \\max_x p(x)$ with $p(x) \\sim e^{-\\frac{1}{2}r(x)^Tr(x)}$. To see why the least-squares structure is beneficial, consider the gradient and Hessian of the objective. Letting $J$ be the $n \\times m$ **Jacobian** matrix of the residual $J = \\nabla r(x)$ we have\n",
"$$\n",
"\\begin{align}\n",
"g &= J^Tr\\\\\n",
"H &= J^TJ + \\nabla J \\cdot r\n",
"\\end{align}\n",
"$$\n",
"The *Gauss-Newton* approximation of the Hessian $H$ involves dropping the second term: $H \\approx H_{GN}= J^TJ$. This approximation has two obvious benefits:\n",
"1. The approximate Hessian $H_{GN}$, being a matrix square, cannot be negative-definite and is at least semidefinite. For any $\\mu \\gt 0$, the LM-regularized matrix $H_{GN} + \\mu I$ is guaranteed to be Symmetric Positive Definite (SPD).\n",
"2. In order to compute $H_{GN}$ we only require the Jacobian $J$. In other words, we get the goodness of 2nd-order optimization with only 1st-order derivatives.\n",
"\n",
"This completes the background section and allows us to accurately describe the function explored in the rest of the notebook. "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "IaaI2ZIY5FIw"
},
"source": [
"## Least Norm Generalization\n",
"\n",
"The Least Squares problem described above can be generalized as follows. Instead of the quadratic norm $\\frac{1}{2} r(x)^T\\cdot r(x)$, we can use some other smooth, convex norm $f(x) = n(r(x))$. Letting $\\nabla n = \\frac{\\partial n}{\\partial r}$ and $\\nabla^2 n = \\frac{\\partial^2 n}{\\partial r^2}$ be respectively the gradient and Hessian of $n$ with respect to $r$, we have\n",
"$$\n",
"\\begin{align}\n",
"g &= J^T\\cdot\\nabla n\\\\\n",
"H_{GN} &= J^T\\cdot \\nabla^2 n \\cdot J\n",
"\\end{align}\n",
"$$\n",
"It is easy to verify that for the quadratic norm, these expression reduce to the ones above as $\\nabla (\\frac{1}{2}r^T\\cdot r) = r$ and $\\nabla^2 (\\frac{1}{2}r^T\\cdot r) = I_m$.\n",
"\n",
"This completes the background section and allows us to accurately describe the function explored in the rest of the notebook. "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "nqqtSIkGe-ay"
},
"source": [
"# `minimize.least_squares`\n",
"\n",
"The `minimize.least_squares` function in the `mujoco` Python library solves the Box-Constrained Nonlinear Least Squares problem. It uses the Gauss-Newton Hessian approximation, and takes Levenberg-Marquardt search steps, while adjusting the LM parameter $\\mu$."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "jXuA25sJydAy"
},
"source": [
"## Motivation and assumptions\n",
"\n",
"The `minimize.least_squares()` function is motivated by two problems:\n",
"\n",
"- The **System Identification** (sysID) problem is the main use case. SysID involves finding the parameters of a simulated model to better match the behaviour of the real system. In the notation of the previous section, this problem can be cast as a Least Squares problem with the decision variable $x$ corresponding to the model parameters one wishes to identify and the residual $r$ corresponding to the difference between measured and simulated sensor values. A full sysID tutorial with examples will be made available soon.\n",
"- The **Inverse Kinematics** (IK) problem. In this case the decision variables are joint angles, while the residual is the pose difference between an end-effector and some desired pose. The IK problem was less central to our design choices, yet is solved very efficiently with our code.\n",
"\n",
"This motivation led us to the following assumptions and design choices.\n",
"\n",
"1. The dimension of the decision variable is small: $\\dim(x) \\lesssim 100$. While Cholesky factorization (the main cost in the QP solver) scales cubically in $\\dim(x)$, it is very fast at these sizes.\n",
"2. The dimension of the residual vector $r$ (corresponding to the number of measured sensor values, in the sysID case) can be much larger.\n",
"3. Since each evaluation of the residual involves rolling out the physics to obtain simulated sensor values, computing $r(x)$ is the most expensive part of the optimization.\n",
"4. Analytic Jacobians $J = \\frac{\\partial r}{\\partial x}$ are usually not available and must be obtained with **finite-differencing**.\n",
"5. Due to the sematics of $x$, box-bounds are usually sufficient (for example, masses and friction coefficients cannot be negative, joint angles should not exceed their limits).\n",
"6. The implementation should be efficient yet readable. The [least_squares function](https://github.com/google-deepmind/mujoco/blob/main/python/mujoco/minimize.py) takes up ~250 lines of code.\n",
"\n",
"Let's look at the function's docstring and then discuss some implementation notes."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"id": "IL1Us09FrXHP"
},
"outputs": [],
"source": [
"print(minimize.least_squares.__doc__)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ppWHo99NqW9U"
},
"source": [
"## Implementation notes\n",
"\n",
"1. The residual funciton must be vectorized: besides taking a column vector $x$ and returning the residual $r(x)$, it must accept an $n\\times k$ matrix $X$, returning an $m\\times k$ matrix $R$. The vectorized format is used by the internal finite-difference implementation and can be exploited to speed up the minimization by using multi-threading inside the residual function implementation.\n",
"1. Bounds must be `None` or fully specified for all dimensions of $x$.\n",
"1. The `jacobian` callback can be supplied by the user and is finite-differenced otherwise. Note that this callback is not made available in the case the user knows the analytic Jacobian (this is very rare in the sysID context), but in case the user wants to implement their own multi-threaded fin-diff callback.\n",
"1. Automatic forward/backward differencing, chosen to avoid crossing the bounds, with optional central differencing. The fin-diff epsilon `eps` is scaled by the size of the bounds, if provided.\n",
"1. The termination criterion is based on small step size $||\\delta x|| < \\textrm{tol}$.\n",
"1. We use the simple yet affective $\\mu$-search strategy described in [Bazaraa et-al.](https://onlinelibrary.wiley.com/doi/book/10.1002/0471787779). Backtracking $\\mu$-increases are *careful*, attempting to find the smallest $\\mu$ where sufficient reduction is found. $\\mu$-decreases are *aggressive*, allowing fast quadratic convergence to a local minimum.\n",
"1. The user may optionally provide a `norm` different than the quadratic norm (the default), this is covered in more detail below."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "MvnHwh2ZgGT2"
},
"source": [
"# Toy examples"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"cellView": "form",
"id": "ybrntfKZyTaQ"
},
"outputs": [],
"source": [
"# @title Plotting utility\n",
"def plot_2D(residual, name, plot_range, true_minimum, trace=None, bounds=None):\n",
" n = 400\n",
" x_range, y_range = plot_range\n",
" x_grid = np.linspace(x_range[0], x_range[1], n)\n",
" y_grid = np.linspace(y_range[0], y_range[1], n)\n",
" X, Y = np.meshgrid(x_grid, y_grid)\n",
"\n",
" R = residual(np.stack((X, Y)))\n",
" Z = 0.5 * np.sum(R**2, axis=0)\n",
"\n",
" fig = plt.figure(figsize=(10, 10))\n",
" cntr_levels = np.linspace(0, np.log1p(Z.max()), 30)\n",
" plt.contour(X, Y, np.log1p(Z), cntr_levels, linewidths=0.5)\n",
" plt.title(name)\n",
" plt.xlabel('x')\n",
" plt.ylabel('y')\n",
" plt.rc('font', family='serif')\n",
"\n",
" # Draw bounds.\n",
" hbounds = None\n",
" if bounds is not None:\n",
" lower, upper = bounds\n",
" width = upper[0] - lower[0]\n",
" height = upper[1] - lower[1]\n",
" rect = Rectangle(lower, width, height, edgecolor='blue', facecolor='none',\n",
" fill=False, lw=1)\n",
" hbounds = fig.axes[0].add_patch(rect)\n",
"\n",
" # Draw global minimum, initial point\n",
" point_spec = {'markersize': 10, 'markeredgewidth': 2, 'fillstyle': 'none',\n",
" 'marker': 'o', 'linestyle': 'none'}\n",
" curve_spec = {'marker': 'o', 'markersize': 2, 'linewidth': 1.5,\n",
" 'color': 'red', 'alpha':0.5}\n",
" final_spec = {'marker': 'o', 'markersize': 4, 'color': 'green',\n",
" 'linestyle': 'none'}\n",
" hmin = plt.plot(true_minimum[0], true_minimum[1], color='pink', **point_spec)\n",
"\n",
" def add_curve(t):\n",
" x0 = plt.plot(t[0, 0], t[0, 1], color='grey', **point_spec)\n",
" traj = plt.plot(t[:, 0], t[:, 1], **curve_spec)\n",
" final = plt.plot(t[-1, 0], t[-1, 1], **final_spec)\n",
" return x0, traj, final\n",
"\n",
" if trace is not None:\n",
" if isinstance(trace, list):\n",
" for t in trace:\n",
" x0, traj, final = add_curve(t)\n",
" else:\n",
" x0, traj, final = add_curve(trace)\n",
"\n",
" handles = [x0[0], traj[0], hmin[0], final[0]]\n",
" labels = ['$x_0$', 'optimization trace', 'true minimum', 'solution']\n",
" if hbounds is not None:\n",
" handles.append(hbounds)\n",
" labels.append('box bounds')\n",
" plt.legend(handles, labels)\n",
"\n",
" plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "l64G6VcnWQ2P"
},
"source": [
"## Rosenbrock function\n",
"\n",
"The classic [Rosenbrock test function](https://en.wikipedia.org/wiki/Rosenbrock_function) can be written in least-squares form:\n",
"\n",
"$f(x,y)=r^Tr,$ with the residual\n",
"$r(x,y)=\\begin{pmatrix} 1-x \\\\ 10\\cdot(y-x^2) \\end{pmatrix}$.\n",
"\n",
"Note that here in the 2D example section, we use $x, y$ to denote the two coordinates of our decision variable. Let's see `minimize.least_squares` find the minimum:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"id": "OYpX8d2Z60Xq"
},
"outputs": [],
"source": [
"# Minimize Rosenbrock function.\n",
"def rosenbrock(x):\n",
" return np.stack([1 - x[0, :], 10 * (x[1, :] - x[0, :] ** 2)])\n",
"\n",
"x0 = np.array((0.0, 0.0))\n",
"x, rb_trace = minimize.least_squares(x0, rosenbrock);"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"cellView": "form",
"id": "Cw8ki9WHwB_j"
},
"outputs": [],
"source": [
"#@title Visualize Rosenbrock solution\n",
"\n",
"plot_range = (np.array((-0.5, 1.5)), np.array((-1.5, 2.)))\n",
"minimum = (1, 1)\n",
"points = np.asarray([t.candidate for t in rb_trace])\n",
"plot_2D(rosenbrock, 'Rosenbrock Function', plot_range, minimum, trace=points)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qvGC6U0T2DE9"
},
"source": [
"## Beale function\n",
"\n",
"A somewhat more elaborate example is the [Beale function](https://en.wikipedia.org/wiki/Test_functions_for_optimization#Test_functions_for_single-objective_optimization):\n",
"\n",
"$f(x,y)=r^Tr,$ with the residual\n",
"$r(x,y)=\\begin{pmatrix} 1.5-x+xy \\\\ 2.25-x+xy^2 \\\\ 2.625-x+xy^3 \\end{pmatrix}$."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"cellView": "form",
"id": "BQuBr8UeTucX"
},
"outputs": [],
"source": [
"#@title Minimize, visualize Beale\n",
"def beale(x):\n",
" return np.stack((1.5-x[0, :]+x[0, :]*x[1, :],\n",
" 2.25-x[0, :]+x[0, :]*x[1, :]*x[1, :],\n",
" 2.625-x[0, :]+x[0, :]*x[1, :]*x[1, :]*x[1, :]))\n",
"\n",
"x0 = np.array((-3.0, -3.0))\n",
"x, bl_trace = minimize.least_squares(x0, beale)\n",
"\n",
"# Visualize solution.\n",
"plot_range = (np.array((-4., 4.)), np.array((-4., 4.)))\n",
"minimum = (3., 0.5)\n",
"points = np.asarray([t.candidate for t in bl_trace])\n",
"plot_2D(beale, 'Beale Function', plot_range, minimum, trace=points)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "GmNQKK9GecOz"
},
"source": [
"## Box bounds\n",
"\n",
"Let's see solutions for these problems with box bounds:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"cellView": "form",
"id": "bBYet9hWkGZv"
},
"outputs": [],
"source": [
"#@title Rosenbrock with bounds\n",
"\n",
"# Choose bounds.\n",
"lower = np.array([-.3, -1.])\n",
"upper = np.array([0.9, 1.9])\n",
"bounds = [lower, upper]\n",
"\n",
"# Make some initial points, minimize, save taces.\n",
"num_points = 4\n",
"px = np.linspace(lower[0]+.1, upper[0]-.1, num_points)\n",
"py = np.linspace(lower[1]+.1, upper[1]-.1, num_points)\n",
"traces = []\n",
"for i in range(num_points):\n",
" for j in range(num_points):\n",
" x0 = np.array((px[j], py[num_points-i-1]))\n",
" _, trace = minimize.least_squares(x0, rosenbrock, bounds, verbose=0)\n",
" traces.append(np.asarray([t.candidate for t in trace]))\n",
"\n",
"# Plot.\n",
"plot_range = (np.array((-0.5, 1.5)), np.array((-1.5, 2.5)))\n",
"minimum = (1, 1)\n",
"plot_2D(rosenbrock, 'Rosenbrock Function', plot_range, minimum, traces, bounds)"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"cellView": "form",
"id": "8AGuQmQDxgeV"
},
"outputs": [],
"source": [
"#@title Beale with bounds\n",
"\n",
"# Choose bounds.\n",
"lower = np.array([-2, -1.3])\n",
"upper = np.array([1.5, 3.])\n",
"bounds = [lower, upper]\n",
"\n",
"# Make some initial points, minimize, save taces.\n",
"num_points = 5\n",
"p0 = np.linspace(lower[0]+.4, upper[0]-.4, num_points)\n",
"p1 = np.linspace(lower[1]+.4, upper[1]-.4, num_points)\n",
"traces = []\n",
"\n",
"for i in range(num_points):\n",
" for j in range(num_points):\n",
" x0 = np.array((p0[j], p1[i]))\n",
" x, trace = minimize.least_squares(x0, beale, bounds, verbose=0)\n",
" traces.append(np.asarray([t.candidate for t in trace]))\n",
"\n",
"# Plot.\n",
"plot_range = (np.array((-3., 3.5)), np.array((-3., 4.)))\n",
"minimum = (3., 0.5)\n",
"plot_2D(beale, 'Beale Function', plot_range, minimum, traces, bounds)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "X1anTzAVGzDj"
},
"source": [
"## n-dimensional Rosenbrock\n",
"\n",
"An $n$-dimensional generalization of the 2D rosenbrock function is\n",
"$$\n",
"f(x) = \\sum_{i=1}^{n-1} \\left[ 100 \\cdot (x_{i+1} - x_{i}^{2})^{2} + \\left(1 - x_{i}\\right)^{2}\\right],\n",
"$$\n",
"which can also be written in Least Squares form and solved by our optimizer:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"id": "_1B8r9z30Hgc"
},
"outputs": [],
"source": [
"n = 20\n",
"\n",
"def rosenbrock_n(x):\n",
" res0 = [1 - x[i, :] for i in range(n - 1)]\n",
" res1 = [10 * (x[i, :] - x[i + 1, :] ** 2) for i in range(n - 1)]\n",
" return np.asarray(res0 + res1)\n",
"\n",
"x0 = np.zeros(n)\n",
"x, _ = minimize.least_squares(x0, rosenbrock_n);\n",
"\n",
"# Expected solution is a vector of 1's\n",
"assert np.linalg.norm(x-1) < 1e-8"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "hadCo2CTe_T7"
},
"source": [
"# Inverse Kinematics\n",
"\n",
"Forward kinematics means computing the Cartesian poses of articulated bodies, given joint angles. This computation is easy and well defined. Inverse Kinematics (IK) tries to do the opposite: Given a desired Cartesian pose of an articulated body (usually called the **end effector**), find the joint angles which put the end effector at this pose. The IK problem can easily be over or under determined:\n",
"- Over-determined: The end effector cannot reach the desired pose at all, or can match either position or orientation, but not both.\n",
"- Under-detemined: The articulated chain has more than the minimum required number of degrees-of-freedom (dofs), so after reaching the target there is still a free subspace (imagine rotating your eblow while your hand and shoulder are fixed).\n",
"\n",
"We adress both of these situations in our implementation below, but first mention features which are currently **not implemented**, but could be added in the future.\n",
"1. We currently support only one end-effector and target pose. Adding mutiple targets would be straightforward.\n",
"2. Our implementation currently does not support quaternions in the kinematic chain (ball and free joints). Adding this is possible and would require a small modification to the `least_squares` function (explicit support for non-Cartesian tangent spaces).\n",
"3. We only give examples with simple box bounds, which in the IK context means joint range limits. In order to take into account constraints like collision avoidance, one would need to make use of MuJoCo's constraint Jacobians. Please let us know if you'd like to see an example of this.\n",
"\n",
"## 7-dof arm\n",
"\n",
"Below, we copy the MuJoCo Menagerie's model of Franka's [Panda](https://github.com/google-deepmind/mujoco_menagerie/tree/main/franka_emika_panda#readme), with the following modifications:\n",
"1. Intergrated the [scene](https://github.com/google-deepmind/mujoco_menagerie/blob/main/universal_robots_ur5e/scene.xml) elements into a single XML.\n",
"2. Added a \"target\" mocap body and site with 3 colored, non-colliding geoms that overlap with the site frame."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"cellView": "form",
"id": "uWLYJUe38i15"
},
"outputs": [],
"source": [
"#@title Panda XML\n",
"panda = '''\n",
"\n",
" \n",
"\n",
" \n",
"'''"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"id": "dK9fIoB9EF1P"
},
"outputs": [],
"source": [
"#@title Get Panda assets\n",
"!git clone https://github.com/google-deepmind/mujoco_menagerie\n",
"from os import walk\n",
"from os.path import join\n",
"assets_path = 'mujoco_menagerie/franka_emika_panda/assets/'\n",
"asset_names = next(walk(assets_path), (None, None, []))[2]\n",
"assets = {}\n",
"for name in asset_names:\n",
" with open(join(assets_path, name), 'rb') as f:\n",
" assets[name] = f.read()\n"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"id": "g_5007tHfX9c"
},
"outputs": [],
"source": [
"#@title Load model, render {vertical-output: true}\n",
"model = mujoco.MjModel.from_xml_string(panda, assets)\n",
"data = mujoco.MjData(model)\n",
"\n",
"# Reset the state to the \"home\" keyframe.\n",
"key = mujoco.mj_name2id(model, mujoco.mjtObj.mjOBJ_KEY, 'home')\n",
"mujoco.mj_resetDataKeyframe(model, data, key)\n",
"mujoco.mj_forward(model, data)\n",
"\n",
"# If a renderer exists, close it.\n",
"if 'renderer' in locals():\n",
" renderer.close()\n",
"\n",
"# Make a Renderer and a camera.\n",
"renderer = mujoco.Renderer(model, height=360, width=480)\n",
"camera = mujoco.MjvCamera()\n",
"mujoco.mjv_defaultFreeCamera(model, camera)\n",
"camera.distance = 1.7\n",
"camera.elevation = -15\n",
"camera.azimuth = -130\n",
"camera.lookat = (0, 0, .3)\n",
"\n",
"# Visualize site frames and labels\n",
"voption = mujoco.MjvOption()\n",
"voption.frame = mujoco.mjtFrame.mjFRAME_SITE\n",
"voption.label = mujoco.mjtLabel.mjLABEL_SITE\n",
"renderer.update_scene(data, camera, voption)\n",
"voption.label = mujoco.mjtLabel.mjLABEL_NONE\n",
"\n",
"media.show_image(renderer.render())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "IdqEhtvqNlS_"
},
"source": [
"Note how the target site frame has boxes along the axes, to make it visually distinct from the end-effector frame."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "kSLQYCQJfiB6"
},
"source": [
"## Orientation weighting\n",
"\n",
"A common feature of the IK problem is that matching poses requires matching both positions and orientations. However, these residuals have different units. Position differences are in units of length (say, Meters), while orientation differences (log-map of a quaternion difference) are in unitless Radians. The user needs to provide a conversion factor denoting how much they \"care\" about matching orientation vs. position. Since this factor has units of length we call it `radius`, which can be thought of as the radius of a sphere around the effector frame on the surface of which angular errors are measured."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Bb9vSXf0o0mh"
},
"source": [
"## IK regularization\n",
"\n",
"Our Panda arm has 7 degrees-of-freedom (dofs), making the IK problem under-determined, since the IK problem has 6 constraints (3 translation + 3 rotation). In order to enforce a single solution, we introduce a fixed regularizer attracting the configuration to some base pose."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "wFowIXLcqaIC"
},
"source": [
"## Problem definition\n",
"\n",
"We are now in a position to see the problem definition:"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0cxn1XcnK2Eu"
},
"source": [
"### Bounds, initial guess"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"id": "WIpghcP0qp7D"
},
"outputs": [],
"source": [
"# Bounds at the joint limits.\n",
"bounds = [model.jnt_range[:, 0], model.jnt_range[:, 1]]\n",
"\n",
"# Inital guess is the 'home' keyframe.\n",
"x0 = model.key('home').qpos"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "NDE6ja_-K-EX"
},
"source": [
"### IK residual"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"id": "NZ3qaI0x6hFH"
},
"outputs": [],
"source": [
"def ik(x, pos=None, quat=None, radius=0.04, reg=1e-3, reg_target=None):\n",
" \"\"\"Residual for inverse kinematics.\n",
"\n",
" Args:\n",
" x: joint angles.\n",
" pos: target position for the end effector.\n",
" quat: target orientation for the end effector.\n",
" radius: scaling of the 3D cross.\n",
"\n",
" Returns:\n",
" The residual of the Inverse Kinematics task.\n",
" \"\"\"\n",
"\n",
" # Move the mocap body to the target\n",
" id = model.body('target').mocapid\n",
" data.mocap_pos[id] = model.body('target').pos if pos is None else pos\n",
" data.mocap_quat[id] = model.body('target').quat if quat is None else quat\n",
"\n",
" # Set qpos, compute forward kinematics.\n",
" res = []\n",
" for i in range(x.shape[1]):\n",
" data.qpos = x[:, i]\n",
" mujoco.mj_kinematics(model, data)\n",
"\n",
" # Position residual.\n",
" res_pos = data.site('effector').xpos - data.site('target').xpos\n",
"\n",
" # Effector quat, use mju_mat2quat.\n",
" effector_quat = np.empty(4)\n",
" mujoco.mju_mat2Quat(effector_quat, data.site('effector').xmat)\n",
"\n",
" # Target quat, exploit the fact that the site is aligned with the body.\n",
" target_quat = data.body('target').xquat\n",
"\n",
" # Orientation residual: quaternion difference.\n",
" res_quat = np.empty(3)\n",
" mujoco.mju_subQuat(res_quat, target_quat, effector_quat)\n",
" res_quat *= radius\n",
"\n",
" # Regularization residual.\n",
" reg_target = model.key('home').qpos if reg_target is None else reg_target\n",
" res_reg = reg * (x[:, i] - reg_target)\n",
"\n",
" res_i = np.hstack((res_pos, res_quat, res_reg))\n",
" res.append(np.atleast_2d(res_i).T)\n",
"\n",
" return np.hstack(res)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-FD9ib76wXd5"
},
"source": [
"### Analytic Jacobian\n",
"\n",
"The IK problem is special in another way. Unlike the general case, here analytic Jacobians are computable by MuJoCo. Let's write down this Jacobian and then compare solution timing using finite differencing with timing using the analytic Jacobian."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"id": "-FM8suuv9_-C"
},
"outputs": [],
"source": [
"def ik_jac(x, res, pos=None, quat=None, radius=.04, reg=1e-3):\n",
" \"\"\"Analytic Jacobian of inverse kinematics residual\n",
"\n",
" Args:\n",
" x: joint angles.\n",
" pos: target position for the end effector.\n",
" quat: target orientation for the end effector.\n",
" radius: scaling of the 3D cross.\n",
"\n",
" Returns:\n",
" The Jacobian of the Inverse Kinematics task.\n",
" \"\"\"\n",
" # least_squares() passes the value of the residual at x which is sometimes\n",
" # useful, but we don't need it here.\n",
" del res\n",
"\n",
" # Call mj_kinematics and mj_comPos (required for Jacobians).\n",
" mujoco.mj_kinematics(model, data)\n",
" mujoco.mj_comPos(model, data)\n",
"\n",
" # Get end-effector site Jacobian.\n",
" jac_pos = np.empty((3, model.nv))\n",
" jac_quat = np.empty((3, model.nv))\n",
" mujoco.mj_jacSite(model, data, jac_pos, jac_quat, data.site('effector').id)\n",
"\n",
" # Get Deffector, the 3x3 mju_subquat Jacobian\n",
" effector_quat = np.empty(4)\n",
" mujoco.mju_mat2Quat(effector_quat, data.site('effector').xmat)\n",
" target_quat = data.body('target').xquat\n",
" Deffector = np.empty((3, 3))\n",
" mujoco.mjd_subQuat(target_quat, effector_quat, None, Deffector)\n",
"\n",
" # Rotate into target frame, multiply by subQuat Jacobian, scale by radius.\n",
" target_mat = data.site('target').xmat.reshape(3, 3)\n",
" mat = radius * Deffector.T @ target_mat.T\n",
" jac_quat = mat @ jac_quat\n",
"\n",
" # Regularization Jacobian.\n",
" jac_reg = reg * np.eye(model.nv)\n",
"\n",
" return np.vstack((jac_pos, jac_quat, jac_reg))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ez7bek0Yx5BC"
},
"source": [
"Let's compare the performance of the function with finite-differenced and analytic Jacobians:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"id": "YHmIIn-Xvc81"
},
"outputs": [],
"source": [
"print('Finite-differenced Jacobian:')\n",
"x_fd, _ = minimize.least_squares(x0, ik, bounds, verbose=1);\n",
"print('Analytic Jacobian:')\n",
"x_analytic, _ = minimize.least_squares(x0, ik, bounds, jacobian=ik_jac,\n",
" verbose=1, check_derivatives=True);\n",
"\n",
"# Assert that we got a nearly identical solution\n",
"assert np.linalg.norm(x_fd - x_analytic) < 1e-5"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "UP9UamTWzWM4"
},
"source": [
"Nice speed-up! This will become more pronounced the harder the specific IK problem. We'll do a more comprehensive timing comparison a few cells down (this specific configuration happens to be solved rather slowly).\n",
"\n",
"Note that we passed `check_derivatives=True` to ask the function to verify that our analytic Jacobian is correct, by making a comparison to the internal finite-difference function at the first timestep."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "x9nNH8fErRqg"
},
"source": [
"## Visualizing solutions\n",
"\n",
"Let's see what the solution to our problem looks like."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"cellView": "form",
"id": "mfhCxqQiio5l"
},
"outputs": [],
"source": [
"#@title Basic solution {vertical-output: true}\n",
"\n",
"x, _ = minimize.least_squares(x0, ik, bounds, jacobian=ik_jac,\n",
" verbose=0);\n",
"\n",
"# Update and visualize\n",
"data.qpos = x\n",
"mujoco.mj_kinematics(model, data)\n",
"mujoco.mj_camlight(model, data)\n",
"camera.distance = 1\n",
"camera.lookat = data.site('effector').xpos\n",
"renderer.update_scene(data, camera, voption)\n",
"media.show_image(renderer.render())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Gr4wMZ6ZuIn4"
},
"source": [
"In order to get a better intuition for the effects of the `radius` argument, let's give the IK solver a more difficult target pose, and vary the radius:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"cellView": "form",
"id": "_anJZK-u1MTt"
},
"outputs": [],
"source": [
"#@title Small `radius` {vertical-output: true}\n",
"\n",
"pos = np.array((.2, .8, .5))\n",
"quat = np.array((1, 1, 0, 0))\n",
"radius = 0.04\n",
"\n",
"ik_target = lambda x: ik(x, pos=pos, quat=quat, radius=radius)\n",
"jac_target = lambda x, r: ik_jac(x, r, pos=pos, quat=quat,\n",
" radius=radius)\n",
"\n",
"x, _ = minimize.least_squares(x0, ik_target, bounds,\n",
" jacobian=jac_target,\n",
" verbose=0);\n",
"\n",
"# Update and visualize\n",
"data.qpos = x\n",
"mujoco.mj_kinematics(model, data)\n",
"mujoco.mj_camlight(model, data)\n",
"camera.distance = 1\n",
"camera.lookat = data.site('effector').xpos\n",
"camera.azimuth = -150\n",
"camera.elevation = -20\n",
"renderer.update_scene(data, camera, voption)\n",
"media.show_image(renderer.render())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "LGu94l6NsOHK"
},
"source": [
"We can see that the positions match well, but the orientations are very wrong."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"cellView": "form",
"id": "xAQdrdlQ3cnT"
},
"outputs": [],
"source": [
"#@title Large `radius` {vertical-output: true}\n",
"\n",
"pos = np.array((.2, .8, .5))\n",
"quat = np.array((1, 1, 0, 0))\n",
"radius = 0.5\n",
"\n",
"ik_target = lambda x: ik(x, pos=pos, quat=quat, radius=radius)\n",
"jac_target = lambda x, r: ik_jac(x, r, pos=pos, quat=quat,\n",
" radius=radius)\n",
"\n",
"x, _ = minimize.least_squares(x0, ik_target, bounds,\n",
" jacobian=jac_target,\n",
" verbose=0);\n",
"\n",
"# Update and visualize\n",
"data.qpos = x\n",
"mujoco.mj_kinematics(model, data)\n",
"mujoco.mj_camlight(model, data)\n",
"camera.distance = 1\n",
"camera.lookat = data.site('effector').xpos\n",
"camera.azimuth = -150\n",
"camera.elevation = -20\n",
"renderer.update_scene(data, camera, voption)\n",
"media.show_image(renderer.render())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "kpPzK9qy6NAw"
},
"source": [
"Now the orientations are a better match, but the position error is much larger."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "NzoUmgv_EQj1"
},
"source": [
"## Smooth motion\n",
"\n",
"It is often the case that one wants to smoothly track a given end-effector trajectory. Let's invent some smooth traget trajectory:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"cellView": "form",
"id": "GwHQqUEJEU9X"
},
"outputs": [],
"source": [
"# @title Continuous target trajectory {vertical-output: true}\n",
"def pose(time):\n",
" pos = (0.4 * np.sin(time),\n",
" 0.4 * np.cos(time),\n",
" 0.4 + 0.2 * np.sin(3 * time))\n",
" quat = np.array((1.0, np.sin(2 * time), np.sin(time), 0))\n",
" quat /= np.linalg.norm(quat)\n",
" return pos, quat\n",
"\n",
"# Number of frames\n",
"n_frame = 400\n",
"\n",
"# Reset the camera, make the arm point straight up.\n",
"camera.distance = 1.5\n",
"camera.elevation = -15\n",
"camera.azimuth = -130\n",
"camera.lookat = (0, 0, 0.3)\n",
"mujoco.mj_resetData(model, data)\n",
"\n",
"frames = []\n",
"for t in np.linspace(0, 2 * np.pi, n_frame):\n",
" # Move the target\n",
" pos, quat = pose(t)\n",
" id = model.body('target').mocapid\n",
" data.mocap_pos[id] = pos\n",
" data.mocap_quat[id] = quat\n",
"\n",
" mujoco.mj_kinematics(model, data)\n",
" mujoco.mj_camlight(model, data)\n",
" renderer.update_scene(data, camera, voption)\n",
"\n",
" frames.append(renderer.render())\n",
"\n",
"media.show_video(frames)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3130-DIkLrp1"
},
"source": [
"Let's see what the arm motion looks like if we solve the IK problem independently for each frame, always starting at our `x0` initial guess. While we're doing this, we'll also time our solution, with and without analytic derivatives, to get a better sense of how long the optimization takes."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"cellView": "form",
"id": "VGabjS76L-3m"
},
"outputs": [],
"source": [
"#@title Full IK for every frame {vertical-output: true}\n",
"\n",
"frames = []\n",
"time_fd = []\n",
"time_analytic = []\n",
"for t in np.linspace(0, 2 * np.pi, n_frame):\n",
" # Get target pose\n",
" pos, quat = pose(t)\n",
"\n",
" # Define IK problem\n",
" ik_target = lambda x: ik(x, pos=pos, quat=quat)\n",
" jac_target = lambda x, r: ik_jac(x, r, pos=pos, quat=quat)\n",
"\n",
" # Solve while timing, fin-diff Jacobian\n",
" t_start = time.time()\n",
" x, _ = minimize.least_squares(x0, ik_target, bounds,\n",
" verbose=0);\n",
" time_fd.append(1000*(time.time() - t_start))\n",
"\n",
" # Solve while timing, analytic Jacobian\n",
" t_start = time.time()\n",
" x, _ = minimize.least_squares(x0, ik_target, bounds,\n",
" jacobian=jac_target,\n",
" verbose=0);\n",
" time_analytic.append(1000*(time.time() - t_start))\n",
"\n",
" mujoco.mj_kinematics(model, data)\n",
" mujoco.mj_camlight(model, data)\n",
" renderer.update_scene(data, camera, voption)\n",
"\n",
" frames.append(renderer.render())\n",
"\n",
"media.show_video(frames, loop=False)\n",
"\n",
"mean_analytic = np.asarray(time_analytic).mean()\n",
"mean_fd = np.asarray(time_fd).mean()\n",
"prfx = 'Mean solution time using '\n",
"print(prfx + f'fin-diff Jacobian: {mean_fd:3.1f}ms')\n",
"print(prfx + f'analytic Jacobian: {mean_analytic:3.1f}ms')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "pL0dArCKP-Y-"
},
"source": [
"Yikes! Solutions are being found, but what's with all the \"glitches\"? The IK problem often has multiple local minima and since we are solving from scratch each time, we end up in a different one, somewhat arbitrarily.\n",
"\n",
"A very easy way to mitigate this is to initialize (\"warmstart\") the solver with the previous solution:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"cellView": "form",
"id": "TgTLDj34oP6g"
},
"outputs": [],
"source": [
"#@title Warmstart with previous solution {vertical-output: true}\n",
"\n",
"frames = []\n",
"x = x0\n",
"for t in np.linspace(0, 2 * np.pi, n_frame):\n",
" # Get target pose\n",
" pos, quat = pose(t)\n",
"\n",
" # Define IK problem\n",
" ik_target = lambda x: ik(x, pos=pos, quat=quat)\n",
" jac_target = lambda x, r: ik_jac(x, r, pos=pos, quat=quat)\n",
"\n",
" x, _ = minimize.least_squares(x, ik_target, bounds,\n",
" jacobian=jac_target,\n",
" verbose=0);\n",
"\n",
" mujoco.mj_kinematics(model, data)\n",
" mujoco.mj_camlight(model, data)\n",
" renderer.update_scene(data, camera, voption)\n",
"\n",
" frames.append(renderer.render())\n",
"\n",
"media.show_video(frames, loop=False)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "fEvXhuv1TrUk"
},
"source": [
"This looks *much* better, but there are still a few \"glitches\". An esy way to mitigate those is to modify the regularization target to also be the previous solution. We are effectively telling the solver not to stray too far."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"cellView": "form",
"id": "mMKc4OZ5pvtd"
},
"outputs": [],
"source": [
"#@title Warmstart with and regularize to previous solution {vertical-output: true}\n",
"\n",
"frames = []\n",
"x = x0\n",
"for t in np.linspace(0, 2 * np.pi, n_frame):\n",
" # Get target pose\n",
" pos, quat = pose(t)\n",
"\n",
" x_prev = x.copy()\n",
"\n",
" # Define IK problem\n",
" ik_target = lambda x: ik(x, pos=pos, quat=quat,\n",
" reg_target=x_prev, reg=.1)\n",
" jac_target = lambda x, r: ik_jac(x, r, pos=pos, quat=quat)\n",
"\n",
" x, _ = minimize.least_squares(x, ik_target, bounds,\n",
" jacobian=jac_target,\n",
" verbose=0);\n",
"\n",
" mujoco.mj_kinematics(model, data)\n",
" mujoco.mj_camlight(model, data)\n",
" renderer.update_scene(data, camera, voption)\n",
"\n",
" frames.append(renderer.render())\n",
"\n",
"media.show_video(frames, loop=False)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "69c7LaZ07VGY"
},
"source": [
"# Non-quadratic norms\n",
"\n",
"We'll now use `least_squares` to solve a trajectory optimization (control) problem. While this is not its intended purpose, it demonstrates the power and general usability of the function.\n",
"\n",
"After using regular Least Squares, we'll define a custom **non-quadratic norm** and solve again.\n",
"\n",
"Below, we copy MuJoCo's [standard humanoid model](https://github.com/google-deepmind/mujoco/blob/main/model/humanoid/humanoid.xml), with the following modifications:\n",
"1. Added a \"target\" mocap body with a pink spherical site.\n",
"2. Changed the color of the right hand to pink.\n",
"3. Replaced the torque actuators with position actuators.\n",
"4. Added sensors corresponding to the residual."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"cellView": "form",
"id": "1Cg4ABJa7VGd"
},
"outputs": [],
"source": [
"#@title Humanoid XML\n",
"xml = \"\"\"\n",
"\n",
" \n",
"\"\"\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "goe2DuOv7VGd"
},
"source": [
"Let's load the model and render the initial state for our control problem, chosen to be the \"squat\" [keyframe](https://mujoco.readthedocs.io/en/latest/XMLreference.html#keyframe)."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"id": "xhvrJ9bX7VGd"
},
"outputs": [],
"source": [
"# Load model, make data\n",
"model = mujoco.MjModel.from_xml_string(xml)\n",
"data = mujoco.MjData(model)\n",
"\n",
"# Set the state to the \"squat\" keyframe, call mj_forward.\n",
"key = model.key('squat').id\n",
"mujoco.mj_resetDataKeyframe(model, data, key)\n",
"mujoco.mj_forward(model, data)\n",
"\n",
"# If a renderer exists, close it.\n",
"if 'renderer' in locals():\n",
" renderer.close()\n",
"\n",
"# Make a Renderer and a camera.\n",
"renderer = mujoco.Renderer(model, height=480, width=640)\n",
"camera = mujoco.MjvCamera()\n",
"mujoco.mjv_defaultFreeCamera(model, camera)\n",
"camera.distance = 3\n",
"camera.elevation = -10\n",
"\n",
"# Point the camera at the humanoid, render.\n",
"camera.lookat = data.body('torso').subtree_com\n",
"renderer.update_scene(data, camera)\n",
"media.show_image(renderer.render())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "yxIzoZ_O7VGd"
},
"source": [
"### Problem definition\n",
"\n",
"We define the following optimal control problem defintion:\n",
"- A trajectory is rolled out from $t=0\\ldots T$, starting at the \"squat\" keyframe shown above.\n",
"- Controls $u_t$ are applied during the rollout which are a linear interpolation of first control $u_0$ and the last one $u_T$. These two vectors are our decision variable $x = \\begin{pmatrix} u_0 & u_T \\end{pmatrix}$.\n",
"- The residual is a concatenation, over all time steps, of:\n",
" - The vector from the right hand to the target.\n",
" - The torques applied by the actuators, scaled by some factor (they are numerically much larger than the hand-target distances).\n",
"\n",
"Our residual uses `mujoco.rollout` to evaluate parallel trajectories:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"id": "cIX95v757VGd"
},
"outputs": [],
"source": [
"def reach(ctrl0T, target, T, torque_scale, traj=None):\n",
" \"\"\"Residual for target-reaching task.\n",
"\n",
" Args:\n",
" ctrl0T: contatenation of the first and last control vectors.\n",
" target: target to which the right hand should reach.\n",
" T: final time for the rollout.\n",
" torque_scale: coefficient by which to scale the torques.\n",
" traj: optional list of positions to be recorded.\n",
"\n",
" Returns:\n",
" The residual of the target-reaching task.\n",
" \"\"\"\n",
" # Extract the initial and final ctrl vectors, transpose to row vectors\n",
" ctrl0 = ctrl0T[:model.nu, :].T\n",
" ctrlT = ctrl0T[model.nu:, :].T\n",
"\n",
" # Move the mocap body to the target\n",
" mocapid = model.body('target').mocapid\n",
" data.mocap_pos[mocapid] = target\n",
"\n",
" # Append the mocap targets to the controls\n",
" nbatch = ctrl0.shape[0]\n",
" mocap = np.tile(data.mocap_pos[mocapid], (nbatch, 1))\n",
" ctrl0 = np.hstack((ctrl0, mocap))\n",
" ctrlT = np.hstack((ctrlT, mocap))\n",
"\n",
" # Define control spec (ctrl + mocap_pos)\n",
" mjtState = mujoco.mjtState\n",
" control_spec = mjtState.mjSTATE_CTRL | mjtState.mjSTATE_MOCAP_POS\n",
"\n",
" # Interpolate and stack the control sequences\n",
" nstep = int(np.round(T / model.opt.timestep))\n",
" control = np.stack(np.linspace(ctrl0, ctrlT, nstep), axis=1)\n",
"\n",
" # Reset to the \"squat\" keyframe, get the initial state\n",
" key = model.key('squat').id\n",
" mujoco.mj_resetDataKeyframe(model, data, key)\n",
" spec = mjtState.mjSTATE_FULLPHYSICS\n",
" nstate = mujoco.mj_stateSize(model, spec)\n",
" state = np.empty(nstate)\n",
" mujoco.mj_getState(model, data, state, spec)\n",
"\n",
" # Perform rollouts (sensors.shape == nbatch, nstep, nsensordata)\n",
" states, sensors = rollout.rollout(model, data, state, control,\n",
" control_spec=control_spec)\n",
"\n",
" # If requested, extract qpos into traj\n",
" if traj is not None:\n",
" assert states.shape[0] == 1\n",
" # Skip the first element in state (mjData.time)\n",
" traj.extend(np.split(states[0, :, 1:model.nq+1], nstep))\n",
"\n",
" # Scale torque sensors\n",
" sensors[:, :, 3:] *= torque_scale\n",
"\n",
" # Reshape to stack the sensor values, transpose to column vectors\n",
" sensors = sensors.reshape((sensors.shape[0], -1)).T\n",
"\n",
" # The normalizer keeps objective values similar when changing T or timestep.\n",
" normalizer = 100 * model.opt.timestep / T\n",
" return normalizer * sensors"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Bgugbs987VGe"
},
"source": [
"Now let's give the rest of the problem definition:\n",
"1. The trajectory is integrated for 0.7s.\n",
"2. Torques are scaled by 0.003.\n",
"3. Since our decision variable is two copies of `mjData.ctrl`, the bounds are two concatenated copies of `mjData.actuator_ctrlrange`.\n",
"4. Our initial guess $x_0 = \\begin{pmatrix} u_0 & u_T \\end{pmatrix} = \\begin{pmatrix} q_\\textrm{squat} & q_\\textrm{stand} \\end{pmatrix}$ is the joint angles at the squatting position, followed by the angles at the default (standing position). We can use angles to initialize our controls because the position actuators have angle semantics.\n"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"id": "Q_6PshTn7VGe"
},
"outputs": [],
"source": [
"T = 0.7 # Rollout length (seconds)\n",
"torque_scale = 0.003 # Scaling for the torques\n",
"\n",
"# Bounds are the stacked control bounds.\n",
"lower = np.atleast_2d(model.actuator_ctrlrange[:,0]).T\n",
"upper = np.atleast_2d(model.actuator_ctrlrange[:,1]).T\n",
"bounds = [np.vstack((lower, lower)), np.vstack((upper, upper))]\n",
"\n",
"# Initial guess is midpoint of the bounds\n",
"x0 = 0.5 * (bounds[1] + bounds[0])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "DLkrsjeE7VGe"
},
"source": [
"Let's define a utility function for rendering frames and visualize the initial guess:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"id": "Ghnvrsyt7VGe"
},
"outputs": [],
"source": [
"def render_solution(x, target):\n",
" # Ask reach to save positions to traj.\n",
" traj = []\n",
" reach(x, target, T, torque_scale, traj=traj);\n",
"\n",
" frames = []\n",
" counter = 0\n",
" print('Rendering frames:', flush=True, end='')\n",
" for qpos in traj:\n",
" # Set positions, call mj_forward to update kinematics.\n",
" data.qpos = qpos\n",
" mujoco.mj_forward(model, data)\n",
"\n",
" # Render and save frames.\n",
" camera.lookat = data.body('torso').subtree_com\n",
" renderer.update_scene(data, camera)\n",
" pixels = renderer.render()\n",
" frames.append(pixels)\n",
" counter += 1\n",
" if counter % 10 == 0:\n",
" print(f' {counter}', flush=True, end='')\n",
" return frames\n",
"\n",
"# Use default target.\n",
"target = data.mocap_pos[model.body('target').mocapid]\n",
"\n",
"# Visualize the initial guess.\n",
"media.show_video(render_solution(x0, target))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0L_W3Y5r7VGe"
},
"source": [
"### Solutions to the reach task\n",
"\n",
"Let's solve once for some target and look at the optimization printout:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"id": "RrEuIHJF7VGe"
},
"outputs": [],
"source": [
"target = (.4, -.3, 1.2)\n",
"\n",
"reach_target = lambda x: reach(x, target, T, torque_scale, traj=None)\n",
"\n",
"r0 = reach_target(x0)\n",
"print(f'The decision variable x has size {x0.size}')\n",
"print(f'The residual r(x) has size {r0.size}\\n')\n",
"\n",
"x, _ = minimize.least_squares(x0, reach_target, bounds);"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "S3r1KWHF7VGe"
},
"source": [
"Let's see what this solution looks like:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"id": "GcRuhJX-7VGe"
},
"outputs": [],
"source": [
"media.show_video(render_solution(x, target))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "zqgKu71p7VGe"
},
"source": [
"Let's rerun this for several target values and make a video of all of them:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"id": "GCsIOtdn7VGe"
},
"outputs": [],
"source": [
"targets = [(0.4, 0., 0.), (0.2, -1., 0.5), (-1., -.3, 1.), (0., -.2, 2.2)]\n",
"\n",
"frames = []\n",
"for target in targets:\n",
" res_target = lambda x: reach(x, target, T, torque_scale)\n",
" print(f'Optimizing for target at {target}', flush=True)\n",
" x, trace = minimize.least_squares(x0, res_target, bounds,\n",
" verbose=minimize.Verbosity.FINAL)\n",
" frames += render_solution(x, target)\n",
" print('\\n')\n",
"\n",
"print('Making video', flush=True)\n",
"media.show_video(frames)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "d12KzVKU_TFR"
},
"source": [
"### Non-quadratic norms\n",
"\n",
"As explained in the background section, Least Squares can be generalized to norms other than the quadratic. We are now in a position to show how to define a non-quadratic norm, which is important in the estimation and system-identification contexts, where long-tailed, disturbance-rejecting distributions are proportional to the exponent of a non-quadratic function.\n",
"\n",
"Let's say that we wish the task residual, i.e. the vector from the hand to the target, to be evaluated with the \"Smooth L2\" function $c(r)$, which for a given smoothing radius $d \gt 0$ is\n",
"$$\n",
"c(r) = \\sqrt{r^T\\cdot r + d^2 } - d\n",
"$$\n",
"This function is quadratic in a $d$-sized neighborhood of the origin, and then grows linearly thereafter, like the L2 norm. The first and second derivatives are\n",
"$$\n",
"\\begin{align}\n",
"s&=\\sqrt{r^T\\cdot r + d^2 }\\\\\n",
"g &= \\tfrac{\\partial c}{\\partial r} = \\frac{r}{s} \\\\\n",
"H &=\\tfrac{\\partial^2 c}{\\partial r^2} = \\frac{I_{n_r} - g\\cdot g^T}{s}\n",
"\\end{align}\n",
"$$\n",
"There is no particularly good reason to use this norm for this optimization task, it is merely an example.\n",
"\n",
"Let's read the documentation of the `minimize.Norm` class:\n"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"id": "TnRn0Mk7QR6Z"
},
"outputs": [],
"source": [
"print(minimize.Norm.__doc__)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3JGpR47DQ9S2"
},
"source": [
"Our sensors are 3 `r_pos` residual values for the hand-to-object vector followed by 21 `r_torque` actuator torques, for a total of `ns = 24` sensors. These are concatenated for the entire trajectory, leading to a residual of size `24*N`, where `N` is the number of timesteps in a trajectory. After reshaping and slicing appropriately, the norm implementation looks as follows:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"id": "f3MDXm4e_Wup"
},
"outputs": [],
"source": [
"class SmoothL2(minimize.Norm):\n",
" def __init__(self):\n",
" self.n = model.nsensordata # Equals 24.\n",
" self.d = 0.1 # The smoothing radius (length).\n",
"\n",
" def value(self, r):\n",
" rr = r.reshape((self.n, -1), order='F')\n",
" r_pos = rr[:3, :]\n",
" s = np.sqrt(np.sum(r_pos**2, axis=0) + self.d**2)\n",
" y_pos = (s - self.d).sum()\n",
" r_torque = rr[3:, :]\n",
" y_torque = 0.5 * (r_torque.T**2).sum()\n",
" return y_pos + y_torque\n",
"\n",
" def grad_hess(self, r, proj):\n",
" rr = r.reshape((self.n, -1), order='F')\n",
" r_pos = rr[:3, :]\n",
" s = np.sqrt(np.sum(r_pos**2, axis=0) + self.d**2)\n",
" g_pos = r_pos / s\n",
" g_torque = rr[3:, :]\n",
" g = np.vstack((g_pos, g_torque))\n",
" grad = proj.T @ g.reshape((-1, 1), order='F')\n",
" h_proj = proj.copy() # norm Hessian * projection matrix\n",
" for i in range(g_pos.shape[1]):\n",
" h_i = (np.eye(3) - g_pos[:, i:i+1] @ g_pos[:, i:i+1].T) / s[i]\n",
" j = self.n*i\n",
" h_proj[j:j+3, :] = h_i @ proj[j:j+3, :]\n",
" hess = proj.T @ h_proj\n",
" return grad, hess"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "oJrh-rfL0LQh"
},
"source": [
"Before running the optimization, let's ask `least_squares` to check our norm implementation. We'll do this with a short trajectory simulation time `T`, to avoid creating huge matrices."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"id": "MUCDTV6JV0ix"
},
"outputs": [],
"source": [
"target = (.4, -.3, 1.2)\n",
"\n",
"T_short = 0.02\n",
"\n",
"reach_target = lambda x: reach(x, target, T=T_short,\n",
" torque_scale=torque_scale, traj=None)\n",
"\n",
"x, _ = minimize.least_squares(x0, reach_target, bounds, norm=SmoothL2(),\n",
" max_iter=1, check_derivatives=True);"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "J0brs9pG0-8Y"
},
"source": [
"Now that we are confident in our implemetation, we can see what the solution looks like:"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"id": "AICTHd-9z1rL"
},
"outputs": [],
"source": [
"target = (.4, -.3, 1.2)\n",
"\n",
"reach_target = lambda x: reach(x, target, T, torque_scale, traj=None)\n",
"\n",
"x, _ = minimize.least_squares(x0, reach_target, bounds, norm=SmoothL2());\n",
"\n",
"media.show_video(render_solution(x, target))"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"collapsed_sections": [
"zhVv8-0Tvlrl"
],
"gpuType": "T4",
"machine_shape": "hm",
"private_outputs": true,
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}