\section{Introduction}
The covering radius of the first order Reed--Muller code $RM(1, n)$ is $2^{n-1}-2^{n/2-1}$ for $n$ even \cite{Rothaus}. For odd $n\le 7$, it equals $2^{n-1}-2^{(n-1)/2}$ \cite{Berlekamp,Hou1,Mykkeltveit}.
However, for odd $n>7$, the covering radius of $RM(1,n)$ is still unknown, although some bounds have been given \cite{Hou2,Hou3,Kavut,Kavut2,Patterson}.
In \cite{Schatz}, Schatz proved that the covering radius of the second order Reed--Muller code $RM(2,6)$ is 18. For $n\geq 7$, the covering radius of $RM(2,n)$ is still unknown. In particular, the covering radius of $RM(2,7)$ has been an open problem for many years \cite{Carlet1,Carlet2,Cohen,Cohen1,WangS}. In \cite{Hou9}, Hou pointed out that every known covering radius was attained by a coset of $RM(r,n)$ in $RM(r + 1,n)$ and conjectured that the covering radius of $RM(2,7)$ is 40.
For $n\geq 7$, the covering radius of $RM(3,n)$ is also unknown \cite{McLoughlin}. In \cite{WangT}, the authors proved that the covering radius of $RM(3,7)$ in $RM(4,7)$ is 20.
It is also interesting to study the covering radius of the Reed--Muller code in the set of cryptographic Boolean functions (see e.g. \cite{Burgess,Kurosawa}). In particular, the covering radius of $RM(1,8)$ in the set of balanced Boolean functions is still an open problem.
In this paper, we prove that the covering radius of $RM(2,7)$ is 40, which coincides with the covering radius of $RM(2,7)$ in $RM(3,7)$ and answers the conjecture proposed by Hou in the affirmative. As a corollary, we also obtain new upper bounds on the covering radius of $RM(2,n)$ for $n=8,9,10$.
\section{Preliminaries}
Let $\mathbb{F}_{2}^{n}$ be the $n$-dimensional vector space over the
finite field $\mathbb{F}_{2}$. We denote by $B_{n}$ the set
of all $n$-variable Boolean functions, from $\mathbb{F}_{2}^{n}$ into $\mathbb{F}_{2}$.
Any Boolean function $f\in B_{n}$ can be uniquely represented
as a multivariate polynomial in
$\mathbb{F}_{2}[x_{1},\ldots,x_{n}]$, called its {\em algebraic normal form} (ANF),
\[
f(x_1,\ldots ,x_n)=\sum_{K\subseteq \{1,2,\ldots ,n\}}a_K\prod_{k\in K}x_k,\quad a_K\in\mathbb{F}_2.
\]
The {\em algebraic degree} of $f$, denoted by $\deg(f)$, is the
number of variables in the highest order term with nonzero
coefficient.
A Boolean function is {\em affine}
if all its ANF terms have degree $\leq 1$. The set of all affine functions is denoted by~$A_{n}$.
The {\em Hamming weight} of $f$ is the cardinality of the set $\{x\in \mathbb{F}_{2}^{n}|f(x)=1\}$. The {\em Hamming distance} between two functions
$f$ and $g$ is the Hamming weight of $f+g$, and will be denoted by $d(f,g)$.
The {\em nonlinearity} of $f\in B_{n}$ is its
distance from the set of all $n$-variable affine functions, that is,
\[
nl(f)=\min_{g\in A_{n}}d(f,g).
\]
The nonlinearity of an $n$-variable Boolean function is bounded above by $2^{n-1}-2^{n/2-1}$ \cite{Carlet0,cs09,Rothaus}.
The {\em $r$-order nonlinearity} of a Boolean function $f$, denoted by $nl_r(f)$, is its
distance from the set of all $n$-variable functions of algebraic degrees at most $r$.
The $r$-th order Reed--Muller code of length $2^n$ is denoted by $RM(r,n)$. Its codewords are the truth tables (output values) of the set of all $n$-variable Boolean functions of degree $\leq r$. The {\em covering radius} of $RM(r,n)$ is defined as
\[
\max_{f\in B_n}d(f,RM(r,n))=\max_{f\in B_n}nl_r(f).
\]
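Both quantities are convenient to compute in practice via the fast Walsh--Hadamard transform. The following sketch (our illustration in Python, not code from this paper; it assumes truth tables stored as 0/1 \texttt{numpy} arrays) computes $nl(f)$:
\begin{lstlisting}[language=python]
import numpy as np

def walsh_hadamard(tt):
    # Walsh spectrum of a Boolean function given as a 0/1 truth table of
    # length 2**n (in-place fast Walsh-Hadamard transform, O(n * 2**n)).
    w = 1 - 2 * tt.astype(np.int64)   # 0/1 -> +1/-1
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            a = w[i:i + h].copy()
            b = w[i + h:i + 2 * h].copy()
            w[i:i + h] = a + b
            w[i + h:i + 2 * h] = a - b
        h *= 2
    return w

def nonlinearity(tt):
    # nl(f) = 2**(n-1) - max|W_f|/2, the distance from f to A_n.
    n = int(np.log2(len(tt)))
    return 2 ** (n - 1) - np.abs(walsh_hadamard(tt)).max() // 2

# Sanity check: the bent function x1x2 + x3x4 on 4 variables has nl = 6.
x = np.arange(16)
b = [(x >> k) & 1 for k in range(4)]
print(nonlinearity((b[0] & b[1]) ^ (b[2] & b[3])))  # 6 = 2**3 - 2**(4/2-1)
\end{lstlisting}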
Two $n$-variable Boolean functions $f_1$ and $f_2$ are called affine equivalent modulo $RM(r,n)$ if there
exist $A\in {GL}_n(\mathbb{F}_2)$ and $b\in \mathbb{F}_{2}^{n}$ such that $f_1(x)=f_2(Ax+b)$ modulo $RM(r,n)$.
We use $||$ to denote the concatenation, that is,
\[
(f_1||f_2)(x_1,\ldots,x_n,x_{n+1})=(x_{n+1}+1)f_1(x_1,\ldots,x_n)+x_{n+1}f_2(x_1,\ldots,x_n),
\]
where $f_1,f_2\in B_n$. We let $|A|$ denote the cardinality of the set $A$.
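The following elementary bound on concatenations, used repeatedly in the next section, can be checked directly from these definitions; we record the short verification here for the reader's convenience. For $f_1,f_2\in B_n$, any $g\in B_n$ with $\deg(g)\le 2$ and any $l\in A_n$, the function $(g+l)||g=g+l+x_{n+1}l$ has degree at most 2, and
\[
d(f_1||f_2,(g+l)||g)=d(f_1,g+l)+d(f_2,g).
\]
Minimizing over $l\in A_n$ gives
\[
nl_2(f_1||f_2)\le nl(f_1+g)+d(f_2,g)
\]
for every such $g$.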
\section{The covering radius of the binary Reed-Muller code $RM(2,7)$ is 40}
Let $f\in B_7$. Then it can be written as $f_1||f_2$, where $f_1,f_2\in B_6$. We need to prove that $nl_2(f_1||f_2)\le 40$.
Let $g\in B_6$. It is well known that $nl_2(g)\le 18$ and that, if $nl_2(g)=18$, then $g$ is affine equivalent to $g_0=x_1x_2x_3 + x_1x_4x_5 + x_2x_4x_6 + x_3x_5x_6 + x_4x_5x_6$ modulo $RM(2,6)$. Moreover, $nl(g_0+g_1)\le 22$ for any $g_1\in B_6$ with $\deg(g_1)\le 2$. Therefore, if $f=f_1||f_2$ and $nl_2(f_1)=18$, then
\[
nl_2(f)\le d(f_2,g_2)+nl(f_1+g_2)\le 18+22=40,
\]
where $g_2$ is a 6-variable Boolean function of degree at most 2 such that $nl_2(f_2)=d(f_2,g_2)$. Similarly, if $f=f_1||f_2$ and $nl_2(f_1)=17$, then we also have $nl_2(f)\le 40$. In fact, we have the following lemma.
\begin{lemma} [Propositions 11 and 14 of \cite{WangS}]
\label{prop1}
Let $f\in B_7$ and $f=f_1||f_2$. If $nl_2(f)>40$, then $15\le nl_2(f_i)\le 16$ for $i=1,2$.
\end{lemma}
The classification of 6-variable Boolean functions under the affine group has been fully studied (see e.g. \cite{Langevin,Maiorana}). It is known that there are exactly 205 affine equivalence classes modulo $RM(2,6)$. Computing the second-order nonlinearities of these classes, we obtain the following two lemmas.
\begin{lemma}
\label{obs1}
Let $f\in B_6$. Then $nl_2(f)=16$ if and only if $f$ is affine equivalent, modulo $RM(2,6)$, to a function whose part of degree $\geq 3$ is one of the following:
(1) $fun_1=x_1x_2x_6+x_1x_3x_5+x_2x_3x_4;$
(2) $fun_2=x_1x_2x_3x_4+x_1x_2x_6+x_1x_4x_5+x_2x_3x_5;$
(3) $fun_3=x_1x_2x_3x_4+x_1x_3x_5+x_1x_4x_6+x_2x_3x_5+x_2x_3x_6+x_2x_4x_5;$
(4) $fun_4=x_1x_2x_3x_6+x_1x_2x_4x_5+x_1x_3x_5+x_1x_4x_5+x_1x_4x_6+x_2x_3x_4;$
(5) $fun_5=x_1x_2x_3x_4x_5+x_1x_3x_5+x_1x_4x_6+x_2x_3x_5+x_2x_3x_6+x_2x_4x_5$.
\end{lemma}
\begin{lemma}
\label{obs2}
Let $f\in B_6$. Then $nl_2(f)=15$ if and only if there is a $g\in B_6$ with $\deg(g)\leq 2$ such that $f+g$ is affine equivalent to one of the following functions:
(1) $fun_6=x_1x_2x_3x_4x_5x_6+x_1x_2x_6+x_1x_3x_5+x_2x_3x_4;$
(2) $fun_7=x_1x_2x_3x_4x_5x_6+x_1x_2x_3x_4+x_1x_2x_6+x_1x_4x_5+x_2x_3x_5+x_4x_5;$
(3) $fun_8=x_1x_2x_3x_4x_5x_6+x_1x_2x_3x_4+x_1x_3x_5+x_1x_4x_6+x_2x_3x_5+x_2x_3x_6+x_2x_4x_5;$
(4) $fun_9=x_1x_2x_3x_4x_5x_6+x_1x_2x_3x_6+x_1x_2x_4x_5+x_1x_3x_5+x_1x_4x_5+x_1x_4x_6+x_2x_3x_4+x_4x_6;$
(5) $fun_{10}=x_1x_2x_3x_4x_5x_6+x_1x_2x_3x_4+x_1x_3x_4+x_1x_5x_6+x_2x_3x_4+x_2x_3x_6+x_2x_4x_5+x_3x_4+x_3x_6+x_4x_5$;
(6) $fun_{11}=x_1x_2x_3x_4x_5x_6+x_1x_2x_3x_6+x_1x_2x_4x_5+x_1x_3x_5+x_1x_4x_5+x_1x_4x_6+x_2x_3x_4+x_2x_3x_6+x_2x_4x_5+x_3x_5+x_4x_5+x_4x_6$;
(7) $fun_{12}=x_1x_2x_3x_4x_5x_6+x_2x_3x_4x_5+x_1x_2x_5x_6+x_1x_3x_4x_6+x_1x_2x_4+x_1x_2x_5+x_2x_3x_5+x_3x_4x_5+x_1x_2x_6+x_3x_4x_6$.
\end{lemma}
\begin{definition}
Given $f\in B_n$, we denote by $Fh_f$ the map from $\mathbb{Z}$ to the power set of $B_n$ as follows:
\[
Fh_f(r)=\Big\{g=\sum_{1\leq i<j\leq n}a_{ij}x_ix_j \ \Big|\ a_{ij}\in \mathbb{F}_2 \text{ and } nl(f+g)=r\Big\}.
\]
We let $NFh_f:\mathbb{Z}\to \mathbb{Z}$ be the function defined by $NFh_f(r)=|Fh_f(r)|$. Clearly, $NFh_f$ is an affine invariant and $\sum_{i=0}^{\infty}NFh_f(i)=2^{n(n-1)/2}$.
\end{definition}
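For a fixed representative $f$, the map $NFh_f$ can be computed by brute force, since there are only $2^{15}$ homogeneous quadratic functions in six variables. A minimal Python sketch of this enumeration (our illustration, not the authors' code) is:
\begin{lstlisting}[language=python]
import itertools
from collections import Counter
import numpy as np

n = 6
x = np.arange(2 ** n)
bit = [(x >> k) & 1 for k in range(n)]   # x1..x6 as 0/1 arrays of length 64
H2 = np.array([[1, 1], [1, -1]])
H = H2
for _ in range(n - 1):
    H = np.kron(H, H2)                   # 64 x 64 Walsh-Hadamard matrix

def nl(tt):
    # nl(f) = 2**(n-1) - max|W_f|/2 via the Walsh spectrum.
    return 2 ** (n - 1) - np.abs(H @ (1 - 2 * tt)).max() // 2

pairs = list(itertools.combinations(range(n), 2))  # the 15 monomials x_i*x_j

def NFh(tt_f):
    # Histogram r -> NFh_f(r), enumerating all 2**15 homogeneous quadratics g.
    counts = Counter()
    for mask in range(1 << len(pairs)):
        g = np.zeros(2 ** n, dtype=np.int64)
        for t, (i, j) in enumerate(pairs):
            if (mask >> t) & 1:
                g ^= bit[i] & bit[j]
        counts[int(nl(tt_f ^ g))] += 1
    return counts

# fun_1 = x1x2x6 + x1x3x5 + x2x3x4 (variables 0-indexed below)
fun1 = (bit[0] & bit[1] & bit[5]) ^ (bit[0] & bit[2] & bit[4]) ^ (bit[1] & bit[2] & bit[3])
h = NFh(fun1)
print(h[16], h[28])  # the lemma below reports NFh_{fun_1}(16)=448, NFh_{fun_1}(28)=64
\end{lstlisting}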
Note that $0\in Fh_{fun_i}(nl_2(fun_i))$ for $1\le i\le 12$. We calculate the values of $NFh_f$ for the functions in Lemmas 2 and 3, and obtain the following lemma.
\begin{lemma}
We have
\begin{enumerate}
\item[$(1)$] $NFh_{fun_1}(16)=448$, $NFh_{fun_1}(26)=0$ and $NFh_{fun_1}(28)=64;$
\item[$(2)$] $NFh_{fun_2}(16)=384$, $NFh_{fun_2}(26)=1024$ and $NFh_{fun_2}(28)=0;$
\item[$(3)$] $NFh_{fun_3}(16)=64$ and $NFh_{fun_3}(i)=0$, for $i\geq 26;$
\item[$(4)$] $NFh_{fun_4}(16)=224$, $NFh_{fun_4}(26)=512$ and $NFh_{fun_4}(28)=0;$
\item[$(5)$] $NFh_{fun_5}(16)=272$ and $NFh_{fun_5}(i)=0$, for $i\geq 26$.
\item[$(6)$] $NFh_{fun_6}(15)=112$, $NFh_{fun_6}(25)=0$ and $NFh_{fun_6}(27)=64;$
\item[$(7)$] $NFh_{fun_7}(15)=96$, $NFh_{fun_7}(25)=1024$ and $NFh_{fun_7}(27)=0;$
\item[$(8)$] $NFh_{fun_8}(15)=16$ and $NFh_{fun_8}(i)=0$, for $i\geq 25;$
\item[$(9)$] $NFh_{fun_9}(15)=72$, $NFh_{fun_9}(25)=512$ and $NFh_{fun_9}(27)=0;$
\item[$(10)$]$NFh_{fun_{10}}(15)=72$, $NFh_{fun_{10}}(25)=256$ and $NFh_{fun_{10}}(27)=0;$
\item[$(11)$] $NFh_{fun_{11}}(15)=40$, $NFh_{fun_{11}}(25)=544$ and $NFh_{fun_{11}}(27)=0$;
\item[$(12)$] $NFh_{fun_{12}}(15)=66$, $NFh_{fun_{12}}(25)=414$ and $NFh_{fun_{12}}(27)=0$.
\end{enumerate}
\end{lemma}
It is well known that there are three affine equivalence classes of $6$-variable homogeneous quadratic Boolean functions, whose nonlinearities are 16, 24 and 28. We count the number of functions in $Fh_{fun_{i}}$ with nonlinearity 16, and display the results in the following lemma, where $S_{16}$ denotes the set of $6$-variable Boolean functions with nonlinearity 16.
\begin{lemma} We have
\begin{enumerate}
\item[$(1)$] $|Fh_{fun_{2}}(16) \bigcap S_{16}|=47$ and $|Fh_{fun_{4}}(16)\bigcap S_{16}|=43$;
\item[$(2)$] $|(g_1+Fh_{fun_{1}}(28)) \bigcap S_{16}|=7$, $|(g_2+Fh_{fun_{2}}(26)) \bigcap S_{16}|=55$ and $|(g_4+Fh_{fun_{4}}(26)) \bigcap S_{16}|=21$, for any $g_1\in Fh_{fun_{1}}(28)$ and $g_i\in Fh_{fun_{i}}(26)$, where $i=2,4$.
\item[$(3)$] $|Fh_{fun_{7}}(15) \bigcap S_{16}|=23$, $|Fh_{fun_{9}}(15)\bigcap S_{16}|=15$, $|Fh_{fun_{10}}(15)\bigcap S_{16}|=24$, $|Fh_{fun_{11}}(15)\bigcap S_{16}|=21$ and $|Fh_{fun_{12}}(15)\bigcap S_{16}|=17$;
\item[$(4)$] $|(g_7+Fh_{fun_{7}}(25)) \bigcap S_{16}|=55$, $|(g_9+Fh_{fun_{9}}(25)) \bigcap S_{16}|=21$, $|(g_{10}+Fh_{fun_{10}}(25)) \bigcap S_{16}|=13$, $|(g_{11}+Fh_{fun_{11}}(25)) \bigcap S_{16}|<30$ and $|(g_{12}+Fh_{fun_{12}}(25)) \bigcap S_{16}|<30$, for any $g_i\in Fh_{fun_{i}}(25)$, where $i\in \{7,9,10,11,12\}$.
\end{enumerate}
\end{lemma}
\begin{lemma}
\label{lem7}
Let $f\in B_7$ and $f=f_1||f_2$. If $nl_2(f)>40$, then
\[
Fh_{f_i}(k)\subseteq \cup_{m\ge 41-k} Fh_{f_j}(m),
\]
where $i\neq j\in \{1,2\}$.
\end{lemma}
\proof
Let $g\in Fh_{f_i}(k)$. Then $nl(f_i+g)=k$ and there exists an $l_1\in A_6$ such that $d(f_i,g+l_1)=k$. Since
\[
40<nl_2(f)\le d(f_i,g+l_1)+d(f_j,g+l),
\]
for any $l\in A_6$, we have $d(f_j,g+l)\ge 41-k$. That is, $nl(f_j+g)\ge 41-k$,
and the result follows.
\endproof
\begin{lemma}
\label{lem8}
Let $f\in B_7$ and $f=f_1||f_2$. If $nl_2(f_1)=nl_2(f_2)=15$, then $nl_2(f)\le 40$.
\end{lemma}
\proof Suppose $nl_2(f)>40$. Then by Lemma 7, $Fh_{f_i}(15)\subseteq \cup_{m\ge 26} Fh_{f_j}(m)=Fh_{f_j}(27)$, where $i\neq j\in \{1,2\}$ (for any quadratic $g$, the function $f_j+g$ contains the monomial $x_1x_2x_3x_4x_5x_6$ and hence has odd weight, so $nl(f_j+g)$ is odd and at most 27). Therefore, $NFh_{f_j}(27)\ge NFh_{f_i}(15)>0$. Then by Lemmas 3 and 5, $f_i$ and $f_j$ are affine equivalent to $fun_6$ modulo $RM(2,6)$. However,
\[
NFh_{fun_6}(27)=64<NFh_{fun_6}(15)=112,
\]
which contradicts $Fh_{f_i}(15)\subseteq Fh_{f_j}(27)$, and the result follows.
\endproof
\begin{lemma}
\label{lem9}
Let $f\in B_7$ and $f=f_1||f_2$. If $nl_2(f_1)=nl_2(f_2)=16$, then $nl_2(f)\le 40$.
\end{lemma}
\proof
Suppose $nl_2(f)>40$. By Lemma 7, we have $Fh_{f_i}(16)\subseteq \cup_{m\ge 25} Fh_{f_j}(m)$, where $i\neq j\in \{1,2\}$; since $f_j+g$ has even weight for every quadratic $g$ (the functions in Lemma 2 do not contain the monomial $x_1x_2x_3x_4x_5x_6$), $nl(f_j+g)$ is even, so in fact $Fh_{f_i}(16)\subseteq \cup_{m\ge 26} Fh_{f_j}(m)$. Then by Lemmas 2 and 5,
$f_1$ and $f_2$ are affine equivalent to $fun_{i_1}$ and $fun_{i_2}$ modulo $RM(2,6)$, where $i_1,i_2\in \{2,4\}$. Therefore, $f$ is affine equivalent to $fun_{i_1}||(fun_{i_2}(Ax+b)+g)$, where $A\in {GL}_6(\mathbb{F}_2)$, $b\in \mathbb{F}_{2}^{6}$ and $g$ is a $6$-variable homogeneous Boolean function of degree 0 or 2. Moreover,
\[
Fh_{fun_{i_1}(A^{-1}x)}(16)\subseteq g(A^{-1}x)+Fh_{fun_{i_2}}(26),
\]
and
\[
Fh_{fun_{i_2}(Ax)}(16)\subseteq g+Fh_{fun_{i_1}}(26).
\]
{\em Case} 1: $i_1=4$ or $i_2=4$. If $i_1=4$, then $g\in Fh_{fun_{4}}(26)$ (since $0\in Fh_{fun_{i_2}}(16)$) and
\[
Fh_{fun_{i_2}(Ax)}(16)\subseteq g+Fh_{fun_{4}}(26).
\]
Therefore,
\[
Fh_{fun_{i_2}(Ax)}(16)\bigcap S_{16}\subseteq (g+Fh_{fun_{4}}(26))\bigcap S_{16}.
\]
By Lemma 6, $|Fh_{fun_{i_2}(Ax)}(16)\bigcap S_{16}|=43$ or $47$, while $|(g+Fh_{fun_{4}}(26))\bigcap S_{16}|=21$, which is a contradiction. Therefore, if $i_1=4$, then $nl_2(f)\le 40$. Similarly, we have $nl_2(f)\le 40$ for $i_2=4$.\\
{\em Case} 2: $i_1=i_2=2$. We have
\[
Fh_{fun_{2}(Ax)}(16)\bigcap S_{16}\subseteq (g+Fh_{fun_{2}}(26))\bigcap S_{16}.
\]
Let
$Fh_{fun_{2}}(16)\bigcap S_{16}=\{h_1,\ldots,h_{47}\}$ and
\[
(g+Fh_{fun_{2}}(26))\bigcap S_{16}=\{g+k_1,\ldots,g+k_{55}\},
\]
where $h_i(Ax)=g+k_{i}$, for $i=1,2,\ldots,47$. Then $h_i+h_j$ is affine equivalent to $(g+k_{i})+(g+k_{j})=k_{i}+k_{j}$. Therefore, if $nl(h_i+h_j)=16$, then $nl(k_{i}+k_{j})=16$.
However,
\[
|\{h\in \{h_1,\ldots,h_{47}\} \ | \ |(h+\{h_1,\ldots,h_{47}\})\bigcap S_{16}|\ge 13\}|=45,
\]
which is greater than
\[
|\{k\in \{k_1,\ldots,k_{55}\} \ | \ |(k+\{k_1,\ldots,k_{55}\})\bigcap S_{16}|\ge 13\}|=22,
\]
for any $g\in Fh_{fun_{2}}(26)$. This is a contradiction, and the result follows.
\endproof
\begin{lemma}
\label{lem10}
Let $f\in B_7$ and $f=f_1||f_2$. If $nl_2(f_1)=16$ and $nl_2(f_2)=15$, then $nl_2(f)\le 39$.
\end{lemma}
\proof
Suppose $nl_2(f)\ge 41$. (Since $f_1$ has even weight, $f_2$ has odd weight, and every $7$-variable function of degree at most 2 has even weight, $nl_2(f)$ is odd; hence ruling out $nl_2(f)\ge 41$ indeed gives $nl_2(f)\le 39$.) By Lemma 7, we have $Fh_{f_1}(16)\subseteq \cup_{m\ge 25} Fh_{f_2}(m)$ and $Fh_{f_2}(15)\subseteq \cup_{m\ge 26} Fh_{f_1}(m)$. Then by Lemmas 2, 3 and 5, $f_1$ is affine equivalent to $fun_{i_1}$ modulo $RM(2,6)$ and $f_2$ is affine equivalent to $fun_{i_2}$ modulo $RM(2,6)$, where $i_1\in \{1,2,4\}$ and $i_2\in \{7,9,10,11,12\}$.\\
{\em Case} 1: $i_1=1$. We have
\[
Fh_{fun_{i_2}(Ax)}(15)\subseteq g+Fh_{fun_{1}}(28),
\]
where $A\in {GL}_6(\mathbb{F}_2)$ and $g\in Fh_{fun_{1}}(28)$.
Therefore, $NFh_{fun_1}(28)=64\ge NFh_{fun_{i_2}}(15)$ and $i_2=11$. However, by Lemma 6,
\[
|Fh_{fun_{11}}(15)\bigcap S_{16}|=21>7=|(g+Fh_{fun_{1}}(28)) \bigcap S_{16}|,
\]
which is a contradiction, and $nl_2(f)\le 39$.\\
{\em Case} 2: $i_1=2$. We have
\[
Fh_{fun_{2}(A^{-1}x)}(16)\subseteq g(A^{-1}x)+Fh_{fun_{i_2}}(25).
\]
By Lemma 6,
\[
|(g(A^{-1}x)+Fh_{fun_{i_2}}(25))\bigcap S_{16}|\ge|Fh_{fun_{2}}(16)\bigcap S_{16}|=47.
\]
Therefore, $i_2=7$. Let
$Fh_{fun_{2}}(16)\bigcap S_{16}=\{h_1,\ldots,h_{47}\}$ and
\[
(g(A^{-1}x)+Fh_{fun_{7}}(25))\bigcap S_{16}=\{g(A^{-1}x)+k_1,\ldots,g(A^{-1}x)+k_{55}\},
\]
where $h_i(A^{-1}x)=g(A^{-1}x)+k_{i}$, for $i=1,2,\ldots,47$.
However,
\[
|\{h\in \{h_1,\ldots,h_{47}\} \ | \ |(h+\{h_1,\ldots,h_{47}\})\bigcap S_{16}|\ge 13\}|=45,
\]
which is greater than
\[
|\{k\in \{k_1,\ldots,k_{55}\} \ | \ |(k+\{k_1,\ldots,k_{55}\})\bigcap S_{16}|\ge 12\}|=22,
\]
for any $g(A^{-1}x)\in Fh_{fun_{7}}(25)$. This is a contradiction, and $nl_2(f)\le 39$.\\
{\em Case} 3: $i_1=4$. We have
\[
Fh_{fun_{4}(A^{-1}x)}(16)\subseteq g(A^{-1}x)+Fh_{fun_{i_2}}(25).
\]
By Lemma 6,
\[
|(g(A^{-1}x)+Fh_{fun_{i_2}}(25))\bigcap S_{16}|\ge|Fh_{fun_{4}}(16)\bigcap S_{16}|=43.
\]
Therefore, $i_2=7$. Let
$Fh_{fun_{4}}(16)\bigcap S_{16}=\{h_1,\ldots,h_{43}\}$ and
\[
(g(A^{-1}x)+Fh_{fun_{7}}(25))\bigcap S_{16}=\{g(A^{-1}x)+k_1,\ldots,g(A^{-1}x)+k_{55}\},
\]
where $h_i(A^{-1}x)=g(A^{-1}x)+k_{i}$, for $i=1,2,\ldots,43$.
However,
\[
|\{h\in \{h_1,\ldots,h_{43}\} \ | \ |(h+\{h_1,\ldots,h_{43}\})\bigcap S_{16}|\ge 12\}|=42,
\]
which is greater than
\[
|\{k\in \{k_1,\ldots,k_{55}\} \ | \ |(k+\{k_1,\ldots,k_{55}\})\bigcap S_{16}|\ge 12\}|=22,
\]
for any $g(A^{-1}x)\in Fh_{fun_{7}}(25)$. This is a contradiction, and $nl_2(f)\le 39$.
\endproof
By Lemmas 1, 8, 9 and 10, $nl_2(f)\le 40$ for any $f\in B_7$. Combined with the known lower bound of 40, attained by a coset of $RM(2,7)$ in $RM(3,7)$, we have the following theorem.
\begin{theorem}
\label{thm2}
The covering radius of the Reed--Muller code $RM(2,7)$ is 40.
\end{theorem}
Let $f_i\in B_i$, where $i=7,8,9$. Then $nl(f_7)\le 56$, $nl(f_8)\le 120$ and $nl(f_9)\le 244$. Since $nl_2(f_1||f_2)\le nl_2(f_2)+nl(f_1+g_2)$, the covering radius of $RM(2,n+1)$ is at most the covering radius of $RM(2,n)$ plus the maximal nonlinearity of an $n$-variable Boolean function; hence $40+56=96$, $96+120=216$ and $216+244=460$, and we have the following corollary.
\begin{corollary}
\label{cor1}
The covering radius of $RM(2,n)$ is at most $96, 216, 460$, for $n=8,9,10$ respectively.
\end{corollary}
In Table~1, we summarize the best known bounds on the covering radius of $RM(2,n)$ \cite{Carlet1,Carlet2,Cohen,Fourquet} for $8\leq n\leq 12$, showing in boldface the contributions of this paper.
\begin{table}[!t]
\caption{The best known bounds on the covering radius of $RM(2,n)$}
\begin{center}\begin{tabular}{|c|c|c|c|c|c|}
\hline
$n$ & 8 & 9 & 10 & 11 & 12 \\
\hline
lower bound & 84 & 196 & 400 & 848 & 1760 \\
\hline
upper bound & \textbf {96} & \textbf {216} & \textbf {460} & 956 & 1946 \\
\hline
\end{tabular}
\end{center}
\end{table}
\vspace{0.3cm}
\section{Conclusion}
In this paper, we prove that the covering radius of $RM(2,7)$ is 40, and find new upper bounds for $RM(2,n)$, $n=8,9,10$.
\section*{Acknowledgment}
The first author gratefully acknowledges the financial support of the National Natural Science Foundation of China (Grant 61572189).
{
"timestamp": "2018-09-14T02:07:31",
"yymm": "1809",
"arxiv_id": "1809.04864",
"language": "en",
"url": "https://arxiv.org/abs/1809.04864"
}
\section{Introduction}
The nuclear modification factor $\ensuremath{R_{AA}}$ compares the particle production in nucleus--nucleus ($AA$) reactions to that in pp collisions, scaled with the number of binary collisions ($\ensuremath{N_\mathrm{coll}}$) or the increased parton flux ($\ensuremath{T_{AA}}$) in nuclear collisions
\begin{equation}
\ensuremath{R_{AA}} = \frac{\ensuremath{\mathrm{d}} N_{X}^{AA}/\ensuremath{\mathrm{d}}\ensuremath{p_\mathrm{T}}}{\ensuremath{N_\mathrm{coll}} \cdot \ensuremath{\mathrm{d}} N_{X}^{pp}/\ensuremath{\mathrm{d}}\ensuremath{p_\mathrm{T}}} = \frac{\ensuremath{\mathrm{d}} N_{X}^{AA}/\ensuremath{\mathrm{d}}\ensuremath{p_\mathrm{T}}}{\ensuremath{T_{AA}} \cdot \ensuremath{\mathrm{d}} \sigma_{X}^{pp}/\ensuremath{\mathrm{d}}\ensuremath{p_\mathrm{T}}},
\end{equation}
where $X$ can be any reconstructed final state object: charged particles, specific hadrons, leptons, gauge bosons or jets.
In the absence of any strong medium effects in the initial and final state, $\ensuremath{R_{AA}}$ is expected to be unity for single particles in the $\ensuremath{p_\mathrm{T}}$ region where hard processes are the dominant source of particle production.
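As a toy numerical illustration of this definition (the numbers below are invented for the example and are not ALICE data), the computation of $\ensuremath{R_{AA}}$ from binned spectra is simply:
\begin{lstlisting}[language=python]
import numpy as np

# Per-event yields dN/dpT in four pT bins (made-up numbers).
dN_dpt_AA = np.array([120.0, 28.0, 7.0, 1.6])   # AA collisions
dN_dpt_pp = np.array([1.0, 0.2, 0.04, 0.008])   # pp reference
N_coll = 400.0                                  # binary collisions, central AA

R_AA = dN_dpt_AA / (N_coll * dN_dpt_pp)
print(R_AA)  # [0.3 0.35 0.4375 0.5]; values below 1 indicate suppression
\end{lstlisting}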
For reconstructed jets, the interpretation and comparison of $\ensuremath{R_{AA}}$ measurements is more involved.
In principle, a jet algorithm aims to recover the full kinematic information of the initial parton.
If this also holds in $AA$ collisions, one would expect an $\ensuremath{R_{AA}}$ of unity, i.e. the full energy is recovered and all medium modification is visible only in a change of the jet structure.
In practice, jet reconstruction in heavy ion collisions is hindered by the large soft background, unrelated to the initial hard scatterings.
This background can be subtracted using an event-wise average of the background density, and its impact is in general reduced for jets with smaller radii and larger constituent-$\ensuremath{p_\mathrm{T}}$ thresholds.
Different experimental choices on $\ensuremath{p_\mathrm{T}}$ thresholds and jet radii, together with given detector-specific methods for background corrections and input for jet reconstruction, make a direct comparison between different experiments difficult.
However, in jet $\ensuremath{R_{AA}}$ measurements a general trend is observed by all LHC experiments: the nuclear modification factor in central collisions appears to approach an asymptotic value of $\approx 0.5$ at high $\ensuremath{p_\mathrm{T}}$ similar to the measurements of single charged hadrons.
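The event-wise subtraction mentioned above amounts, for each jet, to removing the estimated background density $\rho$ times the jet area (a minimal sketch with invented numbers):
\begin{lstlisting}[language=python]
# Area-based background correction of a jet pT (illustrative values only).
pt_raw = 65.0   # GeV, raw jet pT including the soft background
rho = 140.0     # GeV per unit area, event-wise background density
area = 0.2      # catchment area of the jet
pt_corr = pt_raw - rho * area
print(pt_corr)  # 37.0 GeV
\end{lstlisting}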
The data presented here have been collected with the ALICE experiment during various runs of the Large Hadron Collider at CERN.
In particular, the precise measurement of charged-particle tracks at central (pseudo-)rapidity ($\left|\eta\right|< 0.9$) in a high-multiplicity environment and down to low $\ensuremath{p_\mathrm{T}} \approx 0.15$\,GeV allows for a detailed quantification of the jet structure and the characterisation of the underlying background~\cite{Abelev:2014ffa}.
\section{System Size Dependence and Centrality Biases}
\begin{figure}[th]
\unitlength\textwidth
\begin{picture}(1,0.39)
\put(0.1,0.00){\includegraphics[width=.3\textwidth]{pics/1805-04399_fig6}}
\put(0.5,0.00){\includegraphics[width=.4\textwidth]{pics/2018-Jun-01-R_PbPb_pPb_2018}}
\end{picture}
\caption{Left: Nuclear modification factor for charged particles in three $\ensuremath{p_\mathrm{T}}$ ranges for different colliding systems and energies \cite{Acharya:2018eaq}. Right: Nuclear modification in p-Pb\ as well as in peripheral and central Pb-Pb\ \cite{Acharya:2018qsh,ALICE:2012mj}.}
\label{fig8-10}
\end{figure}
In the measurement of single charged particles at high $\ensuremath{p_\mathrm{T}}$ the ALICE collaboration recently published the results for Pb-Pb\ collisions at the highest collision energy so far, $\ensuremath{\sqrt{s_\mathrm{NN}}} = 5.02$ TeV, with significantly improved systematic uncertainties and a $\ensuremath{p_\mathrm{T}}$ reach up to 50 GeV \cite{Acharya:2018qsh}.
The nuclear modification factor $\ensuremath{R_{AA}}$ shows only little variation from the measurements at $\ensuremath{\sqrt{s_\mathrm{NN}}} = 2.76$ TeV, which can be understood when considering that the expected larger parton energy loss is compensated by a harder parton spectrum by which the energy loss is filtered.
The harder spectrum at the LHC leads in general to an increased importance of the subleading fragments already in the single-particle spectrum compared to RHIC.
This directly affects the description of $\ensuremath{R_{AA}}$, where parton energy loss models beyond leading particles perform better at the LHC \cite{Abelev:2012hxa}.
In addition, it is already visible in the comparison of high-$\ensuremath{p_\mathrm{T}}$ identified particles in pp collisions at large $\sqrt{s}$ that leading-order and next-to-leading-order (LO and NLO) Monte Carlo models with parton showering provide a better description of the data than NLO perturbative QCD calculations with one-dimensional fragmentation functions \cite{Acharya:2017tlv} extracted at a lower collision energy.
\begin{figure}
\unitlength\textwidth
\begin{picture}(1,0.33)
\put(0.04,0.00){\includegraphics[width=.45\textwidth]{pics/180505212_fig2}}
\put(0.58,0.){\includegraphics[width=.33\textwidth]{pics/180505212_fig1}}
\end{picture}
\caption{Left: Nuclear modification factor for charged particles over the full centrality range \cite{Acharya:2018njl}. Right: Comparison of the apparent suppression to the expectation of the \textsc{Hg-Pythia} model \cite{Morsch:2017brb}.}
\label{fig11}
\end{figure}
The comparison of the charged particle production in Pb-Pb\ at $\ensuremath{\sqrt{s_\mathrm{NN}}} = 5.02$ TeV \cite{Acharya:2018qsh} to recent ALICE results obtained with lighter Xe-ions at similar energies ($\ensuremath{\sqrt{s_\mathrm{NN}}} = 5.44$ TeV, \cite{Acharya:2018eaq}) is important to disentangle effects of the collision geometry.
In particular this can be done by studying the nuclear modification factor in three distinct $\ensuremath{p_\mathrm{T}}$ ranges as shown in Fig.\,\ref{fig8-10} (left).
The lowest $\ensuremath{p_\mathrm{T}}$ range is dominated by effects induced by the collective expansion of the medium (flow).
In the intermediate range, pronounced differences between baryon and mesons have been observed, which could be explained by quark coalescence, while in the highest $\ensuremath{p_\mathrm{T}}$ range above 10\,GeV the effects of parton energy loss dominate.
Plotted as a function of the charged-particle multiplicity, used as a measure of the energy density, it is remarkable that all data points agree beyond $\ensuremath{\mathrm{d}} N/\ensuremath{\mathrm{d}}\eta \approx 500$.
Irrespective of the ion size and colliding energy, the driving mechanism in all three $\ensuremath{p_\mathrm{T}}$ ranges appears to be energy density, even though the ranges are dominated by different physics processes.
At low $\ensuremath{\mathrm{d}} N/\ensuremath{\mathrm{d}}\eta$ the nuclear modification factors for Xe and Pb data deviate.
At the same $\ensuremath{\mathrm{d}} N/\ensuremath{\mathrm{d}}\eta$, Xe-Xe\ collisions are more asymmetric, suggesting that the collision geometry now becomes an additional factor.
Interestingly, these deviations are similar in all three $\ensuremath{p_\mathrm{T}}$ ranges, which highlights the importance of geometry as a connection between the various processes involved in particle production in heavy ion collisions.
The search for the system size and/or energy density dependence of parton energy loss can also be viewed from a different perspective, when comparing peripheral Pb-Pb\ with p-Pb\ at similar $\ensuremath{\mathrm{d}} N/\ensuremath{\mathrm{d}}\eta$.
Indeed, as seen in Fig.\,\ref{fig11} (left), even for peripheral Pb-Pb\ reactions the nuclear modification factor at high $\ensuremath{p_\mathrm{T}}$ does not increase beyond 0.8 \cite{Acharya:2018njl}.
This apparent contradiction to the observation of $\ensuremath{R_{AA}} \approx 1$ in p-Pb\ in Fig.\,\ref{fig8-10} (right) can be explained by noting that the relevant scaling assumption is not the number of binary nucleon--nucleon collisions, but the number of \emph{hard} collisions.
This number depends on the probability for multiple parton interactions (MPI), which is not uniform across the nucleus as assumed in geometric Glauber models.
As discussed in \cite{Morsch:2017brb}, the average distance between two colliding nucleons is larger in peripheral collisions than in central ones, which leads to fewer MPI per $\ensuremath{N_\mathrm{coll}}$.
The geometry bias is further enhanced by the experimental centrality selection on multiplicity, which in turn increases with the number of MPIs.
These biases have been evaluated in a hybrid model, coupling the impact parameter dependent number of MPIs from \textsc{Hijing} with \textsc{Pythia}.
As shown in Fig.\,\ref{fig11} (right), the apparent suppression can be very well reproduced in this \textsc{Hg-Pythia}-Model, without the need to invoke any other initial or final state effects.
\begin{figure}
\unitlength\textwidth
\begin{picture}(1,0.39)
\put(0.03,0.01){\includegraphics[width=.55\textwidth]{pics/171205602_fig2}}
\put(0.58,0.){\includegraphics[width=.4\textwidth]{pics/171205602_fig4}}
\end{picture}
\caption{Left: Jet spectra opposite a trigger hadron of given $\ensuremath{p_\mathrm{T}}$, and the difference spectrum between them. Right: Comparison of difference spectra in p-Pb\ collisions with different event activity \cite{Acharya:2017okq}.}
\label{fig16a}
\end{figure}
An unbiased measure for parton energy loss, or redistribution of jet energy, has been introduced by the ALICE collaboration with the measurement of jet distributions with respect to a recoiling hadron \cite{Adam:2015doa}.
In these measurements a deliberate bias is put on the momentum transfer $Q^2$ in the reaction by requiring a certain hadron trigger $\ensuremath{p_\mathrm{T}}$.
The jet distribution is reconstructed opposite in $\varphi$ to this trigger and is shown as an example in
Fig.\,\ref{fig16a} (left): a clear hardening with increasing trigger $\ensuremath{p_\mathrm{T}}$ is observed.
Now the difference of the spectra can be compared for various multiplicities in p-Pb. Since this is a \emph{per-trigger} quantity (i.e. per hard collision) it is independent of the biases discussed above.
It is seen that the energy loss/spectrum shift in p-Pb\ is compatible with no effect and smaller than 0.4 GeV at 90\% confidence level \cite{Acharya:2017okq}.
\section{Differential Jet Observables}
The differential study of jet structure has recently seen the development of a wealth of observables,
well beyond cross-section ratios for different radii or transverse-momentum distributions of jet constituents.
One particular development is to exploit the widely used sequential recombination algorithms to \emph{decluster} a reconstructed jet and essentially rewind the QCD splitting history.
This can be done e.g. with the Cambridge--Aachen algorithm, which is based on angular proximity only. Each step in the procedure splits a cluster into two sub-clusters (sub-jets), separated by $\Delta R$ and carrying momentum fractions
$z$ and $1-z$, respectively.
The $z$ and $\Delta R$ values can be used to populate a \emph{Lund diagram}, which in principle maps the phase space of all splittings and allows for the isolation of different regions for medium effects.
Certain regions of interest can also be amplified or filtered by \emph{grooming} methods; e.g. the \emph{soft drop} \cite{Larkoski:2014wba} algorithm unwinds the jet, at each step following the harder branch (dropping the softer sub-jet), until
$z > z_\mathrm{cut} \Delta R^\beta$.
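The following toy sketch (our illustration; a real analysis would decluster jets built with the Cambridge--Aachen algorithm, e.g. in \textsc{FastJet}) shows this recursion on a hand-made splitting tree, with each node carrying the prong kinematics $(\ensuremath{p_\mathrm{T}}, \eta, \varphi)$:
\begin{lstlisting}[language=python]
import math

def delta_r(a, b):
    # Angular separation of two prongs in the (eta, phi) plane.
    dphi = (a["phi"] - b["phi"] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(a["eta"] - b["eta"], dphi)

def soft_drop(node, zcut=0.1, beta=0.0):
    # Walk down the declustering tree, dropping the softer prong until
    # z > zcut * dR**beta (as written in the text; beta = 0 gives z > zcut).
    while node.get("children"):
        a, b = node["children"]
        if a["pt"] < b["pt"]:
            a, b = b, a                 # a = harder prong
        z = b["pt"] / (a["pt"] + b["pt"])
        dr = delta_r(a, b)
        if z > zcut * dr ** beta:
            return z, dr                # first splitting passing the cut: z_g
        node = a                        # groom away the soft prong
    return None                         # no splitting passed soft drop

jet = {"pt": 100.0, "eta": 0.0, "phi": 0.0, "children": (
    {"pt": 95.0, "eta": 0.02, "phi": 0.0, "children": (
        {"pt": 60.0, "eta": 0.05, "phi": 0.1, "children": None},
        {"pt": 35.0, "eta": -0.03, "phi": -0.1, "children": None})},
    {"pt": 5.0, "eta": 0.3, "phi": 0.2, "children": None})}

print(soft_drop(jet))  # drops the z = 0.05 splitting, returns z_g ~ 0.37
\end{lstlisting}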
Preliminary results on the $z$ distribution of the first splitting identified with this procedure are shown in Fig.\,\ref{fig18} (left), using $z_\mathrm{cut} = 0.1, \beta = 0$ in central Pb-Pb\ collisions.
A constituent subtraction scheme has been applied to take into account the background (see also \cite{Andrews:2018wgw} and references therein).
The normalization to the total number of jets and the comparison to \textsc{Pythia} jets embedded into central Pb-Pb\ events reveals that there is no sign of enhanced large angle $(\Delta R > 0.1)$ splitting at any $z$.
Further studies in different $\Delta R$ regions \cite{Andrews:2018wgw} show an indication of an overall enhancement of collimated splittings and a suppression of large-angle splittings.
This behaviour is also visible in the Lund representation of the difference between Pb-Pb\ data splittings and \textsc{Pythia} embedded level splittings in Fig.\,\ref{fig18} (right).
\begin{figure}
\unitlength\textwidth
\begin{picture}(1,0.39)
\put(0.03,0.03){\includegraphics[width=.55\textwidth]{pics/2018-May-09-ResultJewelPbPbzg_80_120_delr10_FullRange.pdf}}
\put(0.6,0.){\includegraphics[width=.4\textwidth]{pics/2018-May-09-RecursiveLund_DiffData_SDCut1Split}}
\end{picture}
\caption{Left: Inclusive measurement of the $z_{g}$ distribution in central Pb-Pb\ collisions at $\ensuremath{\sqrt{s_\mathrm{NN}}} = 2.76$ TeV with a $\Delta R$ cut $> 0.1$. Right: Lund representation of the difference between Pb-Pb\ data splittings and \textsc{Pythia} embedded-level splittings in the 10\% most central collisions at $\ensuremath{\sqrt{s_\mathrm{NN}}}=2.76$ TeV \cite{Andrews:2018wgw}.}
\label{fig18}
\end{figure}
\section{Summary and Outlook}
In summary, the $\ensuremath{R_{AA}}$ observable alone cannot reflect the complexity of jet medium modification but is still highly popular.
A new detailed picture of the parton shower is provided by the Lund diagram and groomed jets have been used as a tool to select jet substructure.
Both methods show a suppression of large angle splittings and enhancement of collinear splittings compared to embedded \textsc{Pythia}.
These results, together with the wealth of ALICE hard-probes and jet measurements building on the inclusion of low-$\ensuremath{p_\mathrm{T}}$ constituents, provide an important benchmark for the test and development of Monte Carlo frameworks covering pp, p$A$ and $AA$ reactions, including the modeling of parton shower modification and the underlying event.
This contribution is dedicated to Oliver Busch (1976--2018), a dear friend and long-term collaborator who drove many jet analyses within ALICE and whose work will have a lasting impact on the jet programme to come.
\bibliographystyle{JHEP}
{
"timestamp": "2018-09-18T02:18:25",
"yymm": "1809",
"arxiv_id": "1809.04936",
"language": "en",
"url": "https://arxiv.org/abs/1809.04936"
}
\section{Introduction}
Finding the manipulator workspace, the set of obtainable 6-D poses given a fixed range of reachable parameters, is a well-studied subject, utilizing various techniques in rigid-body kinematics and dynamics; see e.g. \cite{AW14CeccarelliEclipse,AW15LeeYoung,AW1BinaryMapCastelli,AW2WorkSpaceSE3Jin,AW3MonteCarloWorkspacePeidro,AW4WorkspaceParallelAnnTanase,AW5WorkspaceParAnnGenKuzeci,AW6WorkspaceParaCampean,AW7WorkspaceParaAlp,AW8WorkspaceParWang,AW9RobotWorkspaceNumCao,AW10ParallelWrkspcOptHerrero2015,AW11SingMapParallelMacho,AW12HybridRobotModelPisla2013,AW13NonConvexWrkspcParallelHay2002}. The notion of workspace can be further developed into the following subdivisions:
\begin{itemize}
\item Reachable Workspace: the set of points the end effector can reach with at least one orientation
\item Total Orientation Workspace: the set of points the end effector can reach with all orientation angles in a given range
\item Dexterous Workspace: the set of points the end effector can reach with all orientation angles
\item Orientation Workspace: the set of orientations that result in a fixed end-effector location
\item Constant Orientation Workspace: the set of points the end effector can reach with a specific orientation
\end{itemize}
Thus, a central idea in the current approaches is the control of variables: since displaying $\mathbb{R}^{3}\times SO(3)$ is not possible, constraints are added to the $SO(3)$ part so that a graph over $\mathbb{R}^3$ can be drawn \cite{ParallelMerlet}. However, as pointed out by \cite{Liao}, a prevalent issue of the current algorithms is that they require specific software platforms and have poor time complexities. An analysis in the same work remarked that a MATLAB implementation yields a pathological $O(n^{36})$ complexity with numerical inverse kinematics. Thus, a natural solution comes from the field of pattern recognition and machine learning, where the problem of manipulator workspace can be reframed as a supervised learning problem and thus be learned by a deep neural network. In this work, we propose such a framework, seen as an improvement of the Subspace Learning algorithm in \cite{Liao}, where the full workspace of a serial-link manipulator can be generated by approximating its Jacobian matrix, if it exists, at a given pose.
\section{Background Information}
\subsection{Manipulator Jacobian}
The relationship between the change in joint angles and the change in the orientation and the coordinates of the end-effector is locally linear, which means that the spatial velocity of the end-effector can be approximated by applying a linear transformation to the rate of change of the joint angles. Consider a basic homogeneous transformation representation of a pose $\xi$. A first-order difference relationship exists between the angles that define the pose and the pose itself \cite{CorkeRobotics}:
\begin{equation}\label{poseDerivation}
\begin{split}
\frac{d\xi}{d\textit{\textbf{q}}} & \approx \frac{\xi( \textit{\textbf{q}} + \Delta \textit{\textbf{q}}) - \xi(\textit{\textbf{q}})}{\Delta \textit{\textbf{q}}}
\\ & \approx \frac{1}{\Delta \textit{\textbf{q}}}\begin{bmatrix}
R(\textit{\textbf{q}} + \Delta \textit{\textbf{q}}) - R(\textit{\textbf{q}}) & \begin{matrix}
\Delta x \\
\Delta y \\
\Delta z
\end{matrix} \\
0_{1 \times 3} & 0
\end{bmatrix}
\end{split}
\end{equation}
The first part of the analysis is to transform the upper-right block into the linear velocity of the end-effector. In this paper, a specific example is examined where the numerical linear velocity at a specific joint angle is calculated. Figure \ref{fig:mesh9} shows the Fanuc AM120iB/10L, an industrial 6-DOF serial-link manipulator created in \cite{CorkeRobotics} using standard Denavit--Hartenberg parameters. The robot's pose, an element of $SE(3)$, is currently $ \xi = \begin{bmatrix}
1 & 0 & 0 & 1.02 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & -1.06 \\
0 & 0 & 0 & 1
\end{bmatrix} $, with joint coordinates $\begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}$.
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{Ex9.pdf}
\caption{Fanuc AM120iB/10L}
\label{fig:mesh9}
\end{figure}
Now, consider moving the first joint of the robot by an infinitesimally small angle $dq$, approximated by the value $10^{-9}$. Applying Equation \eqref{poseDerivation} results in
\begin{equation}\label{positionDerivation}
\begin{split}
\frac{\partial \xi}{\partial q_1} \bigg\rvert_{\textit{\textbf{q}} = 0_{1 \times 6}} & \approx \frac {\mathcal{K}( \begin{bmatrix}
0_{1 \times 6}
\end{bmatrix} + \begin{bmatrix}
10^{-9} & 0_{1 \times 5}
\end{bmatrix} ) - \mathcal{K}(\begin{bmatrix}
0_{1 \times 6}
\end{bmatrix} ) } {10^{-9}} \\
& \approx \begin{bmatrix}
0 & -1 & 0 & 0 \\
1 & 1 & 0 & 1.02 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix}
\end{split}
\end{equation}
Equating elements from this matrix to those in Equation \eqref{poseDerivation} gives
\begin{equation}\label{posDe2}
\begin{bmatrix}
\Delta x \\
\Delta y \\
\Delta z
\end{bmatrix} = \begin{bmatrix}
0 \\
1.02 \\
0 \\
\end{bmatrix}\Delta q_1
\end{equation}
Finally, since velocity is the ratio of distance to time, dividing both sides by an infinitesimal change of time $\Delta t$ gives
\begin{equation}\label{posDe3}
\begin{bmatrix}
\dot{x} \\
\dot{y} \\
\dot{z}
\end{bmatrix} = \begin{bmatrix}
0 \\
1.02 \\
0 \\
\end{bmatrix}\dot{q_1}
\end{equation}
where $\dot{x}$ stands for the derivative of $x$ with respect to time. This conversion from the velocity of a single joint to the linear velocity of the end-effector is explainable: joint 1 rotates in a 2-D plane, and since its initial position is parallel to the x-axis, at the very first instant a change in its angle will only result in a rightward end-effector motion.
This process can be repeated for all joints, and once the contributions are added together, it demonstrates how each joint affects the end-effector linear velocity.
\begin{equation}\label{posDe4}
\begin{split}
& \frac{d\xi}{d\textit{\textbf{q}}} \bigg\rvert_{\textit{\textbf{q}} = 0_{1 \times 6}} \\ & = \frac{\partial \xi}{\partial q_1} + \frac{\partial \xi}{\partial q_2} + \dots + \frac{\partial \xi}{\partial q_6} \\
& = \begin{bmatrix}
R(\textit{\textbf{q}} + \Delta \textit{\textbf{q}}) - R(\textit{\textbf{q}}) & \begin{matrix}
-1.06q_2 + 1.06q_3 + 0.1q_5 \\
1.02q_1 \\
-0.87q_2+0.1q_3
\end{matrix} \\
0_{1 \times 3} & 0
\end{bmatrix}
\end{split}
\end{equation}
This implies
\begin{equation}\label{posDe5}
\begin{bmatrix}
\dot{x} \\
\dot{y} \\
\dot{z}
\end{bmatrix} = \begin{bmatrix}
-1.06\dot{q_2} + 1.06\dot{q_3} + 0.1\dot{q_5} \\
1.02\dot{q_1} \\
-0.87\dot{q_2}+0.1\dot{q_3}
\end{bmatrix}
\end{equation}
Although the analytic relationship between the joint-angle velocities and the end-effector linear velocity is still unknown, Equation \eqref{posDe5} shows that, at least when the arm just starts moving, the velocities of joints 1, 2, 3 and 5 dictate the movement of the end-effector. \par
For the angular velocity relationship, consider the $3\times 3$ sub-matrix in the upper left corner of Equation \eqref{poseDerivation}. Differentiating it results in
\begin{equation}\label{rotDe2}
\begin{split}
\frac{\partial \textit{\textbf{R}}}{\partial q_1} \bigg\rvert_{\textit{\textbf{q}} = 0_{1 \times 6}} & \approx \frac{ R(\textit{\textbf{q}} + \Delta \textit{\textbf{q}}) - R(\textit{\textbf{q}}) }{\Delta \textit{\textbf{q}}} \\
S(\bm{\mathit{\omega}})\textit{\textbf{R}} & \approx \frac{ R(\textit{\textbf{q}} + \Delta \textit{\textbf{q}}) - R(\textit{\textbf{q}}) }{\Delta \textit{\textbf{q}}} \dot{q_1} \\
S(\bm{\mathit{\omega}}) & \approx \frac{ R(\textit{\textbf{q}} + \Delta \textit{\textbf{q}}) - R(\textit{\textbf{q}}) }{\Delta \textit{\textbf{q}}} \textit{\textbf{R}}^{T} \dot{q_1} \\
\omega & \approx \text{vex} \begin{pmatrix}
\frac{ R(\textit{\textbf{q}} + \Delta \textit{\textbf{q}}) - R(\textit{\textbf{q}}) }{\Delta \textit{\textbf{q}}} \textit{\textbf{R}}^{T}
\end{pmatrix} \dot{q_1} \\
\begin{bmatrix}
\omega_x \\
\omega_y \\
\omega_z
\end{bmatrix} & = \begin{bmatrix}
0 \\ 0 \\ 1
\end{bmatrix} \dot{q_1}
\end{split}
\end{equation}
where $\text{vex}(\textit{\textbf{S}})$ denotes the inverse of the skew-symmetric operator, i.e. it recovers $\bm{\mathit{\omega}}$ from $S(\bm{\mathit{\omega}})$.
Finally, after establishing the relationships between the joint-angle velocities and the translational and angular velocities separately, we can arrive at a cumulative representation of this relationship: the manipulator Jacobian. Taking the derivative of the forward kinematics function, we obtain
\begin{equation}\label{jacobian}
\begin{split}
\frac{d \xi}{d\textit{\textbf{q}}} & = \frac{d}{d\textit{\textbf{q}}} \mathcal{K}(\textit{\textbf{q}})\\
\bm{\mathit{\nu}} & = J(\textit{\textbf{q}})\dot{\textit{\textbf{q}}}
\end{split}
\end{equation}
where
\begin{equation*}
\begin{split}
\bm{\mathit{\nu}} & = \begin{bmatrix} v_x & v_y & v_z & \omega_x & \omega_y & \omega_z \end{bmatrix} \\
J(\textbf{\textit{q}}) & \in \mathbb{R}^{6 \times N}
\end{split}
\end{equation*}
and $N$ is the number of degrees of freedom of the robot. A toy numerical illustration of this construction is sketched below.
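The following Python sketch is our illustration only; the planar arm and function names below are stand-ins, not the Robotics Toolbox API:
\begin{lstlisting}[language=python]
import numpy as np

def fkine(q, L=(1.0, 1.0)):
    # Toy stand-in for Manipulator.forwardKine: a 2-link planar arm,
    # returning (x, y, heading) instead of a full SE(3) matrix for brevity.
    return np.array([L[0]*np.cos(q[0]) + L[1]*np.cos(q[0] + q[1]),
                     L[0]*np.sin(q[0]) + L[1]*np.sin(q[0] + q[1]),
                     q[0] + q[1]])

def numerical_jacobian(f, q, dq=1e-9):
    # Column k of J is (f(q + dq*e_k) - f(q)) / dq, the same first-order
    # difference (with dq = 1e-9) used in the derivation above.
    q = np.asarray(q, dtype=float)
    f0 = f(q)
    J = np.zeros((len(f0), len(q)))
    for k in range(len(q)):
        qk = q.copy()
        qk[k] += dq
        J[:, k] = (f(qk) - f0) / dq
    return J

print(numerical_jacobian(fkine, [0.0, 0.0]))
# approximately [[0, 0], [2, 1], [1, 1]]: at q = 0 the first instant of
# motion is purely sideways, analogous to the 1.02*q1 term above.
\end{lstlisting}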
\subsection{Workspace Mapping Seen as a Supervised Learning Problem}
According to \cite{GoodfellowDeep}, a supervised learning problem can be seen as finding a random vector $\textbf{x}$'s corresponding value $\textbf{y}$ by estimating $p(\textbf{y}|\textbf{x})$. This work uses the following steps to transform the workspace mapping problem into a supervised learning problem \cite{Liao}:
\begin{itemize}
\item For a given all-revolute, serial-link manipulator, we first have a left-invertible forward kinematics function:
\begin{equation}\label{fkine}
\begin{split}
\mathcal{K}: \mathbb{S}^n \rightarrow \mathbb{R}^3 \times SO(3)
\end{split}
\end{equation}
where $\mathbb{S}$ is the set of all angles in the interval $[0, 2\pi)$. This maps a point in the parameter space of the manipulator to a pose in 6-D space. From an object-oriented programming perspective, for a \verb|Manipulator| object we would have two methods: \verb|Manipulator.forwardKine(q)|, which corresponds to $\mathcal{K}(\textit{\textbf{q}})$, and \verb|Manipulator.inverseKine(xi)|, which corresponds to $\mathcal{K}^{-1}(\xi)$ (we use $^{-1}$ here to denote the left inverse). These may be computed numerically or analytically; for quick reference, see the Appendices in \cite{Liao}. For a closer look into the details, one may refer to \cite{CorkeRobotics} or \cite{SicilianoRobotics}.
\item We then define the Jacobian mapping function
\begin{equation}\label{jacobmap}
\begin{split}
f: (\mathbb{R}^{m})^{n}\times \mathbb{R}^3 \times SO(3) \rightarrow \mathbb{R}^{6 \times n}
\end{split}
\end{equation}
that maps a manipulator and a corresponding pose to its corresponding Jacobian matrix, where $n$ is the number of links and $m$ is the number of parameters needed to describe a single link. \cite{Liao} provides a specific form of the above function, but this expression aims to provide a general form.
\item However, remark that $f$ is ill-defined because $\mathcal{K}$ is not right-invertible (a manipulator cannot reach all of space), and thus $f$ is not left-total.
\item To resolve this issue, we introduce an improved version of $f$ :
\begin{equation}\label{improvedJacob}
\begin{split}
f_1 &: (\mathbb{R}^{m})^{n}\times \mathbb{R}^3 \times SO(3) \rightarrow \begin{Bmatrix}0,1\end{Bmatrix} \\
f_2 &: (\mathbb{R}^{m})^{n}\times \begin{Bmatrix}0,1\end{Bmatrix} \rightarrow \mathbb{R}^{6 \times n}
\end{split}
\end{equation}
They are defined via:
\begin{enumerate}
\item $f_1(\textbf{m},\xi) = 1$ if $\exists \textbf{q} \in \mathbb{S}^n$ such that $\mathcal{K}(\textbf{q})=\xi$, denoted $P(\textbf{m},\xi)$; $f_1(\textbf{m},\xi) = 0$ if $\neg P(\textbf{m},\xi)$.
\item $f_2(\textbf{m},b) = \inf_{6 \times n}$ if $b=0$; otherwise, $f_2(\textbf{m},f_1(\textbf{m},\xi)) = f(\textbf{m},\xi)$.
\end{enumerate}
\item Design $\textbf{x}$ on an implementation basis: a range of scopes that corresponds to $p(\xi)$ and the ``manipulator distribution'' $p(\textbf{m})$, the latter usually uniform. By sampling from $p(\textbf{x})$ and evaluating $f_2(\textbf{m},f_1(\textbf{m},\xi))$ numerically, we have created a supervised learning problem (a sampling sketch follows below).
\end{itemize}
Implementation-wise, according to empirical-risk minimization \cite{GoodfellowDeep},
\begin{equation}
\mathbb{E}_{\textbf{x},\textbf{y}\sim \widehat{p}_{data} (\textbf{x},\textbf{y})} [L(f(\textbf{x};\mathbf{\theta}),\textbf{y})] = \frac{1}{m}\sum^{m}_{i=1} L(f(\textbf{x}^{(i)}; \mathbf{\theta}),\textbf{y}^{(i)})
\end{equation}
The above can be done in a discrete manner: a dataset is created by sampling $\textbf{x} \sim p(\textbf{x})$ and numerically computing the corresponding value of each sample, without having to worry about the analytic form of $p(\textbf{y}|\textbf{x})$.
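A minimal sketch of this discrete sampling scheme, using the toy planar-arm stand-ins from the previous sketch (all names, sizes and distributions here are illustrative, not the M3 defaults):
\begin{lstlisting}[language=python]
import numpy as np

def fkine(q, L):
    # Toy 2-link planar forward kinematics (same stand-in as above).
    return np.array([L[0]*np.cos(q[0]) + L[1]*np.cos(q[0] + q[1]),
                     L[0]*np.sin(q[0]) + L[1]*np.sin(q[0] + q[1]),
                     q[0] + q[1]])

def num_jac(f, q, dq=1e-9):
    # Finite-difference Jacobian, as in the previous sketch.
    q = np.array(q, dtype=float)
    f0 = f(q)
    cols = []
    for k in range(len(q)):
        qk = q.copy()
        qk[k] += dq
        cols.append((f(qk) - f0) / dq)
    return np.stack(cols, axis=1)

rng = np.random.default_rng(0)
features, labels = [], []
for _ in range(1000):
    L = rng.uniform(0.5, 1.5, size=2)        # sample m from a uniform p(m)
    q = rng.uniform(0.0, 2*np.pi, size=2)    # joint angles in S^n
    xi = fkine(q, L)                         # a pose known to be reachable: f1 = 1
    J = num_jac(lambda qq: fkine(qq, L), q)  # label y = f2(m, 1)
    features.append(np.concatenate([L, xi])) # x = (m, xi)
    labels.append(J.ravel())
X, Y = np.array(features), np.array(labels)
print(X.shape, Y.shape)  # (1000, 5) (1000, 6)
# Negative samples for f1 would instead be drawn by sampling xi directly
# from p(xi) and checking reachability.
\end{lstlisting}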
\section{Related Works}
There have been a number of attempts to apply deep learning to problems in manipulator robotics, either to obtain the workspace of robots or to achieve other goals. An obvious application is inverse kinematics itself, due to its non-linearity and the NP-completeness of optimal Jacobian accumulation \cite{NPJacobian}. While early approaches often focused on a particular model with a tailored network architecture \cite{IkineMLPGuez, IkineDEFAnetDaunicht, PathControlIkineAnnLou}, more recent studies have begun to generalize so that a deep network can solve the problem for a broader spectrum of kinematic structures \cite{Ikine6rBingul, IkineGenKoker, IkineAnnFeng}. Another fascinating field is task-learning for humanoid robots, where the arms are seen as kinematically redundant manipulators \cite{FeatFaceRobotManiFinn}. The goal is to learn basic arm actions described in a visual state representation from video feeds rather than from the pose of the desired object. The anthropomorphic nature of the problem is intriguing, and by utilizing spatial encoders, the tested robot is capable of performing actions such as scooping a bag of rice with a spatula.
\subsection{Improvements over Subspace Learning}
As a follow-up study of \cite{Liao}, this paper obtains incremental improvements in several aspects:
\begin{enumerate}
\item Beyond predicting the existence of an inverse kinematics solution, the new framework also offers an approximation of the Jacobian matrix at a given pose, which is useful for a spectrum of tasks such as resolved-rate motion control \cite{CorkeRobotics} and manipulability estimation \cite{manipulability}.
\item A Python implementation is provided with pretrained weights, making it much more portable than the previous MATLAB implementation.
\item More advanced neural architectures are used, including new activation functions \cite{PReLU}, batch normalization \cite{batchnorm}, and dropout \cite{dropout}, which help the model generalize better.
\item Newer gradient descent optimizers are tested on the framework, like RMSprop \cite{RMSprop}, Adagrad \cite{Adagrad}, Adam \cite{Adam,convAdam}, Nadam \cite{Nadam}, Adadelta \cite{Adadelta} and Adamax \cite{Adam}, all provided by Keras \cite{Keras}.
\end{enumerate}
\section{M3 Library and Dataset}
Along with the JacobianNet framework, we also release the Manipulability Map for Manipulators (M3) library and datasets, a MATLAB robotics library designed for deep-learning-related sampling of manipulator parameters and their corresponding outputs. The library functions are based on the code released with \cite{Liao}, with improvements in readability and extra features.
The main function in the library's API is \verb|NNsample.m|, with the following signature and options:
\begin{lstlisting}[language=octave]
function [features,labels]=NNsample(num,parallel,varargin)
opt.format='csv';
opt.variant='CGAN';
opt.mani='kine';
opt.DOF=6;
opt.type='spherical';
opt.r=0.5;
opt.dist='uniform';
opt.poseR=0.8;
opt.ikine='analytic';
opt.plim=50;
opt.testRatio=0.01;
opt.cutoff=0.03;
\end{lstlisting}
Currently, the library supports dataset generation for generative modeling, inverse kinematics, and dynamics studies; this can be tuned by setting \verb|opt.variant| to different values. The \verb|parallel| argument takes a boolean input, which determines whether parallel pools are used to accelerate computation. There are also other scripts in the library, some used by \verb|NNsample.m|, which might also be useful on their own:
\begin{itemize}
\item \verb|cell2csv.m| \cite{cell2csv}, which turns a cell array into a \verb|.csv| file.
\item \verb|threshold.m|, used mainly for generative modeling research, which determines if a sample is considered as belonging to the true distribution.
\item \verb|randPose.m|, which returns a random pose given scope parameters.
\item \verb|genWorkspace.m|, a canonical implementation of the discretized workspace algorithm (both for the constant orientation workspace and the orientation workspace).
\item \verb|randDyna.m| and \verb|randKine.m|, which return a randomly sampled manipulator object together with its vectorized representation.
\end{itemize}
All code is available at \url{https://github.com/liaopeiyuan/M3}, published under the MIT License and maintained by the Kent Artificial Intelligence Laboratory.
The dataset accompanying the library, which is used to train the networks proposed below, is available along with the models at \url{https://github.com/liaopeiyuan/Jacobian-Estimation}, and consists of the following files:
\begin{itemize}
\item \verb|conf_feature_test.csv|
\item \verb|conf_feature_train.csv|
\item \verb|conf_label_test.csv|
\item \verb|conf_label_train.csv|
\item \verb|conf_feature_dyna_test.csv|
\item \verb|conf_feature_dyna_train.csv|
\item \verb|conf_label_dyna_test.csv|
\item \verb|conf_label_dyna_train.csv|
\item \verb|jacob_feature_test.csv|
\item \verb|jacob_feature_train.csv|
\item \verb|jacob_label_test.csv|
\item \verb|jacob_label_train.csv|
\item \verb|jacob0_feature_test.csv|
\item \verb|jacob0_feature_train.csv|
\item \verb|jacob0_label_test.csv|
\item \verb|jacob0_label_train.csv|
\end{itemize}
Files with the \verb|train| suffix are used for the train/validation split, and files with the \verb|test| suffix for benchmarking results. Remark that files with the \verb|conf| prefix are used to train the confidence net, while files with the \verb|jacob| prefix are used to train the estimation nets. Files with the \verb|jacob| prefix record the Jacobian matrix in the end-effector frame, while files with the \verb|jacob0| prefix record the Jacobian matrix in the world frame. A sketch of loading one of the splits follows.
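For instance, one of the released splits can be loaded as follows (a sketch; we assume the CSV files are headerless, which may need adjusting):
\begin{lstlisting}[language=python]
import pandas as pd

# Column counts follow the preliminary data analysis tables below:
# 96 features and 36 targets per sample for the jacob files.
X_train = pd.read_csv("jacob_feature_train.csv", header=None)
y_train = pd.read_csv("jacob_label_train.csv", header=None)
X_test = pd.read_csv("jacob_feature_test.csv", header=None)
y_test = pd.read_csv("jacob_label_test.csv", header=None)
print(X_train.shape, y_train.shape)  # expected (198000, 96) (198000, 36)
\end{lstlisting}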
\begin{table}[h]
\caption{Preliminary Data Analysis: Part 1}
\centering
\begin{tabular}{c rrrrrrr}
\hline\hline
File Name& \# Targets & \# Features & \# Samples & Regres./Class. \\ [0.5ex]
\hline
\verb|conf_*_test| & 1 & 24 & 3000 & Classification \\
\verb|conf_*_train| & 1 & 24 & 297000 & Classification \\
\verb|conf_*_dyna_test| & 1 & 96 & 2000 & Classification\\
\verb|conf_*_dyna_train| & 1 & 96 & 198000 & Classification\\
\verb|jacob_*_test| & 36 & 96 & 2000 & Regression\\
\verb|jacob_*_train| & 36 & 96 & 198000 & Regression\\
\verb|jacob0_*_test| & 36 & 96 & 2000 & Regression\\
\verb|jacob0_*_train| & 36 & 96 & 198000 & Regression\\[1ex]
\hline
\end{tabular}
\label{tab:hresult}
\end{table}
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{distPos.pdf}
\caption{Ratio of positive samples versus negatives in the confidence net training samples, dynamics version}
\label{fig:mesh10}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{Tsne-1.pdf}
\caption{t-SNE plot of test input features for confidence net, stratified by $d3$, $d4$, $a2$, $a3$ respectively}
\label{fig:mesh10}
\end{figure}
\begin{table}[h]
\caption{Preliminary Data Analysis: Part 2 (features)}
\centering
\begin{tabular}{c rrr}
\hline\hline
File Name& 2,3,7,8 (DH params) & Constant Cols. \\ [0.5ex]
\hline
\verb|conf_*_test| &$[0,0.5]$ & $\begin{Bmatrix}0,1,4\sim6,9\sim17\end{Bmatrix}$\\
\verb|conf_*_train| &$[0,0.5]$ & $\begin{Bmatrix}0,1,4\sim6,9\sim17\end{Bmatrix}$\\
\verb|conf_*_dyna_test| &$[0,0.5]$ & $\begin{Bmatrix}0,1,4\sim6,9\sim17\end{Bmatrix}$\\
\verb|conf_*_dyna_train| &$[0,0.5]$ & $\begin{Bmatrix}0,1,4\sim6,9\sim17\end{Bmatrix}$\\
\verb|jacob_*_test| &$[0,0.5]$ & $\begin{Bmatrix}0,1,4\sim6,9\sim17\end{Bmatrix}$\\
\verb|jacob_*_train| &$[0,0.5]$ & $\begin{Bmatrix}0,1,4\sim6,9\sim17\end{Bmatrix}$ \\
\verb|jacob0_*_test| &$[0,0.5]$& $\begin{Bmatrix}0,1,4\sim6,9\sim17\end{Bmatrix}$\\
\verb|jacob0_*_train| &$[0,0.5]$& $\begin{Bmatrix}0,1,4\sim6,9\sim17\end{Bmatrix}$\\[1ex]
\hline
\end{tabular}
\label{tab:hresult2}
\end{table}
\begin{table}[h]
\caption{Preliminary Data Analysis: Part 3 (feature column range)}
\centering
\begin{tabular}{c rrrrr}
\hline\hline
File Name& 18$\sim$23 (m) & 24$\sim$41 (r) & 42$\sim$59 (I) \\ [0.5ex]
\hline
\verb|conf_*_dyna_test| &$[0,10]$ &$[-0.05,0.05]$ &$[0,1]$ \\
\verb|conf_*_dyna_train| &$[0,10]$ &$[-0.05,0.05]$ &$[0,1]$\\
\verb|jacob_*_test| &$[0,10]$ &$[-0.05,0.05]$ &$[0,1]$ \\
\verb|jacob_*_train| &$[0,10]$ &$[-0.05,0.05]$ &$[0,1]$ \\
\verb|jacob0_*_test| &$[0,10]$&$[-0.05,0.05]$&$[0,1]$ \\
\verb|jacob0_*_train| &$[0,10]$&$[-0.05,0.05]$ &$[0,1]$\\[1ex]
\hline
\end{tabular}
\label{tab:hresult3}
\end{table}
\begin{table}[h]
\caption{Preliminary Data Analysis: Part 4 (feature column range)}
\centering
\begin{tabular}{c rrrrr}
\hline\hline
File Name& 60$\sim$65 (B) & \multicolumn{2}{c}{66$\sim$77 (Tc)} & 78$\sim$83 (G) \\ [0.5ex]
\hline
\verb|conf_*_dyna_test| &$[0,0.005]$ &$[0,0.5]$ &$[-0.5,0]$&$[-50,50]$ \\
\verb|conf_*_dyna_train| &$[0,0.005]$ &$[0,0.5]$ &$[-0.5,0]$&$[-50,50]$\\
\verb|jacob_*_test| &$[0,0.005]$ &$[0,0.5]$ &$[-0.5,0]$ &$[-50,50]$\\
\verb|jacob_*_train| &$[0,0.005]$ &$[0,0.5]$ &$[-0.5,0]$&$[-50,50]$ \\
\verb|jacob0_*_test| &$[0,0.005]$&$[0,0.5]$&$[-0.5,0]$ &$[-50,50]$ \\
\verb|jacob0_*_train| &$[0,0.005]$&$[0,0.5]$ &$[-0.5,0]$&$[-50,50]$\\[1ex]
\hline
\end{tabular}
\label{tab:hresult4}
\end{table}
\begin{table}[h]
\caption{Preliminary Data Analysis: Part 5 (feature column range)}
\centering
\begin{tabular}{c rrrrr}
\hline\hline
File Name& 84$\sim$89 (Jm) & 90$\sim$92 ($\mathbb{R}^3$) & 93$\sim$95 ($\mathbb{S}^3$) \\ [0.5ex]
\hline
\verb|conf_*_dyna_test| &$[0,5\times10^{-4}]$ &$[-0.4,0.4]$ &$[0,2\pi]$ \\
\verb|conf_*_dyna_train| &$[0,5\times10^{-4}]$ &$[-0.4,0.4]$ &$[0,2\pi]$\\
\verb|jacob_*_test| &$[0,5\times10^{-4}]$ &$[-0.4,0.4]$ &$[0,2\pi]$ \\
\verb|jacob_*_train| &$[0,5\times10^{-4}]$ &$[-0.4,0.4]$ &$[0,2\pi]$ \\
\verb|jacob0_*_test| &$[0,5\times10^{-4}]$&$[-0.4,0.4]$&$[0,2\pi]$ \\
\verb|jacob0_*_train| &$[0,5\times10^{-4}]$&$[-0.4,0.4]$ &$[0,2\pi]$\\[1ex]
\hline
\end{tabular}
\label{tab:hresult5}
\end{table}
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{Tsne.pdf}
\caption{t-SNE plot of test Jacobian matrix (in world frame), stratified by $d3$, $d4$, $a2$, $a3$ respectively}
\label{fig:mesh10}
\end{figure}
\section{Network Architecture}
The confidence network serves the purpose of indicating whether a numerical solution for the Jacobian matrix can be estimated from the input representation. It consists of $8$ dense layers and is compiled with the binary cross-entropy loss:
\begin{equation}
L_{\text{bce}}(y, \hat{y}) = -\sum_i y_i \log \hat{y}_i
\end{equation}
The estimation network, as its name suggests, estimates a Jacobian matrix from its input representation. It likewise contains $8$ dense layers and outputs a $22\times1$ vector that is subsequently reshaped into a complete Jacobian matrix. Since the task is treated as regression, the network is compiled with the mean squared error loss:
\begin{equation}
L_{\text{mse}}(y, \hat{y}) = \frac{1}{n}\sum_{i=1}^n (y_i-\hat{y}_i)^2
\end{equation}
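To make the two heads concrete, the following is a minimal Keras-style sketch of the confidence and estimation networks. The layer width, activation, and framework are our own illustrative assumptions; only the depth ($8$ dense layers), the losses, and the output sizes follow the description above.
\begin{verbatim}
# Minimal sketch of the two heads (Keras). The layer width (256) and
# ReLU are illustrative assumptions; the depth, losses, and output
# sizes follow the text.
from tensorflow.keras import layers, models

def dense_stack(x, units=256, n_layers=8):
    for _ in range(n_layers):          # 8 dense layers, as described
        x = layers.Dense(units, activation="relu")(x)
    return x

encoded = layers.Input(shape=(2150,))  # encoder output (described below)

# Confidence head: scalar in [0, 1], binary cross-entropy loss.
confidence = layers.Dense(1, activation="sigmoid")(dense_stack(encoded))
conf_net = models.Model(encoded, confidence)
conf_net.compile(optimizer="adam", loss="binary_crossentropy")

# Estimation head: 22x1 vector, later reshaped into the Jacobian; MSE.
jacobian_vec = layers.Dense(22)(dense_stack(encoded))
est_net = models.Model(encoded, jacobian_vec)
est_net.compile(optimizer="adam", loss="mse")
\end{verbatim}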
As shown in Figure \ref{fig:archnew}, the confidence network and the estimation network share the same encoded representation produced by the encoder network. Since the function of the confidence network is to indicate whether a solution exists for the encoded representation, the activation of the estimation network depends on the result of the confidence network.
To elaborate, the estimation network is initiated and estimates the Jacobian matrix only if the confidence network indicates that a solution can be estimated from the encoded representation.
As shown in Figure \ref{fig:archnew} and described above, the combined model produces a Jacobian matrix only when the confidence network indicates that a solution is possible. We adopt this construction because it offers a few advantages:
\begin{enumerate}
\item \textit{Memory-Efficiency.} Since encoded representations deemed insolvable by the confidence net are never passed to the estimation net, no memory is spent storing estimation outputs that cannot yield a valid Jacobian matrix.
\item \textit{Full-Pipeline.} The combined model lets its users obtain a Jacobian matrix directly from certain features of a robot arm, making the manipulator Jacobian matrix available more conveniently and quickly.
\end{enumerate}
The input to the combined model is a tensor concatenated from two data segments---manipulator features and pose features---which is subsequently fed into the encoder network shown in $f_2$. The encoder network then encodes the input tensor into a $2150\times1$ vector representation that is used to estimate the resulting Jacobian matrix and the associated confidence.
Note that the confidence net is a neural network trained to determine whether a numerical solution exists for the encoded representation of a Jacobian matrix, based on a manually set threshold shown in Figure \ref{fig:archnew}. If the output of the confidence net is below the threshold, the representation is considered insolvable, discarded, and never examined by the estimation net.
Lastly, the Jacobian estimation net, originally proposed in \cite{Liao}, produces a vector from the input representation. The output vector is subsequently reshaped into the estimated Jacobian matrix representing the manipulability workspace of the robot arm.
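A minimal sketch of this gated inference is given below. The threshold value and the \verb|predict| interfaces are assumptions for illustration; only the gating logic itself follows the description above.
\begin{verbatim}
# Sketch of the gated inference. THRESHOLD and the predict() interfaces
# are illustrative assumptions; the gating logic follows the text.
THRESHOLD = 0.5  # manually set confidence threshold (assumed value)

def predict_jacobian(features, encoder, conf_net, est_net):
    z = encoder.predict(features)            # 2150-dim representation
    if conf_net.predict(z)[0, 0] < THRESHOLD:
        return None                          # insolvable: discarded,
                                             # never examined by est. net
    vec = est_net.predict(z)                 # 22x1 output vector
    return vec                               # caller reshapes into the
                                             # estimated Jacobian matrix
\end{verbatim}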
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{ArchNew.pdf}
\caption{JacobianNet Architecture}
\label{fig:archnew}
\end{figure}
\subsection{Training}
Three datasets are used to train the corresponding neural networks: the kinematics confidence dataset for the confidence network, the dynamics Jacobian estimation dataset for the estimation network, and the dynamics estimation/confidence dataset for the combined model.
In the encoder network, each of the last four dense layers and the output layer uses batch normalization\cite{batchnorm}, a PReLU\cite{PReLU} activation function, and a dropout\cite{dropout} layer with a dropout rate of $0.5$.
A similar structure of batch normalization, PReLU, and dropout is also applied to the first four layers of the estimation and confidence networks. For the training of the combined network, the Adam optimizer\cite{Adam} is employed with its default parameters ($\beta_1=0.9$, $\beta_2=0.999$, $\alpha=0.001$, $\epsilon=10^{-8}$), a batch size of 4096, and 25 epochs.
To prevent overfitting, a validation split is used to separate the dataset into a training set and a validation set. For the training of all the networks, the validation split is kept at $0.2$, meaning that $20\%$ of the dataset is held out as the validation set.
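This training configuration translates into a few lines of Keras. The stand-in model and data shapes below are illustrative assumptions; the optimizer parameters, batch size, epoch count, and validation split are those stated above.
\begin{verbatim}
# Training configuration as stated above; the model and data shapes
# are illustrative stand-ins.
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.optimizers import Adam

model = models.Sequential([layers.Input(shape=(96,)), layers.Dense(22)])
x = np.random.rand(8192, 96).astype("float32")
y = np.random.rand(8192, 22).astype("float32")

# Adam with default parameters, batch size 4096, 25 epochs,
# validation split 0.2 (20% of the data held out).
model.compile(optimizer=Adam(learning_rate=1e-3, beta_1=0.9,
                             beta_2=0.999, epsilon=1e-8), loss="mse")
model.fit(x, y, batch_size=4096, epochs=25, validation_split=0.2)
\end{verbatim}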
There are different models constructed for comparison:
\begin{enumerate}
\item \textit{Baseline Model.} The baseline model is constructed with the default Adam optimizer\cite{Adam} and mainly serves as a point of comparison for accuracy between the method of generating the manipulator Jacobian matrix proposed in this paper and other machine learning methods proposed for similar purposes. The speed of generating the manipulator Jacobian matrix is also compared between our method and traditional kinematic methods.
\item \textit{Comparison Models.} The comparison models are a set of models that use different gradient descent methods. We constructed this set to observe how different gradient step methods influence the training of the network and the overall accuracy of the combined model. The detailed comparison between the models is elaborated in later sections.
\end{enumerate}
\subsection{Data Augmentation}
To accelerate convergence, several augmentations are employed. All of our data augmentation functions are part of the Scikit-Learn library\cite{scikit-learn}.
When training each of the individual networks (encoder, confidence, etc.), input features are standardized to zero mean and unit variance using the function \verb|scale|. In addition, the target features of the estimation network are min-max scaled to the range $[-1,1]$. To reiterate, data augmentation is applied to the inputs of all three networks and to the outputs of the estimation network, aiming for faster convergence during training.
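With Scikit-Learn, the two transformations amount to the following sketch (the array shapes are illustrative):
\begin{verbatim}
# Input/target scaling with Scikit-Learn; array shapes are illustrative.
import numpy as np
from sklearn.preprocessing import scale, MinMaxScaler

X = np.random.rand(1000, 96)   # input features
Y = np.random.rand(1000, 22)   # estimation-net targets

X_scaled = scale(X)            # zero mean, unit variance per column
Y_scaled = MinMaxScaler(feature_range=(-1, 1)).fit_transform(Y)
\end{verbatim}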
\section{Experiments}
\subsection{Benchmarking Results}
According to Table \ref{tab:conf} and Table \ref{tab:est}, our proposed model offers the following advantages:
\begin{enumerate}
\item \textit{Greater Accuracy.} One goal of the combined model proposed in this paper is greater accuracy compared to other machine learning models aimed at similar problems.
\item \textit{Greater Speed.} Another characteristic of the proposed combined model is that, compared to traditional numerical/analytic inverse kinematics (\verb|ikine.m| / \verb|ikine6s.m| in RTB\cite{CorkeRobotics}) for solving the manipulator Jacobian, it produces results considerably faster.
\end{enumerate}
Specifically, JSC stands for the Jaccard similarity coefficient, MCC for the Matthews correlation coefficient, 0-1 for the 0-1 loss, and EVS for the explained variance score; the number in parentheses indicates the thresholding value applied to the continuous output.
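All of these classification metrics are available in Scikit-Learn; a small sketch with toy labels is given below for reference.
\begin{verbatim}
# Computing the reported metrics with Scikit-Learn (toy labels).
import numpy as np
from sklearn.metrics import (jaccard_score, zero_one_loss,
                             precision_score, recall_score,
                             matthews_corrcoef)

y_true = np.array([1, 0, 1, 1, 0, 1])
y_prob = np.array([0.9, 0.2, 0.8, 0.6, 0.4, 0.7])  # continuous output
y_pred = (y_prob >= 0.5).astype(int)   # thresholding value, e.g. NN(.5)

print("JSC :", jaccard_score(y_true, y_pred))
print("0-1 :", zero_one_loss(y_true, y_pred))
print("Prec:", precision_score(y_true, y_pred))
print("Rec :", recall_score(y_true, y_pred))
print("MCC :", matthews_corrcoef(y_true, y_pred))
\end{verbatim}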
\begin{table}[h]
\caption{Confidence net benchmark}
\centering
\begin{tabular}{c rrrrrr}
\hline\hline
Method & JSC & 0-1 & Prec. & Rec. & MCC & Avg.Time\\ [0.4ex]
\hline
NN(.5) & 0.983 & 0.017 & 0.97 & 0.99 & 0.967 & $2\times10^{-4}$s\\
NN(.25) & 0.98 & 0.02 & 0.96 & \textbf{1.00} & 0.961 & $2\times10^{-4}$s\\
NN(.75) & \textbf{0.985} & \textbf{0.015} & \textbf{0.98} & 0.99 & \textbf{0.970} & $2\times10^{-4}$s\\
SVM(RBF) & 0.930 & 0.070 & 0.92 & 0.95 & 0.861 & $5\times10^{-3}$s\\
SVM(Sigmoid) & 0.643 & 0.357 & 0.65 & 0.67 & 0.285 & $5\times10^{-3}$s\\
SVM(Lin. L2) & 0.751 & 0.249 & 0.76 & 0.77 & 0.502 & $5\times10^{-7}$s\\
LogisticReg. & 0.752 & 0.248 & 0.75 & 0.77 & 0.504 & $3\times10^{-7}$s\\
RidgeReg.(0.5) & 0.750 & 0.250 & 0.75 & 0.76 & 0.500 & $\mathbf{1\times10^{-7}}$s\\
Naive Bayes & 0.803 & 0.197 & 0.79 & 0.83 & 0.607 & $2\times10^{-7}$s\\
Gauss. Process & 0.865 & 0.135 & 0.86 & 0.88 & 0.729 & $3\times10^{-4}$s\\
Decision Tree & 0.879 & 0.121 & 0.88 & 0.88 & 0.757 & $4\times10^{-7}$s\\
Random Forest & 0.902 & 0.098 & 0.91 & 0.90 & 0.804 & $4\times10^{-5}$s\\
AdaBoosted DT & 0.849 & 0.151 & 0.84 & 0.87 & 0.698 & $7\times10^{-6}$s\\
XGBoost & 0.820 & 0.180 & 0.79 & 0.88 & 0.643 & $2\times10^{-5}$s\\
LightGBM & 0.952 & 0.047 & 0.94 & 0.97 & 0.906 & $3\times10^{-4}$s\\
\verb|ikine.m| & N/A & N/A & N/A & N/A & N/A & $0.7583$ s\\
\verb|ikine6s.m| & N/A & N/A & N/A & N/A & N/A & $0.0598$ s\\[1ex]
\hline
\end{tabular}
\label{tab:conf}
\end{table}
\begin{table}[htb]
\caption{Estimation net benchmark}
\centering
\begin{tabular}{c rrrrr}
\hline\hline
Method & MAE & MSE & EVS & R2 & Avg.Time\\ [0.4ex]
\hline
NN & 0.0658 & 0.0095 & 0.8799 & 0.8787 & $2\times10^{-4}$s \\
Lin.Reg. & 0.4106 & 0.2592 & 0.0794 & 0.0785 & $4\times10^{-7}$s \\
RidgeReg. & 0.4106 & 0.2592 & 0.0796 & 0.0787 & $4\times10^{-7}$s \\
Decision Tree & 0.5071 & 0.4556 & -0.5953 & -0.5962 & $6\times10^{-7}$s \\
Gauss. Process & 0.4905 & 0.3391 & 0.0 & -0.0012 & $8\times10^{-4}$s \\
k-NN & 0.4303 & 0.2997 & -0.0617 & -0.0627 & $0.0531$s \\
\verb|jacob0| (num.) & N/A & N/A & N/A & N/A & $0.6146$ s\\
\verb|jacob0| (ana.) & N/A & N/A & N/A & N/A & $0.0626$ s\\[1ex]
\hline
\end{tabular}
\label{tab:est}
\end{table}
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{conf_compare.pdf}
\caption{Comparison Between Training Gradient Curves Generated by Different Optimizers for Confidence Network}
\label{fig:conf_compare}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{est_compare.pdf}
\caption{Comparison Between Training Gradient Curves Generated by Different Optimizers for Estimation Network}
\label{fig:est_compare}
\end{figure}
We also compared the network's training performance across different gradient step methods. As demonstrated in Figure \ref{fig:conf_compare}, when training the confidence network, Stochastic Gradient Descent\cite{GoodfellowDeep} ($\theta = \theta - \eta \cdot \nabla_\theta J( \theta; x^{(i:i+n)}; y^{(i:i+n)})$) has the most stable gradient curve, without the dramatic rises and falls caused by the cycle model of training (freezing and unfreezing the parameters of the encoder). Given a training time of 100 epochs, the best and the worst results are obtained by the Adamax optimizer ($\theta_{t+1} = \theta_{t} - \dfrac{\eta}{\max(\beta_2 \cdot v_{t-1}, |g_t|)} \hat{m}_t$) and the Stochastic Gradient Descent optimizer respectively, although both eventually reach an acceptable final loss value. Other optimizers, such as:
\begin{itemize}
\item Adam\cite{Adam} $\theta_{t+1} = \theta_{t} - \dfrac{\eta}{\sqrt{\hat{v}_t} + \epsilon} \hat{m}_t$
\item Nadam\cite{Nadam} $\theta_{t+1} = \theta_{t} - \dfrac{\eta}{\sqrt{\hat{v}_t} + \epsilon} (\beta_1 \hat{m}_t + \dfrac{(1 - \beta_1) g_t}{1 - \beta^t_1})$
\item RMSprop\cite{RMSprop} $\theta_{t+1} = \theta_{t} - \dfrac{\eta}{\sqrt{E[g^2]_t + \epsilon}} g_{t}$
\end{itemize}
exhibit unstable gradient curves due to the training cycle and do not converge particularly well.
The benchmark across different optimizers for the training of the estimation network is drastically different from that of the confidence network, according to Figure \ref{fig:est_compare}. For the training of the estimation network, RMSprop\cite{RMSprop}, Adamax, and Adam\cite{Adam} all maintain enough gradient to eventually converge to an acceptable loss value, with RMSprop being the most stable. The optimizers that performed well in the training of the confidence network, including Nadam\cite{Nadam}, Adagrad\cite{Adagrad}, and Stochastic Gradient Descent\cite{stochasticgradientdescent}, performed poorly with the estimation network and did not converge after 80 epochs of training.
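A comparison of this kind can be reproduced with a loop like the following sketch; the stand-in model and data are illustrative assumptions, and only the set of optimizers mirrors the experiments above.
\begin{verbatim}
# Sketch of the optimizer comparison; model and data are stand-ins.
import numpy as np
from tensorflow.keras import layers, models, optimizers

opts = {"sgd": optimizers.SGD(), "adam": optimizers.Adam(),
        "adamax": optimizers.Adamax(), "nadam": optimizers.Nadam(),
        "rmsprop": optimizers.RMSprop(), "adagrad": optimizers.Adagrad()}

x = np.random.rand(2048, 96).astype("float32")
y = np.random.rand(2048, 22).astype("float32")

histories = {}
for name, opt in opts.items():
    m = models.Sequential([layers.Input(shape=(96,)),
                           layers.Dense(64, activation="relu"),
                           layers.Dense(22)])
    m.compile(optimizer=opt, loss="mse")
    h = m.fit(x, y, epochs=10, batch_size=256, verbose=0)
    histories[name] = h.history["loss"]   # one loss curve per optimizer
\end{verbatim}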
\subsection{Training Paradigms}
A specific training procedure is employed to ensure the greatest accuracy when estimating the resulting Jacobian. The training of the combined model starts with pre-training the encoder network on the estimation network. Since the encoder network essentially performs a regression, an $R^{2}$ value is calculated. However, because the decoder in the estimation network is not yet tuned at this point of the training, the $R^{2}$ value appears poor.
Then, in order to progress the training of the confidence network and the estimation network at the same time without either overfitting the encoded representation, we propose the \verb|cycle| model of training. A \verb|cycle| is defined as training the confidence network twice and the estimation network once, since the confidence network takes longer to converge than the estimation network.
To prevent the training of the confidence network from affecting the parameters of the already trained encoder network, so that the encoder network does not bias toward the result of the confidence network, we freeze the encoder during the second step of the training, where we fit the confidence network to the pre-trained encoder network. After this training, the $R^{2}$ value of the confidence network is evaluated, and the network is further fine-tuned and compiled.
Then, as we train the estimation network, the encoder network is unfrozen, and its parameters are adjusted along with those of the estimation network.
At the end, the combined model's $R^{2}$ value is evaluated. This cycled fashion of training the combined model produces the gradient curves observed in Figure \ref{fig:conf_history} and Figure \ref{fig:est_history}.
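In Keras-style pseudocode, one \verb|cycle| could look like the sketch below; the optimizers and batch size are assumptions, while the freeze/unfreeze pattern and the 2:1 training ratio follow the description above.
\begin{verbatim}
# One training cycle: confidence net twice (encoder frozen), then
# estimation net once (encoder unfrozen). Optimizers and batch size
# are illustrative assumptions.
def run_cycle(encoder, conf_model, est_model, x, y_conf, y_jac):
    encoder.trainable = False            # freeze the pre-trained encoder
    conf_model.compile(optimizer="adam", loss="binary_crossentropy")
    for _ in range(2):                   # confidence net trained twice
        conf_model.fit(x, y_conf, epochs=1, batch_size=4096, verbose=0)

    encoder.trainable = True             # unfreeze: encoder parameters
    est_model.compile(optimizer="adam", loss="mse")  # adjust with estimator
    est_model.fit(x, y_jac, epochs=1, batch_size=4096, verbose=0)
\end{verbatim}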
\begin{figure}[h]
\centering
\includegraphics[width=.5\textwidth]{conf_history.pdf}
\caption{Confidence Network Gradient Curve}
\label{fig:conf_history}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{est_history.pdf}
\caption{Estimation Network Gradient Curve}
\label{fig:est_history}
\end{figure}
\subsection{Simulation on PUMA560 Robotic Manipulator}
To demonstrate the effectiveness of the neural network, we used the visualization APIs of the Robotics Toolbox \cite{CorkeRobotics} to compare the ground truth (generated by M3 library functions) against the predicted outputs. Figure \ref{fig:p560} shows that the results vary only slightly.
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{p560_workspace.jpg}
\caption{Neural network prediction versus ground truth}
\label{fig:p560}
\end{figure}
\section{Future Work}
The paradigm of confidence--estimation network collaboration can be applied to other variants of the Jacobian estimation problem, such as estimating the Jacobian elements from joint angle configurations rather than poses, which could then be used for resolved rate motion control. Although the proposed framework is also suitable for this task, it still requires a forward kinematics function to obtain the end-effector pose from the joint angles, and further investigation is needed before a production-ready model can be created.
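For illustration, a single resolved-rate step using an estimated Jacobian might look like the following sketch; the function names \verb|fkine| and \verb|jacobian_net| and the use of a pseudo-inverse are our assumptions, not part of the proposed framework.
\begin{verbatim}
# Hypothetical resolved-rate step with an estimated Jacobian.
# fkine and jacobian_net are assumed callables, not library functions.
import numpy as np

def rrmc_step(q, nu, fkine, jacobian_net, dt=0.01):
    T = fkine(q)                    # forward kinematics still required
    J = jacobian_net(T)             # estimated Jacobian (world frame)
    q_dot = np.linalg.pinv(J) @ nu  # map task-space velocity to rates
    return q + dt * q_dot
\end{verbatim}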
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{rrmc.png}
\caption{Current resolved rate motion control scheme, provided by \cite{CorkeRobotics}. Note that the mapping is from joint angles $\textbf{q}$ to the Jacobian matrix in world frame $\textbf{J}$.}
\label{fig:rrmc}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{est_rrmc.png}
\caption{Application of the estimation net (trained on the world-frame dataset) in resolved rate motion control. The joint angles are first converted to the end-effector pose $\textbf{T}$, which is then fed into the neural network.}
\label{fig:est_rrmc}
\end{figure}
\section{Conclusion}
In this work, we propose an accurate deep-learning framework that can generate the full workspace of serial-link manipulators by estimating the Jacobian matrix (given a pose) and computing the confidence of the estimation. The architecture consists of an estimation network that approximates the Jacobian, either in the world frame or in the end-effector frame, and a confidence network that measures the confidence of the approximation. M3 (Manipulability Maps of Manipulators), a MATLAB robotics library based on Peter Corke's Robotics Toolbox, is also introduced and used to generate the datasets for the constructed neural networks. Results show that the proposed network is not only superior in runtime and portability compared to numerical inverse kinematics, but also more accurate than other machine learning alternatives.
\bibliographystyle{IEEEtran}