\subsection{Proposed Oscillator}
From the analysis of conventional ring oscillators in Section \ref{sec:conv} and the simulations shown later in this section (Fig. \ref{fig:line_senstivity}), we know that the current-starved fully differential structure has both better PSRR and CMRR.
To offer differential inputs and outputs and positive-feedback hysteresis, each stage of the conventional differential RO uses four inverters (i.e., $8$ transistors), not counting the current-starved transistors (Fig. \ref{fig:Delay_Stage}). Our novelty stems from the observation that only four transistors are necessary to realize oscillation, as shown in Fig. \ref{fig:prop_RO}(a). MP1 and MN2 form the pull-up and pull-down differential I/O pair, while MN3 and MP4 serve as their respective loads. At the same time, MN3 and MP4 constitute the simplest dynamic positive-feedback latch. Another way to interpret the proposed structure is as follows: MP1 and MN3 constitute a dynamic inverter, since $outn$ and $inp$ are almost the same signal (up to a phase shift); similarly, MP4 and MN2 constitute another dynamic inverter, since $outp$ and $inn$ are almost the same signal (up to a phase shift). Together, the two dynamic inverters form the regenerative cross-coupled pair that generates positive feedback and offers extra hysteresis phase shift. Every transistor acts as an active device for itself and as a load device for another transistor. The push-pull nature and transistor reuse make the cell compact and efficient.
However, one problem is that $inp$ and $inn$ are strictly complementary only once oscillation has been sustained, while $outn$ and $inn$ are complementary with a phase delay of $180^\circ/N$. If, for some reason, $inp$ and $inn$ (Fig. \ref{fig:prop_RO}(a)) start at the same initial value, say $0$ V, then $outp$ is guaranteed to be pulled up to $V_{dd}$, but $outn$ stays at its initial condition since both MN2 and MP4 are OFF. If the initial condition of $outn$ was $V_{dd}$, both outputs of this stage sit at a high voltage, and the oscillator gets locked in this state and cannot start oscillating. A start-up circuit is therefore necessary to guarantee robust oscillation.
An added NMOS, PMOS, or both (Fig. \ref{fig:prop_RO}(b)) can eliminate this possible stable state and serve as the start-up circuit. In the above example, when both $inn$ and $inp$ are at $0$ V, the added MP2 forces $outn$ to $V_{dd}$, making the outputs of this stage opposite in polarity and thus breaking the locked state early. From SPICE simulations, the structure with both NMOS and PMOS start-up devices, shown in Fig. \ref{fig:prop_RO}(b), gives the best frequency performance (see Section \ref{sec:discussion}) and is adopted in the rest of the paper.
\subsection{Frequency and Energy Dissipation}
The novelty of our design, reducing the number of transistors, should lead to an increased oscillation frequency per unit current. However, the charging currents also change, so the combined effect is not obvious. Intuitively, the proposed delay stage offers less charging and discharging current by removing MP3 and MN4, which cuts power consumption. To precisely compare the charging/discharging currents and their effect on the output frequency, we analyze the load capacitance and the charging-current model of a single stage in more detail and present a theoretical prediction.
First, the load capacitor of each stage for the conventional structure (Fig. \ref{fig:Delay_Stage}) is given by:
\begin{align}
\label{eq:c_conv}
{C_{L\_conv}} &= {C_{gdP1}} + {C_{gdN1}} + {C_{gdP3}} + {C_{gdN3}} \nonumber \\
&\qquad {} + {C_{gP4}} + {C_{gN4}} + 2{C_{g\_NextStage}} \nonumber \\
&= 4{C_{gd}} + 4{C_g} \nonumber \\
&= 8{C_{gd}} + 4{C_{gs}}.
\end{align}
In comparison, for the proposed structure (Fig. \ref{fig:prop_RO}(b)), it reduces to
\begin{align}
\label{eq:c_prop}
{C_{L\_prop}} &= {C_{gdP1}} + {C_{gdN1}} + {C_{gdN3}} \nonumber \\
&\qquad {} + {C_{gP4}} + 2{C_{g\_NextStage}} \nonumber \\
&= 3{C_{gd}} + 3{C_g} \nonumber \\
&= 6{C_{gd}} + 3{C_{gs}}.
\end{align}
Thus, ${C_{L\_prop}}=\tfrac{3}{4}{C_{L\_conv}}$.
It is therefore reasonable to expect that the load capacitor decreases by $25\%$ since the number of transistors of each stage decreases by the same amount.
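As a sanity check (a minimal Python sketch, not part of the design flow; the capacitance values are arbitrary placeholders), the $3/4$ ratio holds for any $C_{gd}$ and $C_{gs}$, because both totals share the common factor $2C_{gd}+C_{gs}$:

```python
def c_load(n_gd, n_g, c_gd, c_gs):
    """Total load capacitance from n_gd drain-overlap terms and n_g full
    gate terms, using C_g = C_gs + C_gd."""
    return n_gd * c_gd + n_g * (c_gs + c_gd)

c_gd, c_gs = 0.35, 1.0                 # arbitrary placeholder values
c_conv = c_load(4, 4, c_gd, c_gs)      # = 8*Cgd + 4*Cgs, per (eq:c_conv)
c_prop = c_load(3, 3, c_gd, c_gs)      # = 6*Cgd + 3*Cgs, per (eq:c_prop)
print(c_prop / c_conv)                 # 0.75 for any choice of Cgd, Cgs
```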
Next, we analyze the charging process at node $outp$ in Fig. \ref{fig:prop_RO}(b); the discharging process at node $outn$ is symmetric. Fig. \ref{fig:prop_RO}(c) illustrates the charging period, which is divided into four phases (A, B, C and D) according to the operation of the transistors. $V_{max}$ and $V_{min}$ bound the swing of the oscillation, which is less than rail-to-rail because of the current-starved structure. In our design, $V_{max}\approx 900$ mV, $V_{min}\approx 300$ mV, and $V_{cm}\approx 600$ mV. For simplicity, we assume that $V_{TN}=|{V_{TP}}|=V_{T}$, $\Delta V_{1}=\Delta V_{4}=50$ mV and $\Delta V_{2}=\Delta V_{3}=250$ mV, such that $\sum_{i=1}^{4}\Delta V_i=V_{max}-V_{min}=600$ mV.
\begin{table*}[htbp]
\centering
\caption{Comparison of the charging and discharging currents of each contributing transistor for a charging node in four phases, where $\beta=\mu_{0} C_{ox}W/L$. $\beta$ of the NMOS and PMOS devices is the same, assuming sizing chosen to nominally maximize the noise margin. Regions 0, 1 and 2 refer to the cut-off, linear and saturation regimes of MOSFET operation.}
\label{tab:i_comparison}
\begin{tabular}{llll}
\toprule[2pt]
\textbf{Phase} &\textbf{MP1} &\textbf{MP3} &\textbf{MN3} \\
\midrule[1pt]
\multirow{7}*{A} & region=2 & region=0 & region=1 \\
& $v_{sg} \uparrow : V_{max}-V_{cm} \rightarrow $ & & $v_{gs} \downarrow : V_{max} - V_{min} \rightarrow$\\
& \qquad \quad $V_{max}-V_{min}-50mV $ & & \qquad \quad $V_{max}-V_{min}-50mV $\\
& $v_{sd} \downarrow : V_{max}-V_{min} \rightarrow $ & & $v_{ds} \uparrow : 0 \rightarrow$\\
& \qquad \quad $V_{max}-V_{min}-50mV $ & & \qquad \quad $50mV $\\
& $I=1/2\times\beta(v_{sg}-V_{T})^\alpha \uparrow $ & & $I=\beta(v_{gs}-V_{T})^{\alpha/2}v_{ds} \uparrow$\\
& $\overline{I} \approx 0.02\beta$ & & $\overline{I} \approx 0.01\beta$\\
\midrule[1pt]
\multirow{7}*{B} & region=2 & region=0 & region=$1 \rightarrow 2$ \\
& $v_{sg} \uparrow : V_{max}-V_{min}-50mV \rightarrow $ & & $v_{gs} \downarrow : V_{max} - V_{min} -50mV\rightarrow$\\
& \qquad \quad $V_{max}-V_{min} $ & & \qquad \quad $V_{max}-V_{cm}$\\
& $v_{sd} \downarrow : V_{max}-V_{min}-50mV \rightarrow $ & & $v_{ds} \uparrow : 50mV \rightarrow$\\
& \qquad \quad $V_{max}-V_{cm}$ & & \qquad \quad $V_{cm}-V_{min}$\\
& $I=1/2\times\beta(v_{sg}-V_{T})^\alpha \uparrow $ & & $I=\beta(v_{gs}-V_{T})^{\alpha/2}v_{ds} \uparrow$\\
& $\overline{I} \approx 0.07\beta$ & & $\overline{I} \approx 0.01\beta$\\
\midrule[1pt]
\multirow{7}*{C} & region=$2 \rightarrow 1$ & region=2 & region=0 \\
& $v_{sg}(max) : V_{max}-V_{min} $ & $v_{sg} \uparrow : V_{max} - V_{cm}\rightarrow$ &\\
& \qquad \quad & \qquad \quad $V_{max}-V_{min}-50mV$ &\\
& $v_{sd} \downarrow : V_{max}-V_{cm} \rightarrow $ & $v_{ds} \uparrow : V_{max}-V_{cm} \rightarrow$ &\\
& \qquad \quad $50mV$ & \qquad \quad $50mV$ &\\
& $I=1/2\times\beta(v_{sg}-V_{T})^\alpha \uparrow $ & $I=1/2\times\beta(v_{sg}-V_{T})^\alpha \uparrow$ &\\
& $\overline{I} \approx 0.07\beta$ & $\overline{I} \approx 0.02\beta$ &\\
\midrule[1pt]
\multirow{7}*{D} & region=1 & region=1 & region=0 \\
& $v_{sg}(max) : V_{max}-V_{min} $ & $v_{sg} \uparrow : V_{max} - V_{cm}\rightarrow$ &\\
& \qquad \quad & \qquad \quad $V_{max}-V_{min}-50mV$ &\\
& $v_{sd} \downarrow : V_{max}-V_{cm} \rightarrow $ & $v_{ds} \uparrow : V_{max}-V_{cm} \rightarrow$ &\\
& \qquad \quad $50mV$ & \qquad \quad $50mV$ &\\
& $I=\beta(v_{sg}-V_{T})^{\alpha/2}v_{sd} \uparrow $ & $I=\beta(v_{sg}-V_{T})^{\alpha/2}v_{sd} \uparrow$ &\\
& $\overline{I} \approx 0.02\beta$ & $\overline{I} \approx 0.02\beta$ &\\
\bottomrule[2pt]
\end{tabular}
\end{table*}
\begin{figure}[!t]
\centerline
{\includegraphics[width=0.45\textwidth]{i_f_sim_theory.pdf}}
\caption{Simulation results of oscillation frequency of both conventional and proposed $4$ stage fully differential RO, along with the theoretical prediction for the proposed structure. As expected, the proposed RO has higher frequency for the same current.}\label{fig:i-f_sim}
\end{figure}
\begin{figure}[!t]
\centerline
{\includegraphics[width=0.45\textwidth]{line_sensitivity2.pdf}}
\caption{Comparison of the sensitivity of four different CCO topologies to power supply and temperature: pseudo-differential, current-starved single-ended, conventional and proposed current-starved differential CCO. The proposed and conventional current-starved differential designs have similar sensitivity, which is much lower than that of the pseudo-differential or current-starved single-ended architectures.}\label{fig:line_senstivity}
\end{figure}
The propagation delay of one stage is the sum of the contributions of the four phases. For the conventional structure,
\begin{align}
\label{eq:td_conv}
t_{d\_conv} = \frac{C_{L\_conv} \Delta V_1}{I_{1\_conv}} + \frac{C_{L\_conv} \Delta V_2}{I_{2\_conv}} \nonumber\\
+ \frac{C_{L\_conv} \Delta V_3}{I_{3\_conv}} + \frac{C_{L\_conv} \Delta V_4}{I_{4\_conv}}.
\end{align}
For the proposed structure,
\begin{align}
\label{eq:td_prop}
t_{d\_prop} = \frac{C_{L\_prop} \Delta V_1}{I_{1\_prop}} + \frac{C_{L\_prop} \Delta V_2}{I_{2\_prop}} \nonumber\\
+ \frac{C_{L\_prop} \Delta V_3}{I_{3\_prop}} + \frac{C_{L\_prop} \Delta V_4}{I_{4\_prop}},
\end{align}
where $I_{i\_conv}$ and $I_{i\_prop}$ represent the charging currents in the four phases of the conventional and proposed stage respectively, given by
\begin{gather}
I_{i\_conv}=I_{MP1}+I_{MP3}-I_{MN3}, \quad i=1,2,3,4.\\
I_{i\_prop}=I_{MP1}-I_{MN3}, \quad i=1,2,3,4.
\end{gather}
The equations and approximated average values of $I_{MP1}$, $I_{MP3}$ and $I_{MN3}$ in all phases are listed in Table \ref{tab:i_comparison}, where the charging process is divided into the four phases A, B, C and D as mentioned earlier. The regions of operation of the MOSFET are referred to as 0, 1 and 2 for cut-off, linear and saturation respectively.
The drain current of short-channel MOSFETs is assumed to follow the widely used alpha-power law \cite{alpha_power_1}, \cite{alpha_power_2}. The carrier velocity saturation coefficient $\alpha$ is between $1.2$ and $1.5$ for sub-micron CMOS technologies.
Then, from the above equations and the estimated values in the table, the propagation delay of the proposed RO relative to the conventional one is $t_{d\_prop} \approx 86.5\%\, t_{d\_conv}$. Thus, the ratio of the frequencies is:
\begin{align}
\label{eq:f_relation}
\frac{f_{prop}}{f_{conv}} = 1.156.
\end{align}
This means that, for the same input current, the frequency of the proposed RO increases by $15.6\%$ compared with the conventional structure. From equation \ref{eq:f}, the average current relationship is:
\begin{align}
\label{eq:i_relation}
\frac{I_{prop}}{I_{conv}} = \frac{f_{prop}C_{L\_prop}}{f_{conv}C_{L\_conv}} = 0.867.
\end{align}
This implies that the average current of the proposed RO decreases by $13.3\%$ compared to the conventional structure.
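These ratios can be reproduced from the phase-averaged currents in Table \ref{tab:i_comparison}; the short Python sketch below factors out $\beta$ and uses the $\Delta V_i$ values defined earlier (small differences from the quoted percentages stem from rounding the table entries):

```python
# Phase-average currents in units of beta, read from Table tab:i_comparison.
dV = [50, 250, 250, 50]              # Delta V_1 .. Delta V_4 in mV
i_mp1 = [0.02, 0.07, 0.07, 0.02]
i_mp3 = [0.00, 0.00, 0.02, 0.02]     # MP3 is cut off in phases A and B
i_mn3 = [0.01, 0.01, 0.00, 0.00]     # MN3 is cut off in phases C and D

# Delay sums C * dV / I per phase; the common C_conv and beta cancel in ratios.
t_conv = sum(v / (a + b - c) for v, a, b, c in zip(dV, i_mp1, i_mp3, i_mn3))
t_prop = 0.75 * sum(v / (a - c) for v, a, c in zip(dV, i_mp1, i_mn3))

t_ratio = t_prop / t_conv   # ~0.87: proposed stage is faster
f_ratio = 1 / t_ratio       # ~1.15: higher frequency at the same current
i_ratio = f_ratio * 0.75    # ~0.87: lower average current, per (eq:i_relation)
```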
To verify these theoretical models, we simulated the current-frequency (I-F) transfer curves of the conventional and proposed $4$-stage ROs in SPICE using $65$ nm CMOS models. Fig. \ref{fig:i-f_sim} shows the simulation results together with the theoretical prediction of the output frequency. As predicted, the proposed oscillator indeed produces a higher frequency than the conventional one for the same input current. However, the theoretical values are slightly greater than the simulated ones because of the inaccuracy in estimating the current of transistor MP3.
\subsection{Robustness and Jitter}
While the increase in oscillation frequency is welcome, it is not useful if the new oscillator degrades other key metrics such as robustness and jitter. We first evaluate the robustness of the proposed oscillator against the conventional one through simulations. The simulated line sensitivity ($\%/V$) and temperature sensitivity ($\%/T$) of four types of RO-CCO are shown in Fig. \ref{fig:line_senstivity}. As expected, the differential current-starved structures, both proposed and conventional, perform better ($\approx 18\%$ line sensitivity and $<0.014\%$ temperature sensitivity) than the non-current-starved pseudo-differential RO ($\approx 139\%$ line sensitivity and $\approx 0.14\%$ temperature sensitivity). Among current-starved structures, differential is better than single-ended ($\approx 61\%$ line sensitivity and $\approx 0.06\%$ temperature sensitivity), as expected. The line and temperature sensitivities of the proposed differential structure are not degraded compared to the conventional differential structure. The relatively high sensitivity to the power supply is traced back to the tail current sources coming out of saturation at supply voltages below $0.9$ V; this can be solved by reducing the current range. Confined to a $1$--$1.2$ V supply, the sensitivity is only around $1.5\%$.
Following \cite{Abidi_jitter}, the variance of the period jitter of a RO can be expressed as:
\begin{align}
\label{eq:jitter_Abidi}
\sigma_{\tau} ^2 = \frac{kT}{If_{0}} \left(\frac{2}{V_{DD}-V_t} (\gamma_N + \gamma_P) + \frac{2}{V_{DD}} \right),
\end{align}
where
\begin{align}
\label{eq:f0_Abidi}
f_0 \approx \frac{I/C}{NV_{DD}},
\end{align}
$N$ is the number of delay stages, and $\gamma_N$ and $\gamma_P$ are technology-dependent noise factors for the NMOS and PMOS devices respectively.
Thus,
\begin{align}
\label{eq:jitter_propto}
\sigma_{\tau} ^2 \propto \frac{C}{I ^2}.
\end{align}
In our implementation, the load capacitance and the average current decrease by $25\%$ and $13.3\%$ respectively. Hence,
\begin{align}
\label{eq:jitter}
\frac{\sigma_{Prop} ^2}{\sigma_{Conv} ^2} = \frac{(1-0.25)}{(1-0.133)^2} \approx 1.
\end{align}
So, the jitter of the proposed CCO is expected to be approximately the same as that of the conventional structure. This has been confirmed in simulations and measurements; we present these results in the next section.
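The arithmetic in (\ref{eq:jitter}) is easily checked with a one-line computation:

```python
# sigma_tau^2 scales as C / I^2 per (eq:jitter_propto): a 25% capacitance
# reduction and a 13.3% average-current reduction nearly cancel.
c_scale = 1 - 0.25            # load capacitance ratio C_prop / C_conv
i_scale = 1 - 0.133           # average current ratio I_prop / I_conv
jitter_ratio = c_scale / i_scale**2
assert abs(jitter_ratio - 1) < 0.01   # jitter essentially unchanged
```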
\subsection{Frequency to Digital Conversion}
For use within a neural network system, the raw frequencies of the CCO often need to be converted to a digital word. The easiest method is to pass the CCO output as a clock to a counter \cite{elm_enyi}. However, this method incurs a large conversion time ($\propto 2^N$) and a concomitantly large conversion energy. A different approach, following \cite{elmpuf_tcas1}, is used in our work, as described next.
\begin{figure}[!t]
\centerline
{\includegraphics[width=0.475\textwidth]{counter_diagram.pdf}}
\caption{(a) Time to digital converter (TDC) composed of a four-stage differential CCO and two separate counters for coarse and fine conversion. (b) The sequence diagrams of the Gray code counter and the phase code counter, with a $45^\circ$ phase shift between adjacent phases, and the corresponding 3-bit complementary phase code for one oscillation cycle.}
\label{counter_diagram}
\end{figure}
\begin{figure}[!htb]
\centerline
{\includegraphics[width=0.4\textwidth]{die_board.pdf}}
\caption{(a) Die photo and (b) test set-up photo of the fabricated IC in $65$ nm CMOS. The test board stacks with an FPGA board in charge of data transfer to and from the PC.}\label{fig:die}
\end{figure}
Fig. \ref{counter_diagram} shows the diagram of the time to digital converter (TDC) with a four-stage differential CCO and the sequential counters. Although the four-stage differential CCO outputs eight phases in total, only half carry unique phase information; the rest are exactly the inverses of the unique phases and offer the same information. So only four phases, with a $45^\circ$ phase shift between adjacent ones as shown in Fig. \ref{counter_diagram}(a), are used to generate the phase code counter (PC-CNT). Only $360^\circ /45^\circ =8$ codes are generated by the four-stage differential CCO, so the PC-CNT, acting as the fine counter, provides only 3 bits. The sequence diagrams of the four chosen outputs and the corresponding complementary phase codes are illustrated in Fig. \ref{counter_diagram}(b). The inverse of the last PC-CNT phase is chosen to clock a Gray code counter (GC-CNT) serving as the coarse counter. The number of bits of the GC-CNT depends on the input current range, while the number of bits in the PC-CNT determines the accuracy of the converter. A Gray code counter is selected for higher reliability in neural networks with tightly packed layouts.
To enable wide dynamic range testing, a 12-bit GC-CNT is used in our design but the bit width can be optimized in neural network applications.
Both the phase code counter and the Gray code counter are energy efficient and reliable, since only one bit trips per state transition. Regarding the number of CCO stages, more stages ($\times M$) offer more valid output phases ($\times M$) and thus more bits in the PC-CNT, but lower the frequency and thus reduce the number of bits in the GC-CNT. Overall, the total number of bits is constant, but more CCO stages incur more area overhead ($\times M$). Assuming the same input current in both cases, the tail current of the CCO core is constant and hence the CCO energy per conversion is constant. The energy dissipated in the counter is lower with more CCO stages, but it is much smaller than the CCO energy dissipation. Hence, given the need for a small CCO footprint, a $4$-stage CCO core is adopted in our work.
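The single-bit-per-transition property that both counters rely on can be illustrated with a standard binary-reflected Gray code (a generic sketch, not the gate-level encoder used on chip):

```python
# Binary-reflected Gray code: successive codes differ in exactly one bit,
# which is the property that makes the coarse/fine counters glitch-tolerant.
def gray(n):
    return n ^ (n >> 1)

codes = [gray(n) for n in range(8)]    # one code per 45-degree phase step
transitions = [bin(a ^ b).count("1")
               for a, b in zip(codes, codes[1:] + codes[:1])]
assert all(t == 1 for t in transitions)  # one bit trips, incl. wrap-around
```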
\section{Introduction}
\label{sec:intro}
\input{body/01_introduction.tex}
\section{Conventional RO-CCO Review}
\label{sec:conv}
\input{body/02_conventional.tex}
\section{Proposed Structure and Theoretical Analysis}
\label{sec:prop}
\input{body/03_proposed.tex}
\section{Measurement Results}
\label{sec:results}
\input{body/04_results.tex}
\section{Discussion}
\label{sec:discussion}
\subsection{Neural Network Simulation}
\label{sec:NNSIM}
\input{body/apx_NN_Sim.tex}
\subsection{CCO as Spiking Neuron}
\label{sec:CCO_SNN}
\input{body/apx_snn}
\subsection{Startup Circuit}
\label{sec:CCO_Startup}
\input{body/apx_Startup.tex}
\section{Conclusion}
\label{sec:conclusion}
\input{body/05_conclusion.tex}
\Urlmuskip=0mu plus 1mu\relax
\bibliographystyle{IEEEtran}
\section{Introduction}
Communication in the Millimeter-Wave (mmWave) band is the new frontier for next-generation wireless systems \cite{mag}-\cite{I3}. To provide a sufficient link budget for these systems, large antenna arrays are required to enable directional precoding \cite{I3}-\cite{Sarieddeen2019}. Thanks to the small carrier wavelength at mmWave frequencies, multiple antenna elements can be packaged onto a small chip, possibly with other RF components \cite{repa}. The densely packed antenna elements, however, introduce new challenges for these systems. Blockages of comparable size, e.g., dirt, water droplets, and ice, can completely (or partially) block mmWave signals incident on one or more antenna elements. Manufacturing imperfections can also lead to antenna element failure. Antenna element blockage or failure randomizes the array's geometry and, as a result, distorts its radiation pattern and causes uncertainties in the mmWave channel \cite{m0}. Therefore, it is crucial to design remote array diagnosis techniques that continuously monitor the performance of mmWave antenna arrays and minimize the effects of antenna element failures.
Several techniques based on sparse signal recovery have recently emerged in the literature to remotely diagnose antenna arrays in a fast and reliable manner \cite{m0}-\cite{de}. These techniques formulate fault detection as a sparse signal recovery problem in compressed sensing. Specifically, a sparse \textit{difference response} vector is generated by subtracting the response of a reference fault-free antenna array from the response of a potentially faulty antenna array, commonly known as the \textit{Array-Under-Test} (AUT). Using this sparse difference response vector, sparse signal recovery algorithms, see e.g. \cite{cs1}-\cite{cs4}, are then applied to recover the identity of the faulty antenna elements. We refer to such techniques as \textit{difference based} techniques in this paper. Other techniques adopt a deep learning approach to diagnose mmWave antenna arrays \cite{ml1} \cite{ml2}. These techniques apply machine learning algorithms to identify faulty antenna elements by measuring distortions in the far-field radiation pattern. Despite their excellent performance, the above referenced diagnosis techniques require full and perfect channel state information (CSI) to generate and update the response of the reference fault-free antenna array in a timely manner. This is challenging, since perfect CSI estimation depends on many factors, e.g., link quality, number of scatterers, estimation errors, etc., and the faulty array itself distorts the communication channel estimate \cite{d5}. To overcome this limitation, it is crucial that new array diagnosis techniques be designed to be independent of prior communication channel knowledge.
In this paper, we propose a new technique for remote array diagnosis. The proposed technique only requires knowledge of the set of all possible \textit{Angles-of-Arrival} (AoAs) the diagnostic signals take, and does not require full channel knowledge. The idea is to design the combining vector (or antenna weights) at the AUT to null diagnostic signals from all incident AoAs. In the presence of antenna faults, the receive beam pattern is distorted, and diagnostic signals cannot be nulled. These received (or leaked) diagnostic signals are exploited to formulate the diagnosis problem as sparse signal recovery in compressed sensing. As we will show in Section III, this technique enables antenna fault detection with just a few diagnostic measurements. The main contributions of this paper can be summarized as follows: (i) We present a new array diagnosis formulation that takes the effect of the communication channel into account; prior work assumes perfect knowledge of the far-field beam pattern and neglects the communication channel. (ii) We present a novel array diagnosis technique that relaxes the need for full channel knowledge. To the best of our knowledge, this is the first paper to propose an array diagnosis technique that requires only partial channel knowledge.
The remainder of this paper is organized as follows. In Section II, we formulate the mmWave antenna array diagnosis problem in the presence of multipath. In Section III, we present the proposed array diagnosis technique. In Section IV, we present some numerical results and conclude our work in Section V.
\section{Problem Formulation}
We consider a transceiver equipped with a uniform linear antenna array which consists of $N$ equally spaced elements and $S \ll N$ possibly faulty elements. A fault is defined as any impairment that causes an antenna element to function abnormally. A fault can result from either physical blockage of an antenna element or circuit failure. While a linear array is adopted in this paper for simplicity, other antenna structures can be equally adopted.
To initiate antenna diagnosis, a probe is used to transmit $M$ diagnosis symbols to the transceiver with the AUT. In the absence of antenna faults, the $m$th received diagnosis symbol can be written as
\begin{eqnarray} \label{y1}
y_m = \mathbf{w}_m^*\mathbf{h} s_m + z_m,
\end{eqnarray}
where the entries of the combining vector $\mathbf{w}_m\in\mathcal{C}^{N\times 1}$ are the complex receive antenna weights for the $m$th measurement, $\mathbf{h}$ is the mmWave channel between the transceiver and the probe, $s_m$ is the $m$th transmitted diagnosis symbol, and $z_m \sim \mathcal{CN}(0,\sigma^2)$ is the additive noise at the transceiver. A geometric channel model with $L$ scatterers is adopted in this paper \cite{I3} \cite{ahmed} \cite{rap}. Under this model, the channel can be expressed as
\begin{eqnarray}\label{channelk}
\mathbf{h} = \sqrt{\frac{N} {L}} \sum_{\ell=1}^L \alpha_{\ell} {\mathbf{a}}(\theta_{\ell}),
\end{eqnarray}
where $\alpha_{\ell} \sim \mathcal{CN} (0,1)$ is the complex gain of the $\ell$th path, $\theta_{\ell}$ is the AoA of the $\ell$th path, and the vector ${\mathbf{a}}(\theta_{\ell})$ is the transceiver's antenna array response corresponding to the AoA $\theta_{\ell}$. For simplicity, we set $s_m=1$ in (\ref{y1}) and omit it from the subsequent analysis.
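As an illustration, the channel model in (\ref{channelk}) can be sketched numerically for a half-wavelength-spaced ULA; the path gains and AoAs below are randomly drawn placeholders, not measured values:

```python
import numpy as np

def ula_response(theta, N):
    """ULA array response a(theta), half-wavelength spacing, 1/sqrt(N) norm."""
    n = np.arange(N)
    return np.exp(1j * np.pi * n * np.sin(theta)) / np.sqrt(N)

rng = np.random.default_rng(0)
N, L = 32, 3
# Complex path gains alpha_l ~ CN(0, 1) and random AoAs (placeholders).
alphas = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
thetas = rng.uniform(-np.pi / 2, np.pi / 2, L)
# h = sqrt(N/L) * sum_l alpha_l a(theta_l), per (eq. channelk).
h = np.sqrt(N / L) * sum(a * ula_response(t, N) for a, t in zip(alphas, thetas))
```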
In the presence of antenna faults, the received diagnosis symbol becomes
\begin{eqnarray} \label{y2}
\hat{y}_m &=& \mathbf{w}_m^*{\mathbf{Bh}} + z_m \\ \label{y3}
&=& \mathbf{w}_m^*\hat{\mathbf{h}} + z_m
\end{eqnarray}
where $\hat{\mathbf{h}} = {\mathbf{Bh}} $ is the equivalent mmWave channel. The entries of the diagonal matrix $\mathbf{B} \in \mathcal{C}^{N\times N}$ are
\begin{equation}\label{efbp1}
\text{B}_{n,n} = \left\{
\begin{array}{ll}
\alpha_n, & \hbox{ if the $n$th antenna element is faulty} \\
1, & \hbox{ otherwise, } \\
\end{array}
\right.
\end{equation}
where $\alpha_n = \kappa_n e^{j\Phi_n}$, $0 \leq \kappa_{n} \le 1$ and $0 \leq \Phi_{n} \leq 2\pi$. A value of $\kappa_{n} = 0$ represents maximum blockage (complete failure), and $\Phi_{n}$ captures the phase shift caused by the fault at the $n$th antenna element. The diagonal matrix $\mathbf{B}$ captures failures that can result from the internal circuitry of the antenna element itself, or from external blockages caused by, for example, weather. From equations (\ref{y2}) and (\ref{y3}), we observe that faults modify the antenna array manifold and cause uncertainty in the mmWave channel.
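A minimal numerical sketch of the fault model in (\ref{efbp1}) follows; the fault indices and the $(\kappa_n, \Phi_n)$ values are illustrative placeholders:

```python
import numpy as np

N = 32
faulty = np.array([3, 17])            # hypothetical faulty element indices
kappa = np.array([0.0, 0.4])          # kappa = 0: complete failure
phi = np.array([0.0, 2.1])            # phase shift of each fault
b = np.ones(N, dtype=complex)
b[faulty] = kappa * np.exp(1j * phi)  # B_{n,n} = kappa_n e^{j Phi_n} at faults
B = np.diag(b)

h = np.ones(N, dtype=complex)         # stand-in channel vector
h_e = B @ h - h                       # channel error: sparse, only at faults
assert sorted(np.flatnonzero(h_e)) == [3, 17]
```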
To locate the faulty antenna elements, prior work proposed several techniques based on subtracting the $M$ received diagnosis symbols in (\ref{y3}) from the $M$ ideal (fault-free) diagnosis symbols in (\ref{y1}). The ideal diagnosis symbols in (\ref{y1}) can be generated offline if the channel $\mathbf{h}$ is fully known at the receiver. This subtraction results in the following difference vector $\mathbf{y}_\text{d} \in \mathcal{C}^{M\times 1}$
\begin{eqnarray} \label{yd}
\mathbf{y}_\text{d} &=& \mathbf{y} - \hat{\mathbf{y}} \\ \label{yd2}
&=& \mathbf{W}^*{\mathbf{h}}-\mathbf{W}^*\hat{\mathbf{h}} + \mathbf{z} \\ \label{yd3}
&=& \mathbf{W}^*{\mathbf{h}}_\text{d} + \mathbf{z}.
\end{eqnarray}
In (\ref{yd2}) and (\ref{yd3}), the matrix $\mathbf{W} = [\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_M]$, $\mathbf{z}$ is the additive noise vector, and the vector ${\mathbf{h}}_\text{d}$ is sparse, with the non-zero entries corresponding to the identities of the faulty antenna elements. Applying any sparse recovery technique, see e.g. \cite{cs1}-\cite{cs4}, ${\mathbf{h}}_\text{d}$ can be recovered with overwhelming probability from $\mathbf{y}_\text{d}$ and $\mathbf{W}$, provided that the number of diagnosis symbols satisfies $M> 2S \log{N}$.
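The difference-based pipeline can be sketched end to end in a noise-free toy example; the measurement matrix (taken here as the first $M$ columns of a DFT for determinism), fault indices, and fault values are illustrative choices, and the recovery uses a generic orthogonal matching pursuit rather than any specific algorithm from \cite{cs1}-\cite{cs4}:

```python
import numpy as np

def omp(Phi, y, S):
    """Recover an S-sparse x from y = Phi @ x by orthogonal matching pursuit."""
    support, residual = [], y.copy()
    for _ in range(S):
        support.append(int(np.argmax(np.abs(Phi.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1], dtype=complex)
    x[support] = coef
    return x

N, S = 32, 2
M = 15                                     # M > 2 S log N ~ 13.9 measurements
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
W = F[:, :M]                               # combining (measurement) matrix
h_d = np.zeros(N, dtype=complex)
h_d[[5, 20]] = [1.0, 1.0j]                 # hypothetical fault signature
y_d = W.conj().T @ h_d                     # difference measurements, (eq. yd3)
h_rec = omp(W.conj().T, y_d, S)
assert np.allclose(h_rec, h_d, atol=1e-8)  # faults located at indices 5 and 20
```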
However, the requirement of perfect channel knowledge is not practical and poses a great challenge for difference-based techniques. Perfect channel knowledge might not be readily available in practice, and, as shown in \cite{d5}, acquiring a perfect channel estimate is not possible with faulty antenna hardware.
In the following section, we propose a new approach to identify the faulty antenna elements. The proposed approach relaxes the need for full channel knowledge and only requires the set of all possible angles of arrival. The receive angles of arrival can be easily obtained by, for example, the beam training techniques outlined in \cite{bt1}-\cite{bt4} and references therein, or provided by the fingerprinting techniques outlined in \cite{F1}-\cite{F3} and references therein.
\section{Antenna Fault Detection at the Receiver}
In this section, we mathematically formulate and outline the proposed antenna fault detection technique. We assume that the receiver is equipped only with knowledge of its angles of arrival $\theta_{\ell} \in \Theta$, where $\Theta$ is the set of all possible AoAs. The receiver has no knowledge of the complex path gains nor of their corresponding delays (if any). To mathematically formulate the problem, we first rewrite (\ref{y3}) as
\begin{eqnarray} \label{yp1}
\hat{y}_m = \mathbf{w}_m^*(\mathbf{h} + \mathbf{h}_\text{e}) + z_m,
\end{eqnarray}
where $\mathbf{h}_\text{e}$ is the error in the mmWave channel caused by the faulty receive antennas. Observe that $\mathbf{h}_\text{e}$ is sparse, with the non-zero elements corresponding to the fault locations. If the AoAs are quantized to $N$ points, the channel in (\ref{yp1}) can be expressed in matrix form as
\begin{eqnarray} \label{yp2}
\hat{y}_m &=& \mathbf{w}_m^*(\mathbf{A} + \mathbf{A}_\text{e})\mathbf{x} + z_m\\ \label{yp21}
&=& \mathbf{w}_m^*\mathbf{Ax} + \mathbf{w}_m^*\mathbf{A}_\text{e}\mathbf{x} + z_m,
\end{eqnarray}
where the matrix $\mathbf{A}$ is the DFT matrix, with its $i$th column corresponding to the array response of the $i$th quantized AoA. The $L$ non-zero entries of the sparse vector $\mathbf{x}$ correspond to the complex gains of the $L$ paths. The matrix $\mathbf{A}_\text{e}$ is row sparse, with its non-zero row entries corresponding to the error imposed by the faulty antenna elements.
As the objective of this paper is to detect antenna faults (the second term in (\ref{yp21})), it is imperative that the weights $\mathbf{w}_m$ are designed to be in the null-space of the column vectors of the DFT matrix $\mathbf{A}$ that correspond to the $L$ AoAs. There are two main ways to achieve this. If the AoAs are quantized to $N$ points, one can exploit the orthogonality property of the DFT matrix $\mathbf{A}$ in (\ref{yp2}) and select, as the receive beamforming (or measurement) weights, the columns that do not correspond to the $L$ AoAs, i.e. $\mathbf{w}_m \in [\mathbf{A}]_{:,m}, m \not= l$. If the AoAs are not quantized, the vector $\mathbf{w}_m$ needs to be orthogonal to all AoAs. Exploiting the large antenna dimensions available in mmWave systems, one can generate $M$ receive antenna weights (or beam vectors) that are orthogonal to the array responses corresponding to the directions in $\Theta$. To achieve this, let the matrix $\mathbf{D} = [{\mathbf{a}}(\theta_{1}), {\mathbf{a}}(\theta_{2}),..., {\mathbf{a}}(\theta_{L})]$ contain the array response vectors that correspond to the $L$ AoAs in $\Theta$. Using the Householder transformation \cite{householder}, the orthogonal beam matrix $\mathbf{Q} \in \mathcal{C}^{N\times N}$ can be obtained as follows
\begin{eqnarray} \label{HH}
\mathbf{Q} = \mathbf{I} - \mathbf{D}(\mathbf{D}^*\mathbf{D})^{-1}\mathbf{D}^*.
\end{eqnarray}
The combining matrix ${\mathbf{W}}$ is then formed by selecting $M$ columns from the matrix $\mathbf{Q}$.
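The construction in (\ref{HH}) and the subsequent column selection can be sketched numerically. The following is a minimal illustration, assuming a uniform linear array with half-wavelength spacing and illustrative AoA values; it is not the paper's simulation code:

```python
import numpy as np

def array_response(N, theta):
    """Response of an N-element half-wavelength ULA to angle theta (radians)."""
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta)) / np.sqrt(N)

def combining_matrix(N, aoas, M, seed=0):
    """M beams orthogonal to all array responses in `aoas`."""
    D = np.column_stack([array_response(N, t) for t in aoas])
    # Projector onto the orthogonal complement of span(D)
    Q = np.eye(N) - D @ np.linalg.inv(D.conj().T @ D) @ D.conj().T
    cols = np.random.default_rng(seed).choice(N, size=M, replace=False)
    return Q[:, cols]

N, M = 128, 35
aoas = [0.1, 0.5, -0.3]  # illustrative AoAs in radians
W = combining_matrix(N, aoas, M)
D = np.column_stack([array_response(N, t) for t in aoas])
# Every selected beam is (numerically) orthogonal to every AoA response
assert np.max(np.abs(W.conj().T @ D)) < 1e-10
```

Since $\mathbf{Q}\mathbf{D} = \mathbf{0}$ by construction, any selection of columns of $\mathbf{Q}$ yields beams that null the $L$ incident directions.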
The combining matrix $\mathbf{W}$ is then used to receive the $M$ diagnosis symbols as follows
\begin{eqnarray} \label{yp3}
\hat{\mathbf{y}} &=& \mathbf{W}^*\mathbf{Ax} + \mathbf{W}^*\mathbf{A}_\text{e}\mathbf{x} + \mathbf{z}\\ \label{yp31}
&=& \mathbf{W}^*\mathbf{A}_\text{e}\mathbf{x} + \tilde{\mathbf{z}}+ \mathbf{z}\\ \label{yp32}
&=& \mathbf{W}^*\mathbf{h}_\text{e} + \tilde{\mathbf{z}} + \mathbf{z}.
\end{eqnarray}
Note that, as the columns of $\mathbf{W}$ are orthogonal to the columns of $\mathbf{A}$ corresponding to the $L$ AoAs, the first term in (\ref{yp3}) cancels out. The interference vector $\tilde{\mathbf{z}}$ accounts for the interference that arises when $\mathbf{W}^*$ and $\mathbf{Ax}$ are not orthogonal. This situation could arise due to imperfect channel estimates at the receiver. As the error vector $\mathbf{h}_\text{e}$ is sparse, with non-zero entries corresponding to the faulty antenna elements, the compressed sensing techniques outlined in \cite{cs1}-\cite{cs4} can be used to recover $\mathbf{h}_\text{e}$ from $\hat{\mathbf{y}}$ and $\mathbf{W}$ as follows:
\begin{eqnarray}
\nonumber \min && ||\tilde{\mathbf{h}}_\text{e} ||_1 \\
\nonumber \text{s.t.} &&|| \hat{\mathbf{y}} - \mathbf{W}^*\tilde{\mathbf{h}}_\text{e} ||_2 \leq \epsilon.
\end{eqnarray}
For simplicity, we employ the orthogonal matching pursuit (OMP) algorithm \cite{cs4} to solve the above optimization problem. The non-zero entries of the recovered vector $\tilde{\mathbf{h}}_\text{e} \in \mathcal{C}^{N \times 1}$ correspond to the identity of the faulty antenna elements.
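As a concrete illustration, a bare-bones OMP can be sketched on a noiseless toy instance. The Gaussian measurement matrix below is only a stand-in for $\mathbf{W}^*$, and all sizes are illustrative rather than taken from the simulations:

```python
import numpy as np

def omp(Phi, y, S):
    """Orthogonal matching pursuit: greedy recovery of an S-sparse x from y = Phi x."""
    x = np.zeros(Phi.shape[1], dtype=complex)
    support, residual = [], y.astype(complex)
    for _ in range(S):
        # Column most correlated with the residual joins the support
        idx = int(np.argmax(np.abs(Phi.conj().T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit all active coefficients jointly by least squares
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x[:] = 0
        x[support] = coef
        residual = y - Phi[:, support] @ coef
    return x

rng = np.random.default_rng(0)
N, M, S = 128, 35, 3                      # array size, measurements, faults
Phi = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2 * M)
h_e = np.zeros(N, dtype=complex)
faults = rng.choice(N, size=S, replace=False)
h_e[faults] = np.exp(2j * np.pi * rng.random(S))  # unit-modulus fault errors
h_hat = omp(Phi, Phi @ h_e, S)
detected = np.flatnonzero(np.abs(h_hat) > 1e-6)   # candidate faulty elements
```

With $M > 2S\log{N}$ measurements, `detected` coincides with the planted fault set with overwhelming probability; in a noisy setting the loop would instead stop once the residual norm drops below $\epsilon$.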
\begin{figure}[t!]
\centering
\includegraphics[width=270pt]{figs/fig1.eps}
\caption{Probability of successful fault detection versus the number of diagnostic measurements for different numbers of faults $S$ and a single channel path. The probability of successful detection increases with the number of measurements.} \label{fig1}
\end{figure}
As outlined, the proposed technique only requires knowledge of incident AoAs, and not the full channel knowledge, to recover the identity of the faulty antenna elements. This partial channel requirement reduces the implementation complexity of remote array diagnostic techniques. This, however, comes at the expense of additional complexity in the design of the receive antenna weights.
\begin{figure}[t!]
\centering
\includegraphics[width=270pt]{figs/p_vs_del2.eps}
\caption{Probability of successful fault detection versus the AoA estimation error $\Delta \theta$ for $S=6$ random faults, $M=35$ measurements and a single channel path. The proposed technique is less sensitive to estimation errors than the difference based technique, which requires full channel knowledge.} \label{fig2}
\end{figure}
\section{Numerical Results and Discussions}
In this section, we conduct numerical simulations to evaluate the efficacy of the proposed technique. We consider a receiver with a uniform linear array with half wavelength separation and $S$ faulty antenna elements. We adopt the blockage and channel model presented in Section II. To generate complete antenna element blockages, randomly selected $S$ diagonal entries in the blockage matrix $\mathbf{B}$ in (\ref{y2}) are set to zero. To generate partial blockages, $S$ diagonal entries in the blockage matrix $\mathbf{B}$ are set to have a random phase shift and amplitude (see (\ref{efbp1})). We adopt the probability of success $\text{P}_\text{success} $, i.e. the probability that all faulty antennas are detected, as a performance measure to quantify the error in detecting the faulty antenna locations. This probability is defined as
\[
\text{P}_\text{success} = \text{Pr} ({\mathcal{I}_S = \mathcal{\hat{I}}}_S),
\]
where the entries of the set $\mathcal{I}_S$ represent the \textit{true} identities of the faulty antennas and the entries of the set $\hat{\mathcal{I}}_S$ represent the identities of the \textit{detected} faulty antennas. For benchmarking purposes, we compare the probability of success achieved by the proposed technique with that achieved by the difference based technique proposed in \cite{m0}. In all simulations, we set $N = 128$ antennas and consider both single and multi-path channel cases.
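Empirically, this probability is simply the fraction of Monte Carlo trials in which the detected fault set matches the true one exactly; a trivial sketch, where the index sets are made up for illustration:

```python
def success_probability(true_supports, detected_supports):
    """Empirical P_success: fraction of trials in which every fault is identified."""
    pairs = list(zip(true_supports, detected_supports))
    return sum(set(t) == set(d) for t, d in pairs) / len(pairs)

# Three trials: exact detection in two of them, one antenna misidentified in the third
p = success_probability([{3, 17}, {5, 90}, {2, 64}],
                        [{3, 17}, {5, 90}, {2, 63}])
```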
For the single path case, the transmitting probe is situated so as to correspond to one of the $N=128$ quantized AoAs in the DFT matrix $\mathbf{A}$ (see (\ref{yp2})). The receive antenna weights are selected from the columns of $\mathbf{A}$ that do not correspond to the receiver's quantized AoA. The performance of the proposed technique for this scenario is illustrated in Fig. \ref{fig1} and Fig. \ref{fig2}.
In Fig. \ref{fig1}, we plot the probability of success versus the number of measurements (or diagnosis time) for different numbers of antenna faults. Fig. \ref{fig1} shows that the proposed technique is able to successfully detect antenna faults without additional diagnostic measurements when compared to difference based techniques. This is achieved without the need for prior knowledge of the receiver's channel gain (or path-loss).
In Fig. \ref{fig2}, we study the effect of AoA estimation errors on the performance of the proposed technique. Specifically, we plot the probability of success versus the AoA estimation error when the array is subjected to both complete and partial blockages. In the presence of partial blockages, Fig. \ref{fig2} shows that both the proposed and the difference based techniques experience a slight loss in detection performance when compared to complete blockages. This is mainly because the magnitudes of the entries of the error vector $\mathbf{h}_\text{e}$ in (\ref{yp32}) are smaller than they are under complete blockages, which effectively reduces the detection capability in the presence of noise. Fig. \ref{fig2} also shows that both the proposed and the difference based technique are sensitive to AoA estimation errors. Nonetheless, the proposed technique is superior in the sense that it can tolerate significantly higher AoA errors.
\begin{figure}[t!]
\centering
\includegraphics[width=270pt]{figs/ps_snr.eps}
\caption{Probability of successful fault detection versus the receive SNR for $S=6$ faults, $L=3$ channel paths and $M=35$ diagnostic measurements. The proposed technique is robust against system noise compared to the difference based technique.} \label{fig3}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=270pt]{figs/pssnr2.eps}
\caption{Probability of successful fault detection versus the variance of the channel estimation error for $S=6$ faults, $L=3$ channel paths, SNR = 30dB and $M=45$ diagnostic measurements. The proposed technique is agnostic to path gain errors.} \label{fig4}
\end{figure}
In Fig. \ref{fig3} and Fig. \ref{fig4}, we study the performance of the proposed technique in the presence of multi-path, complete and partial blockages, and non-quantized AoAs. Specifically, each random path corresponds to a random AoA and complex gain. Based on knowledge of all AoAs, and using the Householder transformation, the measurement matrix $\mathbf{W}$ is designed to result in a beam pattern that is orthogonal to the receiver's AoAs (see (\ref{HH})-(\ref{yp32})). In Fig. \ref{fig3}, we plot the success probability versus the receive signal-to-noise ratio (SNR) in the presence of complete and partial blockages. Fig. \ref{fig3} shows that both the proposed and the difference technique require high SNR to successfully detect antenna faults, and that the proposed technique is superior in the sense that it is less sensitive to the system noise. Note that noise affects both the path gains and the AoAs. As the proposed technique is mainly sensitive to AoA errors, and not to path gain errors, it experiences less performance degradation than the difference technique.
To draw some insights into the effect of channel estimation errors on the performance of the proposed technique, we plot the probability of success versus the variance of the path gain and AoA estimation errors at the receiver in Fig. \ref{fig4}. The estimation errors are assumed to be Gaussian distributed with zero mean and variance as indicated in Fig. \ref{fig4}. Interestingly, and as evident from Fig. \ref{fig4}, the proposed technique permits successful antenna fault detection irrespective of the path gain error magnitude. This performance gain is attributed to the fact that the proposed technique does not require knowledge of the channel gain for fault detection; hence, channel gain estimation errors do not affect its performance. Nonetheless, the proposed technique is shown in Fig. \ref{fig4} to be sensitive to AoA mismatch. As the mismatch increases, the orthogonality between the designed beamforming weights and the true channel response diminishes, which increases the noise at the receiver. Fig. \ref{fig4} shows that the probability of success for the difference based technique deteriorates drastically with even a slight path gain or AoA mismatch. The reason for this is that any mismatch between the generated channel and the true channel destroys the sparsity property of the difference channel response $\mathbf{h}_\text{d}$, and hence sparse recovery is not possible in this case.
\section{Conclusion}
In this paper, we proposed a novel array diagnosis technique for mmWave systems with large antenna arrays. The proposed technique is able to identify the locations of antenna faults with only partial channel knowledge. For both the single-path and multipath cases, the proposed technique is shown to be less sensitive to channel estimation errors than the widely adopted difference based technique. This improvement comes at the expense of additional complexity in the design of the receive beamforming weights. Due to its robustness against channel estimation errors, the proposed technique can be deployed to perform real-time array diagnosis. Future work will focus on array diagnosis in the absence of any channel knowledge and on applying this technique in a practical set-up.
\section*{Acknowledgment}
This material is based upon work supported in part by the Sacramento State Research and Creative Activity Faculty Awards Program.
\section{Introduction}
The Large Hadron Collider \cite{lhc} is able to accelerate and
collide various beams. The machine has been successfully run in the
proton-proton, proton-lead and lead-lead modes.
The size and the evolution of the medium created in the heavy
ion interactions depend on the collision geometry. In heavy
ion interactions the impact parameter, $b$, is defined as the
distance between the directions of motion of the colliding
ions. Its value is related to the centrality classes, with
central collisions characterised by $b \approx 0$, peripheral
ones by $0 < b < 2\cdot R$ and ultra-peripheral ones by $b > 2\cdot R$,
where $R$ is the radius of each (identical) ion treated as a rigid sphere.
One should also observe that central and peripheral collisions are dominated by
strong interactions while the ultra-peripheral by electromagnetic exchanges.
\begin{figure}[htb]
\centering
\includegraphics[width=0.90\columnwidth]{figures/b-dpmjet.png}
\caption{Impact parameter probability distribution
calculated with DPMJET \cite{dpmjet} for Pb-Pb
collisions at $\sqrt{s}_{NN} = 5.04$~TeV.}
\label{fig:b-dpmjet}
\end{figure}
From geometric considerations one expects that the probability
of a certain $b$ value grows linearly with $b$ increasing from
0 to $2R$. For ultra-peripheral collisions with $b>2\cdot R$
this probability decreases rapidly with increasing $b$ value.
The distribution of the $b$ value as calculated in
the case of Pb-Pb collisions at the nucleon-nucleon centre of mass energy $\sqrt{s}_{NN} = 5.04$~TeV
using the DPMJET Monte Carlo~\cite{dpmjet} is shown in Figure \ref{fig:b-dpmjet} and clearly
confirms the predictions of these simple geometric considerations.
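The linear rise for $b < 2R$ follows directly from the area element $2\pi b\,\mathrm{d}b$ of a ring at impact parameter $b$. A quick sanity check of this geometric picture; a hard-sphere radius of roughly 7~fm for Pb is assumed, and this is not a DPMJET calculation:

```python
import numpy as np

R = 7.0  # illustrative Pb hard-sphere radius in fm (roughly 1.2 * 208**(1/3))
rng = np.random.default_rng(1)

# For purely geometric collisions, p(b) is proportional to b on [0, 2R];
# inverse-transform sampling of that density gives b = 2R * sqrt(u)
b = 2 * R * np.sqrt(rng.random(100_000))

# The mean of a linear density on [0, 2R] is (2/3) * 2R = 4R/3
mean_b = b.mean()
```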
However, one may consider the structure of a nucleus and
describe the heavy ion collision in terms of the number of nucleons
taking part in the interaction, $N_{part}$, or the number of
binary collisions, $N_{coll}$. Then, one expects that
peripheral processes, having large $b$-values, lead on average
to smaller values of $N_{part}$ or $N_{coll}$ than those observed
for the central ones characterised by small $b$-values.
Figure \ref{fig:cbNpart} presents the relation between $N_{part}$ and $b$
as predicted by the mentioned DPMJET Monte Carlo.
\begin{figure}[htb]
\centering
\includegraphics[width=0.90\columnwidth]{figures/bvsNpart.png}
\caption{Correlation of the impact parameter and
$N_{part}$ calculated with DPMJET \cite{dpmjet} for Pb-Pb
collisions at $\sqrt{s}_{NN} = 5.04$~TeV.}
\label{fig:cbNpart}
\end{figure}
This figure shows a very strong and anticipated correlation.
It also confirms that the majority of collisions are of (ultra)peripheral nature.
A qualitatively similar picture is seen (not shown here) for the $N_{coll}$ dependence
on $b$. Since neither $N_{part}$ nor $N_{coll}$ is directly measurable,
a way of estimating the impact parameter value using information
on the forward-moving, non-interacting spectator system is proposed.
In an AA experiment one introduces centrality classes, which are defined on the basis of the
multiplicity of the centrally produced hadrons or the energy measured in the forward direction.
In \cite{tarafdar} the authors discuss the use of the Cherenkov radiation detectors to estimate the centrality
in Au-Au collisions at RHIC. The measurement of the debris was also discussed in \cite{hera}
in the context of the potential $eA$ interactions at HERA.
Below, the use of the forward proton detectors to estimate the impact parameter in Pb-Pb collisions
at the LHC is analysed.
The paper is organised as follows. Section \ref{sec:fpdet}
introduces the considered experimental apparatus. Section
\ref{sec:methods} discusses foundations of the method proposed in
the present article. It is followed by the discussion of the
apparatus acceptance influence in Sec. \ref{sec:accept}. The
impact parameter dependence on the registered debris mass and
atomic numbers is described in Sec. \ref{sec:depen}. The
method of the impact parameter determination is described in
Sec. \ref{sec:b-estimation} which is followed by a Summary.
\FloatBarrier
\section{Forward proton detectors}
\label{sec:fpdet}
In the following the ATLAS Forward Proton (AFP) detectors
\cite{afptdr} are considered as the main
registering devices. These detectors are foreseen to
register protons emitted or scattered at very small
angles and thus escaping registration in the ATLAS
main detector. Such protons traverse the magnetic
lattice of the accelerator which serves as a magnetic
spectrometer.
The detector uses the Roman Pot technique (RP) which allows
for a precise positioning of the active parts in the
immediate vicinity of the beam. It is quite obvious
that the detector acceptance depends on the properties
of the machine (magnetic spectrometer) and those of
the detector, as well as on its position, quantified by the
distance between the detector active part and the beam,
which plays a crucial role. A typical distance is
about 2-3~mm, which covers 15 widths of the beam at the
AFP position and about 0.5~mm of dead space due to
the experimental infrastructure. The AFP detectors
take data during usual running of the LHC -- the
so-called collision optics\footnote{The machine optics is
typically quantified with the value of the $\beta^*$
function at the IP which is a measure of the distance,
along the beam orbit, after passing which the
beam doubles its transverse dimensions.}.
There are four Roman Pot stations which are positioned
symmetrically with respect to the ATLAS Interaction Point (IP)
at the distances of about 205~m and 217~m. The stations allow
for horizontal, i.e. in the LHC plane, motion of the pots.
Each station contains a silicon tracker (SiT) made of four
precise silicon pixel planes. The planes are tilted w.r.t. the
$x$-axis (horizontal direction) and staggered in the $y$-axis
(vertical) direction. The resulting spatial resolution of the
scattered proton track measurement is about 10~$\mu$m and
30~$\mu$m in the horizontal and vertical direction,
respectively. The detector area as seen by the scattered
protons is about 16~mm by 20~mm. The scattered proton energy
can be reconstructed with a precision better than 10~GeV
\cite{afprecresol}. The outer stations also contain
time-of-flight counters providing a timing resolution of the
order of 20-30~ps. These counters are not used in the
present analysis.
Important variables describing the scattered proton are: its
transverse momentum, $p_T$, and its relative energy loss,
$\xi = (E_{beam}-E')/E_{beam}$ where $E_{beam}$ is the beam
energy and $E'$ denotes the scattered proton energy.
In the case of the collision optics the scattered proton is registered by the AFP
detectors with high acceptance if its relative energy loss, $\xi$, is within the interval of
(0.02; 0.12) and the transverse momentum $p_T < 3$~GeV \cite{afpacceptance}.
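As a crude sketch, the quoted acceptance window can be modelled as a sharp cut; the function name and beam energy below are illustrative, and the real acceptance depends on the optics and on the detector position:

```python
def afp_accepts(E_beam, E_prime, p_T, xi_window=(0.02, 0.12), p_T_max=3.0):
    """Sharp-cut model of AFP proton acceptance; energies and p_T in GeV."""
    xi = (E_beam - E_prime) / E_beam  # relative energy loss of the proton
    return xi_window[0] < xi < xi_window[1] and p_T < p_T_max

E_beam = 6500.0  # illustrative per-beam proton energy in GeV
assert afp_accepts(E_beam, E_beam * (1 - 0.05), 0.5)      # xi = 0.05: inside the window
assert not afp_accepts(E_beam, E_beam * (1 - 0.20), 0.5)  # xi = 0.20: energy loss too large
assert not afp_accepts(E_beam, E_beam * (1 - 0.05), 5.0)  # p_T above the 3 GeV cut
```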
The considerations presented below required simulation of the particle/debris transport
through the magnetic lattice of the LHC. These calculations were performed using the
MAD-X \cite{madx} code and the machine-delivered optics files describing
the standard ion-ion option. For simplicity, particles were transported to the middle point
between the AFP stations, i.e. up to 211~m from the ATLAS IP.
\FloatBarrier
\section{Method of centrality determination}
\label{sec:methods}
Experimentally, the centrality classes can be defined using
the multiplicity of particles created in the mid-rapidity
region or the forward emitted energy -- see \cite{alice-b} and \cite{atlas-b} for a description of the methods.
These methods rely on Glauber type calculations of the
geometrical properties of an ion-ion collision and naturally
take into account also the details of the experimental apparatus.
In the case of the forward proton detectors the energy
measurement of the debris is, generally speaking,
excluded\footnote{One should keep in mind that the estimation
of the scattered proton momentum is feasible via unfolding
of its trajectory measurements.}. However, to zeroth order
all spectator nucleons (those not taking part in the interaction) have the
same energy, so the energies of the fragments are quantised. Therefore, the
multiplicity of the forward emitted nucleons could be in principle used
to estimate the forward
energy. This can be achieved by measuring the sum of the mass numbers,
$\sum A_{forward}$, or the sum of the atomic numbers, $\sum Z_{forward}$
of the forward moving spectator ensemble.
The former corresponds to a calorimetric measurement and the latter can be achieved
with a tracking detector since the energy deposit is proportional to $Z^2$ of the fragment.
Therefore, it is worth searching for a correlation
of $\sum A_{forward}$ or $\sum Z_{forward}$ and the impact parameter
or the number of binary collisions, $N_{coll}$, in a given event.
Such correlations are presented in
Figures \ref{fig:SumAtotvsb} -- \ref{fig:SumZtotvsNcoll} below.
\begin{figure}[htb]
\centering
\includegraphics[width=0.90\columnwidth]{figures/GenSumAtotvsb.png}
\caption{Correlation of the sum of the mass numbers of
the nuclear debris emitted in the forward direction,
$\sum A_{forward}$, and the impact parameter, $b$
calculated with DPMJET \cite{dpmjet} for Pb-Pb
collisions at $\sqrt{s}_{NN} = 5.04$~TeV.}
\label{fig:SumAtotvsb}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.90\columnwidth]{figures/GenSumZtotvsb.png}
\caption{Correlation of the sum of the atomic numbers of
the nuclear debris emitted in the forward direction,
$\sum Z_{forward}$, and the impact parameter, $b$,
calculated with DPMJET
\cite{dpmjet} for Pb-Pb collisions at $\sqrt{s}_{NN} = 5.04$~TeV.}
\label{fig:SumZtotvsb}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.90\columnwidth]{figures/GenSumAtotvsNcoll.png}
\caption{Correlation of the sum of the mass numbers of
nuclear debris emitted in the forward direction,
$\sum A_{forward}$, and $N_{coll}$
calculated with DPMJET
\cite{dpmjet} for Pb-Pb collisions at $\sqrt{s}_{NN} = 5.04$~TeV.}
\label{fig:SumAtotvsNcoll}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.90\columnwidth]{figures/GenSumZtotvsNcoll.png}
\caption{Correlation of the sum of charges of nuclear
debris in the forward direction, $\sum Z_{forward}$, and
$N_{coll}$ calculated with DPMJET \cite{dpmjet} for Pb-Pb
collisions at $\sqrt{s}_{NN} = 5.04$~TeV.}
\label{fig:SumZtotvsNcoll}
\end{figure}
A clear, strong and anticipated correlation pattern can be observed in
these figures. Moreover, the shape of the correlations is similar
if one considers the mass or atomic numbers. In the case of
$\sum Z_{forward}$ the correlations are a bit wider.
These correlations can be used to
determine/estimate the value of
the impact parameter of the actual collision.
\FloatBarrier
\section{Acceptance for nuclear fragments}
\label{sec:accept}
To achieve the goal sketched above, the first step of the
present analysis was devoted to the determination of the AFP
response to the nuclear debris originating from the
non-interacting, forward moving ensemble of nucleons.
The calculations, performed in a model independent way, followed
the lines of an earlier study \cite{acta}.
They considered all known nuclei. At first it was checked that the
ion life-times (proper times) allow
for their potential registration at the AFP positions.
Later, using the MAD-X~\cite{madx} description of the LHC, the
transport of these nuclei was simulated. Projections of
trajectories of the ions in $(x, z)$ and $(y, z)$
planes\footnote{$x$-axis points outside the ring, $y$-axis is
perpendicular to the ring plane and points upwards.} are
presented in Fig.~\ref{fig:HI_trajectories}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.90\columnwidth]{figures/HI_trajectories.png}
\caption{Projections of the trajectories of nuclear
fragments in $(x, z)$-plane (left panel) and
$(y, z)$-plane (right panel).
}
\label{fig:HI_trajectories}
\end{figure}
As can be observed, the LHC magnetic lattice filters out
spectator protons (green lines), deuterium (blue), tritium (yellow)
and beryllium (pink). The spectator neutrons can be registered
in the Zero Degree Calorimeters which are symmetrically positioned about
140~m away from the interaction point at the accelerator
beam-pipe bifurcation. These devices are routinely used during
the data collection periods related to heavy ion
interactions, delivering valuable information on forward emitted
neutrons.
In \cite{acta} a study of the geometric acceptance of the AFP
detectors was carried out. A large acceptance value, close to 100\%,
was observed for a broad range of ions.
The kinematic properties of the nuclear debris emerging from an ion-ion
collision are influenced by beam related effects as well
as by the Fermi motion of the nucleons belonging to the considered fragment.
Influence of the beam emittance\footnote{The beam emittance is
a measure of the beam particle spread in the
momentum-coordinate phase space for example $(x, p_x)$ or
$(y, p_y)$.} and that of the Fermi motion of the nucleons constituting
the debris is illustrated in
Figure \ref{fig:smearingSn} for
beryllium, boron and tin ions. The distribution of the horizontal
position of a selected ion is shown with the mentioned effects
included in the calculations.
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\columnwidth]{figures/SmearingEffectsBe.png}
\includegraphics[width=0.45\columnwidth]{figures/SmearingEffectsSn.png}
\caption{Effects of the beam emittance and Fermi motion on
the ion position at the AFP detector. Upper panel -- beryllium and boron, lower panel -- tin. From
\cite{acta}.}
\label{fig:smearingSn}
\end{figure}
As can be observed, the beam emittance plays a very small role,
leading to a minuscule broadening of the position distribution,
and was neglected in the following analysis. On the contrary, the
Fermi motion strongly affects horizontal positions of the ions at the middle point between
the two AFP stations. As it was anticipated the longitudinal component has a
much stronger impact. This effect is magnified by a large value of the
Lorentz factor of a nucleon. For lead-lead collisions at
$\sqrt{s}_{NN} = 5.02$~TeV its value is $\gamma \approx 2700$
leading to the potential smearing of the nucleon momentum up
to nearly 1400~GeV and hence to an enhanced smearing of the horizontal position
of a fragment. Moreover, as also expected, this influence
is much weaker for heavier ions due to the averaging over a
larger ensemble of chaotically moving nucleons.
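The quoted smearing scale can be verified at the order-of-magnitude level: a longitudinal momentum component $p_z$ in the ion rest frame maps to roughly $\gamma p_z$ in the laboratory frame. The Fermi momentum value below is an assumed typical scale, not a number from the text:

```python
# Order-of-magnitude check of the longitudinal Fermi-motion smearing quoted above
gamma = 2700     # Lorentz factor of a Pb beam nucleon at sqrt(s_NN) ~ 5 TeV
p_fermi = 0.5    # assumed typical nucleon Fermi momentum in GeV

# For p_z much smaller than the nucleon mass, the boost gives p_z' ~ gamma * p_z,
# so the lab-frame momentum spread is of order gamma * p_fermi
delta_p_lab = gamma * p_fermi  # 1350 GeV, consistent with "nearly 1400 GeV"
```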
The impact of the above discussed effects on the AFP ability
to register various nuclei is summarised in
Figure~\ref{fig:smearedAcceptance} showing the detector
acceptance as a function of $(Z, \Delta)$, where $\Delta$,
calculated as $\Delta = A-2\cdot Z$, is the net number of neutrons
(the surplus/deficit of neutrons with respect to the protons)
in a nucleus.
\begin{figure}[htb]
\centering
\includegraphics[width=0.90\columnwidth]{figures/smeared.png}
\caption{The AFP detector acceptance calculated
including the beam emittance and Fermi motion effects.
From \cite{acta}.}
\label{fig:smearedAcceptance}
\end{figure}
The AFP acceptance is smeared; however, the region of high
acceptance persists and is clearly visible for a broad range of
nuclei.
\FloatBarrier
\section{Dependence of centrality on the registered fragments}
\label{sec:depen}
As mentioned, the standard methods of centrality
determination rely on the energy emitted in the forward
direction. Since the AFP detectors cannot provide such
information (due to their construction) for fragments moving
within the accelerator beam pipe, another method was considered.
The following results are based on the DPMJET II \cite{dpmjet} simulated
Pb-Pb collisions at $\sqrt{s}_{NN} = 5.04$~TeV.
The transport of the fragments was calculated using
MAD-X \cite{madx}.
Figure \ref{fig:MeasSumA_fvsSumA_b} shows the correlation
between the sum of the mass numbers of debris registered by
the AFP detectors located on both sides of the IP, $\sum A_{backward}$
vs. $\sum A_{forward}$.
\begin{figure}[htb]
\centering
\includegraphics[width=0.85\columnwidth]{figures/MeasSumA_fvsSumA_b.png}
\caption{Correlation of the sum of mass numbers of
nuclear debris recorded by the forward,
$\sum A_{forward}$, and in the backward,
$\sum A_{backward}$, detectors calculated with DPMJET \cite{dpmjet}
for Pb-Pb collisions at $\sqrt{s}_{NN} = 5.04$~TeV.}
\label{fig:MeasSumA_fvsSumA_b}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\columnwidth]{figures/MeasSumZ_fvsSumZ_b.png}
\caption{Correlation of the sum of atomic numbers of the
nuclear debris recorded by the forward,
$\sum Z_{forward}$, and the backward,
$\sum Z_{backward}$, detectors calculated with DPMJET \cite{dpmjet}
for Pb-Pb collisions at $\sqrt{s}_{NN} = 5.04$~TeV.}
\label{fig:MeasSumZ_fvsSumZ_b}
\end{figure}
The plot shows that there are three classes of
events. One class contains events for which large
values of $\sum A$ are observed on both IP sides. Another class
consists of events with an asymmetric configuration: a quite
large value of $\sum A$ on one side and a very small one on the
opposite side. The third class includes events in
which both AFP detectors register very few and/or light fragments.
Since the measurement of $\sum A$ with the AFP detectors is rather unrealistic, it is worth
looking at the correlation of the sums of charges, $\sum Z$, of the fragments reaching them.
It is shown in Figure~\ref{fig:MeasSumZ_fvsSumZ_b}.
A picture similar to that presented in Fig. \ref{fig:MeasSumA_fvsSumA_b} is observed. One
can distinguish three classes of events: (a) detectors on both sides of the
IP register large charge; (b) quite substantial charge is seen on one side
of the IP while on the other the registered charge is small; (c) small
charge seen on both IP sides.
Recalling the result shown in Fig.~\ref{fig:SumZtotvsb},
one may construct the correlation of the sum of the charges of the
fragments produced into the beam pipe and reaching the AFP
detectors versus the actual collision impact parameter value.
The results of such calculations, including the acceptance of the AFP
detectors, are shown in Figure~\ref{fig:MeasSumZtotvsb}.
A strong, multi-component picture is observed.
However, interactions spanning a broad range of $b$ contribute to
the region of small $\sum Z_{forward}$ values. A similar picture is also observed in the
case of small $\sum A_{forward}$ values, where the $b$-distribution is even bimodal.
Therefore, in the following only fragments with $Z>2$ were accepted for further analysis.
\begin{figure}[htb]
\centering
\includegraphics[width=0.90\columnwidth]{figures/MeasSumZtotvsb.png}
\caption{Correlation of the sum of charges of the nuclear
debris recorded by the forward detectors,
$\sum Z_{forward}$, and the impact parameter calculated
with DPMJET \cite{dpmjet} for Pb-Pb collisions at $\sqrt{s}_{NN} = 5.04$~TeV.}
\label{fig:MeasSumZtotvsb}
\end{figure}
One may also consider the efficiency of registering the debris with the AFP detectors. In the calculations
the detectors were positioned 3~mm away from the beam centre. The efficiency is related to the
above-mentioned classes; however, it was calculated taking into account only the single- and double-tagged
events. The results of the calculations are presented in Fig. \ref{fig:effi} as a function of the actual impact parameter
value.
\begin{figure}[htb]
\centering
\includegraphics[width=0.90\columnwidth]{figures/Efficiency.png}
\caption{Efficiency as a function of the impact parameter.
Red crosses -- at least one fragment ($Z>2$) seen in the
AFP on one side of the IP, blue crosses -- fragments
($Z>2$) on both sides, black crosses -- sum. See text for
details.}
\label{fig:effi}
\end{figure}
A quite complicated pattern can be observed. For $b<6$~fm the total efficiency is below 20\%. Then it
increases, reaching a local maximum of about 30\% around 7~fm, and grows again up to $\sim$96\%
for $b\sim13.5$~fm. For higher $b$-values it decreases to about 70\% for $b$ close to 18~fm.
The single-tag efficiency shows a tri-modal structure with local maxima at $b$ of 7~fm, 12~fm and 17~fm.
Its value stays below 50\%. The double-tag efficiency is below 50\% for $b<12$~fm and $b>16$~fm and reaches a maximum
of about 80\% for $b\approx 14$~fm. It is worth stressing that the details of these curves depend strongly on the
apparatus configuration and geometry, its position with respect to the beam and along the beam line, and the accelerator
optics, and hence may differ largely between different realisations.
\section{Impact Parameter Estimation}
\label{sec:b-estimation}
Taking into account all of the above, a path towards the estimation of the impact parameter value
is sketched.
Figure~\ref{fig:prof_cZb} shows, for single-tag events, the dependence of the impact parameter on
the sum of charges registered on one side, requiring that the fragment charge is $Z>2$.
In fact, this drawing presents a profile of the $b$ -- single-side $\sum Z$ correlation plot.
The standard deviation, $\sigma_b$, versus
the single-side $\sum Z$ is depicted in Fig.~\ref{fig:sig_prof_cZb}.
These two dependencies for the double tag events are presented in Figures \ref{fig:prof_cZTb} and \ref{fig:sig_prof_cZTb},
respectively.
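The profile construction described above amounts to binning events in the single-side $\sum Z$ and taking the mean and the standard deviation of $b$ in each bin. A minimal sketch of such a calculation is given below; the toy data and all names are illustrative and do not reproduce the actual analysis.

```python
import numpy as np

def profile(sum_z, b, bins):
    """Mean of b and its standard deviation in bins of sum Z --
    the profile of the b vs. single-side sum Z correlation."""
    idx = np.digitize(sum_z, bins) - 1
    mean_b = np.full(len(bins) - 1, np.nan)
    sigma_b = np.full(len(bins) - 1, np.nan)
    for i in range(len(bins) - 1):
        sel = idx == i
        if sel.any():
            mean_b[i] = b[sel].mean()
            sigma_b[i] = b[sel].std()
    return mean_b, sigma_b

# Toy data: b anti-correlated with sum Z, plus Gaussian smearing
rng = np.random.default_rng(1)
sum_z = rng.uniform(0.0, 80.0, 50_000)
b = 16.0 - 0.1 * sum_z + rng.normal(0.0, 1.0, sum_z.size)
mean_b, sigma_b = profile(sum_z, b, np.linspace(0.0, 80.0, 9))
```

In this toy setting the profile recovers the decreasing trend of $b$ with $\sum Z$ and a per-bin spread close to the injected 1~fm smearing.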
\begin{figure}[htb]
\centering
\includegraphics[width=0.90\columnwidth]{figures/hprof_cZb.png}
\caption{Impact parameter b vs. single side $\sum Z$ - profile. Calculated with DPMJET
\cite{dpmjet} for Pb-Pb collisions at $\sqrt{s}_{NN} = 5.04$~TeV.}
\label{fig:prof_cZb}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.90\columnwidth]{figures/hsig_prof_cZb.png}
\caption{Standard deviation $\sigma_b$ vs. single side $\sum Z$ - profile. Calculated with
DPMJET \cite{dpmjet} for Pb-Pb collisions at $\sqrt{s}_{NN} = 5.04$~TeV.}
\label{fig:sig_prof_cZb}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.90\columnwidth]{figures/hprof_cZTb.png}
\caption{Impact parameter b vs. both sides $\sum Z$ - profile. Calculated with DPMJET
\cite{dpmjet} for Pb-Pb collisions at $\sqrt{s}_{NN} = 5.04$~TeV.}
\label{fig:prof_cZTb}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.90\columnwidth]{figures/hsig_prof_cZTb.png}
\caption{Standard deviation $\sigma_b$ vs. both sides $\sum Z$ - profile. Calculated with
DPMJET \cite{dpmjet} for Pb-Pb collisions at $\sqrt{s}_{NN} = 5.04$~TeV.}
\label{fig:sig_prof_cZTb}
\end{figure}
The sum of the charges of debris with $Z>2$ correlates well with the impact parameter value for both single- and double-tag events.
For the latter, the both-sides $\sum Z$ shows fluctuations at low values.
In the single-tag case, the accuracy of the impact parameter estimation improves with growing $\sum Z$ and is about 2~fm for
$\sum Z <20$ and 1~fm for $\sum Z > 40$. For double-tag events, the $b$ estimation precision is about 1~fm if the sum of the
charges of fragments measured on both sides is larger than 80 ($\sum Z > 80$), and for smaller $\sum Z$ it grows to about 2~fm.
This confirms the possibility of the impact parameter estimation on an event-by-event basis with the help of
the forward proton detectors as realised by the AFP set-up.
It was checked that a simple simulation of the fragment charge measurement (Gaussian width of 2) and of the
fragment trajectory position (spatial resolutions) does not alter the obtained results considerably.
It leads to a worsening of the $b$ value estimation by about 0.5~fm at low values of $\sum Z$ (below 20)
for single-tag events, and also by about 0.5~fm for $\sum Z < 50$ for double-tag ones.
In that respect one should note that even such ``inaccurate'' measurement of the fragment charge would imply
an upgraded readout electronics of the AFP pixel detectors.
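The quoted smearing check can be illustrated by adding Gaussian noise of width 2 to each fragment charge before summing. This is a toy sketch with hypothetical fragment charges, not the simulation actually used.

```python
import numpy as np

rng = np.random.default_rng(2)

def smear_charges(charges, width=2.0):
    """Emulate a finite charge-measurement resolution by adding
    Gaussian noise of the given width to every fragment charge."""
    charges = np.asarray(charges, dtype=float)
    return charges + rng.normal(0.0, width, size=charges.size)

# A hypothetical event with three heavy fragments (Z > 2)
true_z = np.array([26.0, 14.0, 8.0])
measured_sum = smear_charges(true_z).sum()
# The smeared sum scatters around the true sum of 48 with a
# standard deviation of width * sqrt(3), i.e. about 3.5 charge units.
```

Since the per-event spread of the smeared sum is small compared to the dynamic range of $\sum Z$, it is plausible that such a smearing degrades the $b$ estimation only mildly, as stated above.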
\FloatBarrier
\section{Summary and Conclusions}
A possibility of an application of the ATLAS Forward Proton detectors in heavy ion collisions for
the impact parameter determination on an event-by-event basis was discussed. The discussion is based on
a simulation of lead--lead collisions at $\sqrt{s}_{NN} = 5.04$~TeV at the LHC. The event sample was
obtained with the DPMJET Monte Carlo. Calculations demonstrated that the AFP detectors have a large acceptance
for a wide range of known ions. It was found that the Fermi motion of the nucleons belonging to a fragment very strongly impacts its
lateral position within the AFP, while the beam emittance plays a negligible role.
In simulations the AFP detectors were used to tag the forward emitted high-Z debris ($Z>2$) on one or both sides of the
ATLAS Interaction Point.
The performed analysis suggests that the charge measurement of the debris delivers 1-2~fm precision of the b-value estimation.
Two facts have to be stressed.
First, the above discussed results depend on the accelerator optics, i.e. the magnetic lattice properties, and hence the calculations
have to be repeated for each case separately. Second,
the nucleon Lorentz $\gamma$-value, following from $\sqrt{s}_{NN}$, magnifies the longitudinal Fermi momentum smearing.
This is reflected in the smearing of the debris position at the detector and of its energy (longitudinal momentum).
A small $\gamma$-value, as for example in the Au--Au collision case at RHIC, would lead to a relatively narrow,
``quantised'' energy (momentum)
distribution of the fragments and hence to their ``quantised'' range in the accelerator, which in turn could be used to determine
positions of additional RP detectors delivering both the charge and energy of the registered fragment.
\section*{Acknowledgements}
This work was supported in part by the Polish Ministry of Science
and Higher Education grant no. DIR/WK/2016/13 and the Polish
National Science Centre grant no. 2015/19/B/ST2/00989.
\printbibliography
\end{document}
\section{Introduction}
The Large Hadron Collider \cite{lhc} is able to accelerate and collide various beams.
The machine has successfully run in the proton--proton, proton--lead, lead--lead modes.
In addition, several test runs involving xenon ions have been performed.
In an interaction of two ions at the LHC energies, the size and the evolution of the created medium depend on the collision geometry.
In the present paper, the impact parameter, $b$, will be used for describing this geometry.
Events characterised by $b \approx 0$ are called central collisions while those with $b$ close to $2R$ are called the peripheral ones.
\begin{figure}[htb]
\centering
\includegraphics[width=0.49\columnwidth]{figures/b-dpmjet.png}
\caption{Impact parameter probability distribution.}
\label{fig:b-dpmjet}
\end{figure}
From simple geometric considerations, one expects that the probability density for having a collision with a certain $b$ value grows linearly with $b$ between $b=0$ and $b=2R$ and for $b>2R$ immediately drops to zero (for rigid spheres).
The distribution of the $b$ value calculated from 100000 lead--lead collisions at the nucleon--nucleon centre-of-mass energy $\sqrt{s}_{NN} = 5.04$~TeV produced using DPMJET-III Monte Carlo generator~\cite{dpmjet} is shown in Figure \ref{fig:b-dpmjet}.
For $b<2R$\footnote{$R \approx 7$~fm for lead ions.}, the DPMJET distribution follows simple expectations.
For $b>2R$, one observes a steep but smooth drop due to a more realistic treatment of the effects related to the nucleus edge.
This is an example of a more general phenomenon originating from a complex structure of the colliding particles -- the impact parameter is not sufficient to describe the full geometry of the interaction.
An alternative description of the geometry can be obtained considering the structure of a nucleus and describing the heavy ion collision in terms of the number of nucleons taking part in the interaction, \ensuremath{N_\text{part}\xspace}, or the number of binary collisions, $N_{coll}$.
Then, one expects that peripheral processes lead on average to smaller values of \ensuremath{N_\text{part}\xspace} or $N_{coll}$ than those observed for the central ones.
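The rigid-sphere expectation quoted above, a probability density growing linearly with $b$ up to $b=2R$, can be checked with a few lines of inverse-transform sampling. The code is an illustration only; $R \approx 7$~fm is the approximate lead-ion radius quoted in the text.

```python
import numpy as np

# Rigid-sphere picture: dP/db = 2*b/(2R)^2 for 0 <= b <= 2R, zero beyond.
# Inverse-transform sampling: with u uniform on (0, 1), b = 2R * sqrt(u).
R = 7.0                     # approximate lead-ion radius in fm
b_max = 2.0 * R

rng = np.random.default_rng(0)
b = b_max * np.sqrt(rng.random(100_000))

# The mean of a linearly rising density on [0, b_max] is (2/3) * b_max,
# i.e. about 9.3 fm for lead--lead collisions.
print(round(b.mean(), 1))
```

The sampled distribution reproduces the linear rise and the sharp cut-off at $2R$; the smooth edge seen in the DPMJET result is precisely what this rigid-sphere toy model lacks.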
\begin{figure}[htb]
\centering
\includegraphics[width=0.49\columnwidth]{figures/bvsNpart.png}
\caption{Correlation of the impact parameter and \ensuremath{N_\text{part}\xspace}.}
\label{fig:cbNpart}
\end{figure}
Figure \ref{fig:cbNpart} presents the relation between \ensuremath{N_\text{part}\xspace} and $b$ as predicted by DPMJET-III Monte Carlo.
It shows a very strong and anticipated correlation.
The non-negligible width of the correlation indicates the importance of fluctuations of the nucleus shape in the initial state of the collision.
It also confirms that the majority of collisions are of peripheral nature.
A qualitatively similar picture can be seen (not shown here) in the case of the $N_{coll}$ dependence on $b$.
Since neither $b$ nor \ensuremath{N_\text{part}\xspace} nor $N_{coll}$ are directly measurable, the centrality of an event is usually experimentally defined on the basis of some easily measured observable sensitive to the geometry of the collision.
For example, one often uses the multiplicity of centrally produced hadrons or the energy measured in forward calorimeters -- see \cite{alice-b} and \cite{atlas-b} for description of the methods.
These methods rely on the Glauber model and are sensitive to the details of the experimental apparatus.
In \cite{tarafdar}, the authors considered an alternative method of estimating \ensuremath{N_\text{part}\xspace} in gold--gold collisions at RHIC.
They proposed a new system of Cherenkov detectors capable of measuring the majority of spectator fragments scattered at small angles.
The proposed method is based on an observation that for a full acceptance detector system, the measurement relies only on the energy conservation and therefore is model independent.
The measurement of the debris was also discussed in \cite{hera} in the context of the potential $eA$ interactions at HERA.
In the present paper, a different approach is considered.
It is investigated whether detectors that do not provide a full coverage, and hence can register the spectator fragments only partially, can deliver any valuable information about the collision geometry.
The problem is studied for the forward proton detectors already operating at the LHC.
The paper is organised as follows. Section \ref{sec:fpdet} introduces the considered experimental apparatus.
This is followed by the discussion of the apparatus acceptance in Sec. \ref{sec:accept}.
The impact parameter dependence on the registered debris mass and atomic numbers is described in Sec.~\ref{sec:depen}.
A study of the asymmetry of the geometry is discussed in Sec. \ref{sec:asymmetry} which is followed by a Summary.
\FloatBarrier
\section{Forward proton detectors}
\label{sec:fpdet}
In the following, the ATLAS Forward Proton (AFP) detectors \cite{afptdr, Aad:2020glb} are considered as the registering devices.
These detectors measure protons emitted or scattered at very small angles and thus escaping registration in the ATLAS main detector.
Such protons traverse the magnetic lattice of the accelerator, which serves as a magnetic spectrometer.
The detector uses the Roman pots technique, which allows for precise positioning of the active parts in the immediate vicinity of the beam.
It is quite obvious that the detector acceptance depends on the properties of the LHC machine.
In addition, the position of the detector, quantified by the distance between the detector active part and the beam, plays a crucial role.
A typical distance is about 2 -- 3~mm, which covers around 15 beam widths at the detector position plus about 0.5~mm of dead space due to the experimental infrastructure.
The AFP detectors take data during standard operation of the LHC.
The four Roman pot stations are positioned symmetrically with respect to the ATLAS interaction point at the distances of about $|z| = 205$~m and $|z|=217$~m.
The stations allow horizontal, i.e. in the accelerator plane, motion of the Roman pots.
Each station contains a silicon tracker made of four precise 3D pixel planes.
The planes are tilted w.r.t. the $x$-axis (horizontal direction) and staggered in the $y$-axis (vertical direction).
The resulting spatial resolution of the scattered proton track measurement is about 10~$\mu$m and 30~$\mu$m in the horizontal and vertical direction, respectively.
The detector area as seen by the scattered protons is about 16~mm by 20~mm.
The scattered proton energy is obtained indirectly using unfolding of the trajectory measurement leading to the reconstruction resolution better than 10~GeV \cite{afprecresol}.
The stations at $|z|=217$~m also contain time-of-flight counters providing a resolution of the order of 20 -- 30~ps.
These counters can be used for rejecting the combinatorial background originating from the large pile-up present in LHC proton--proton runs, see \cite{Staszewski:2019yek} for more details.
Since such backgrounds are not relevant to the present study, the use of time-of-flight detectors is not considered here.
The important variables describing a scattered proton are its transverse momentum, $p_T$, relative energy loss, $\xi = (E_{beam}-E')/E_{beam}$ where $E_{beam}$ is the beam energy and $E'$ denotes the scattered proton energy and the azimuthal angle $\varphi$.
The scattered proton is registered by the AFP detectors with high acceptance if its relative energy loss is within the interval of $(0.02, 0.12)$ and the transverse momentum $p_T < 3$~GeV \cite{afpacceptance}.
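These acceptance cuts can be expressed as a small helper function. This is an illustrative sketch with sharp cuts and a hypothetical beam energy value; the real acceptance is not a step function.

```python
def xi(e_prime, e_beam):
    """Relative energy loss of the scattered proton."""
    return (e_beam - e_prime) / e_beam

def afp_accepts(e_prime, p_t, e_beam):
    """Idealised AFP acceptance window: high acceptance is quoted for
    0.02 < xi < 0.12 and p_T < 3 GeV (sharp cuts are a simplification)."""
    return 0.02 < xi(e_prime, e_beam) < 0.12 and p_t < 3.0

E_BEAM = 6500.0  # illustrative proton beam energy in GeV
print(afp_accepts(0.95 * E_BEAM, 1.0, E_BEAM))   # True  (xi = 0.05)
print(afp_accepts(0.999 * E_BEAM, 1.0, E_BEAM))  # False (xi = 0.001)
```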
The considerations presented below required simulation of the particle/debris transport through the magnetic lattice of the LHC.
These calculations were performed using the MaD-X \cite{madx} code and the optics files describing the accelerator settings used in lead--lead runs.
For simplicity, the particles were transported to the middle point between the AFP stations i.e. up to 211~m from the ATLAS interaction point.
More details about the simulation can be found in \cite{acta}.
\FloatBarrier
\section{Acceptance for nuclear fragments}
\label{sec:accept}
The discussed method of centrality estimation is based on a simple fact that the collision geometry affects both the number of participants and the number of spectators.
Therefore, the measurement of the spectators should provide information about this geometry.
In the case of the AFP detectors, a direct measurement of neither the total multiplicity of nucleons nor the total energy of the produced fragments is possible%
\footnote{%
The measurement of the total energy would be of interest since all spectator nucleons have, to some approximation, energy equal to their energy before the interaction.
The approach to energy reconstruction used in the proton--proton case cannot be applied to nuclear debris because of an additional degree of freedom -- the unknown charge of the given ion.
}.
However, since the silicon detectors are sensitive to the amount of ionisation caused by passing particles, the measurement of the spectator electric charge could be possible with appropriately tuned sensors.
\begin{figure}[htb]
\centering
\includegraphics[width=0.49\textwidth]{figures/GenSumAtotvsb.png}
\includegraphics[width=0.49\textwidth]{figures/GenSumZtotvsb.png}
\caption{Left: Correlation of the sum of the mass numbers of
the nuclear debris emitted in the forward direction,
\SA, and the impact parameter.
Right: Correlation of the sum of the atomic numbers of
the nuclear debris emitted in the forward direction,
\SZ, and the impact parameter.}
\label{fig:SumAtotvsb}
\label{fig:SumZtotvsb}
\end{figure}
In the present paper, the following scenarios will be considered:
\begin{itemize}
\item an ideal case -- the measurement of nucleon multiplicity, \SA,
\item a more realistic case -- the measurement of the charge multiplicity,
\SZ, of the forward moving spectators.
\end{itemize}
It is educative to compare correlations between the above observables and the impact parameter, see Figure \ref{fig:SumAtotvsb}.
While there are some differences between these two cases, the correlation patterns are actually quite similar.
The one involving \SZ is a little wider but this effect is not large.
Therefore, one may expect that the measurement of \SZ should provide information about the impact parameter comparable to that delivered by the measurement of \SA.
The next step of the present analysis was devoted to the determination of the AFP response to the nuclear debris originating from the non-interacting, forward moving system of spectators.
The calculations, performed in a model independent way, followed the lines of an earlier study \cite{acta} and considered all known nuclei.
First, it was checked that the lifetimes of the produced fragments allow for their potential registration at the AFP positions before decaying.
Later, using the Mad-X~\cite{madx} description of the LHC, the transport of these nuclei was simulated.
It was assumed that the detectors operate at a distance of 3~mm from the beam.
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{figures/HI_trajectories.png}
\caption{Projections of the trajectories of nuclear
fragments in $(x, z)$-plane (left panel) and
$(y, z)$-plane (right panel).
}
\label{fig:HI_trajectories}
\end{figure}
The projections of the trajectories of the ions in $(x, z)$ and $(y, z)$ planes are presented in Fig.~\ref{fig:HI_trajectories}.
The accelerator magnetic lattice filters out spectator protons (marked with the green lines), deuterium (blue), tritium (yellow) and helium (red).
The spectator neutrons, not shown in the plot, are neglected in the present analysis.
\begin{figure}[htb]
\centering
\includegraphics[width=0.49\columnwidth]{figures/SmearingEffectsBe.png}
\includegraphics[width=0.49\columnwidth]{figures/SmearingEffectsSn.png}
\caption{Effects of the beam emittance and Fermi motion on
the ion position at the AFP detector. Upper panel -- beryllium and boron, lower panel -- tin. From
\cite{acta}.}
\label{fig:smearingSn}
\end{figure}
The momentum of a nuclear fragment emerging from an ion--ion collision is predominantly driven by its mass number.
However, it is influenced by the Fermi motion of the nucleons belonging to the considered fragment and effects related to the finite emittance%
\footnote{The beam emittance is a measure of the spread of the beam particles in the position--momentum phase space: $(x, p_x)$ or $(y, p_y)$.}
of the colliding beams.
Without these effects, nuclear fragments of a given type would always be observed in the detector at the same position.
The Fermi motion and the finite emittance introduce a spread of this position, which is illustrated in Figure \ref{fig:smearingSn} for beryllium, boron and tin ions.
The beam emittance plays a very small role leading to a minuscule broadening of the position distribution and was therefore neglected in the following analysis.
On the contrary, the Fermi motion strongly affects the ion horizontal position.
Its longitudinal component has a much stronger impact because its influence is magnified by a large value of the Lorentz factor of the colliding beams.
For collisions at $\sqrt{s}_{NN} = 5.02$~TeV, $\gamma \approx 2700$ leading to the potential smearing of the nucleon momentum up to nearly 1400~GeV and hence to the enhanced smearing of the horizontal position of the fragment.
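This magnification follows directly from the Lorentz boost of the longitudinal Fermi momentum: a nucleon carrying longitudinal momentum $p_F$ in the ion rest frame acquires in the laboratory frame a momentum shift of
\[
\Delta p_z \;\approx\; \gamma \, p_F \;\approx\; 2700 \times 0.5~\mathrm{GeV} \;\approx\; 1350~\mathrm{GeV},
\]
consistent with the quoted smearing of nearly 1400~GeV (the value $p_F \approx 0.5$~GeV is an illustrative assumption for a typical Fermi momentum).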
In the present study, this influence was found to be much weaker for heavier ions, as an effect of the assumed non-correlated Fermi motion of the different nucleons.
The impact of the above discussed effects on the AFP capability to register various nuclei is summarised in Figure~\ref{fig:smearedAcceptance} showing the detector acceptance as a function of $(Z, \Delta)$, where $\Delta$, calculated as $\Delta = A-2\cdot Z$, is the net number of neutrons (the surplus/deficit of neutrons with respect to the protons) in a nucleus.
The AFP acceptance is smeared but the region of high acceptance value is clearly visible for a broad range of nuclei.
\begin{figure}[htb]
\centering
\includegraphics[width=0.49\textwidth,page=2]{figures/smeared.pdf}
\includegraphics[width=0.49\textwidth,page=5]{figures/smeared.pdf}
\caption{
Left: half-life times of known nuclides.
Right: the AFP detector acceptance calculated including the beam emittance and Fermi motion effects.
From \cite{acta}.}
\label{fig:smearedAcceptance}
\end{figure}
\FloatBarrier
\section{Dependence of centrality on the registered fragments}
\label{sec:depen}
The considered forward proton detectors cannot observe all possibly created nuclear fragments.
This raises a question of how much the limited acceptance affects the possibility to estimate the collision geometry.
This type of analysis cannot be carried out in a model-independent way and the following results are again based on the DPMJET-III simulation with the transport of the fragments calculated using the Mad-X.
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\columnwidth]{figures/MeasSumZ_fvsSumZ_b.png}
\caption{Correlation of the total charge of the nuclear debris recorded by the forward proton detectors on the two sides.}
\label{fig:MeasSumZ_fvsSumZ_b}
\end{figure}
Figure \ref{fig:MeasSumZ_fvsSumZ_b} shows the correlation between the total charge of debris registered by the detectors located at both beams: \SZ[p_z>0] vs. \SZ[p_z < 0].
One can distinguish three classes of events with different signatures.
The first class contains events for which large values of \SZ are observed on both sides.
The second one contains events with an asymmetric configuration:
a large value of \SZ on one side and a very small \SZ on the opposite side.
The third class includes events in which both detectors register small total charge.
A correlation between the signals on both sides is observed for the first class of events.
The \SZ values separating the classes can be estimated looking at the projection of Fig. \ref{fig:MeasSumZ_fvsSumZ_b} on one of the axes.
A minimum around $\SZ = 10$ is observed; this value is used in the following for defining the no-tag, single-tag and double-tag events.
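With such a threshold, the assignment of an event signature from the two side sums can be sketched directly; the function and variable names are illustrative.

```python
def classify(sum_z_pos, sum_z_neg, threshold=10.0):
    """Assign the no-tag / single-tag / double-tag signature from the
    total charge seen on the two sides of the interaction point."""
    tags = (sum_z_pos > threshold) + (sum_z_neg > threshold)
    return ("no-tag", "single-tag", "double-tag")[tags]

print(classify(3.0, 4.0))    # no-tag
print(classify(35.0, 2.0))   # single-tag
print(classify(40.0, 55.0))  # double-tag
```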
Naturally, the signature of an event depends on the collision geometry, see Figure \ref{fig:effi} (left).
Most central events have no tag in the forward detectors. Single-tag events are most likely for $b\approx 12$~fm and $b>16$~fm.
For $b$ around 14 fm, the double-tag signature becomes the most likely class.
One may also ask a question if the signature alone provides information about the event centrality.
This can be deduced from Figure \ref{fig:effi} (right), showing the distribution of $b$ for the three classes.
One can observe that no-tag events are predominantly central ones, double-tag events are rather peripheral.
Single-tag events are on average in between no- and double-tag ones, but their distribution is quite wide and has a considerable overlap with the other classes.
\begin{figure}[htb]
\centering
\includegraphics[width=.49\textwidth]{figures/Efficiency.png}
\includegraphics[width=.49\textwidth]{figures/Efficiency1.png}
\caption{
Left: Probability of observing different classes as a function of impact parameter.
Right: distribution of impact parameter for events with different classes.
}
\label{fig:effi}
\end{figure}
In order to check the possible sensitivity of the registered signal to the collision geometry, a correlation between the total charge and the actual impact parameter of the collision was investigated.
Figure~\ref{fig:MeasSumZtotvsb} shows this correlation for the single- and double-tag classes (note different vertical ranges in both plots).
Comparing this result to Figure \ref{fig:SumZtotvsb} one can immediately observe that limiting the acceptance results in an increased width of the correlation.
\begin{figure}[htb]
\centering
\includegraphics[width=0.49\textwidth]{figures/MeasSumZtotvsb_class1.png}
\includegraphics[width=0.49\textwidth]{figures/MeasSumZtotvsb_class2.png}
\caption{Correlation of the sum of charges of the nuclear debris recorded by the forward detectors, \SZ, and the impact parameter.}
\label{fig:MeasSumZtotvsb}
\end{figure}
Summarising, it is clear that the considered detectors, while able to register only a part of the produced spectator fragments and measuring only their charge, can provide useful information about the geometry of the collision.
The next step is to understand the possible performance of the method, namely the resolution with which the impact parameter can be reconstructed.
The resolution of the impact parameter reconstruction is driven mainly by the randomness in the formation of the nuclear fragments that eventually reach the detectors.
This effect is responsible for the non-zero width of the correlation presented in Figure \ref{fig:SumAtotvsb}.
It is further enhanced by the limited acceptance of the detectors, which can be seen in Figure \ref{fig:MeasSumZtotvsb}.
Extracting the width along the $b$ direction and interpreting it as a possible measurement resolution at a given $b$ allows a comparison of different methods and different assumptions.
\begin{figure}[htb]
\centering
\includegraphics[width=0.49\columnwidth]{figures/Resol.png}
\caption{Resolution of impact parameter reconstruction based on spectator fragments for single- and double-tag events as well as for an ideal situation assuming full-acceptance forward detectors.}
\label{fig:sigma}
\end{figure}
Figure \ref{fig:sigma} presents the resolution of the impact parameter reconstruction obtained for single- and double-tag events.
For central events with $b < 8$~fm, the results for the double-tag class, as well as for the ideal case of full-acceptance detectors, are heavily influenced by the limited statistics of the generated events.
In all other cases, a resolution between 1 and 2~fm is observed.
These values are of the order of 10\% of the range in which $b$ varies in lead--lead collisions.
For events with $b<12$~fm, single-tag events offer a better resolution, while for $b>12$~fm double-tag events lead to a more precise estimation.
For $b>14$~fm, the resolution for double-tag events is close to the ideal case.
It is worth pointing out that the widths of the correlations in Figure \ref{fig:MeasSumZtotvsb} along the \SZ axes are of the order of $\sigma(\SZ) \approx 10$.
This value can be interpreted as the maximum magnitude of resolution in the \SZ that would not dominate the $b$ resolution.
\FloatBarrier
\section{Collision asymmetry}
\label{sec:asymmetry}
All previous considerations assumed that there is one parameter describing the geometry of the heavy-ion collision.
This assumption is true for colliding symmetric objects with smooth internal structure.
In the case of realistic interactions, important asymmetries can be present already in the initial state.
They can be quantified, for example, by the difference in the number of participating nucleons from each of the ions.
Figure \ref{fig:asym_all} shows how the distribution of the asymmetry depends on the impact parameter.
The biggest absolute differences, $\Delta \ensuremath{N_\text{part}\xspace}$, can reach even a few tens of nucleons and occur for medium centralities with $b$ around 8~fm.
However, when one considers relative asymmetries, i.e. $\Delta \ensuremath{N_\text{part}\xspace}/\ensuremath{N_\text{part}\xspace}$, the highest values are observed for the most peripheral collisions, where the total number of participants is the smallest.
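The two asymmetry measures used above can be written explicitly; the function name and the participant counts in the example are hypothetical.

```python
def npart_asymmetry(npart_a, npart_b):
    """Absolute and relative asymmetry in the number of participants
    contributed by the two colliding ions."""
    delta = npart_a - npart_b
    total = npart_a + npart_b
    rel = (delta / total) if total else 0.0
    return delta, rel

# A hypothetical mid-central event: 60 vs. 48 participants
delta, rel = npart_asymmetry(60, 48)
print(delta, round(rel, 3))  # 12 0.111
```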
\begin{figure}[htb]
\centering
\includegraphics[width=0.49\textwidth,page=2]{figures/asym.pdf}
\includegraphics[width=0.49\textwidth,page=1]{figures/asym.pdf}
\caption{Distributions of absolute (left panel) and relative (right panel) collision asymmetry as a function of the impact parameter.}
\label{fig:asym_all}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.49\textwidth,page=6]{figures/asym.pdf}
\includegraphics[width=0.49\textwidth,page=8]{figures/asym.pdf}
\caption{Left: distribution of collision asymmetry as a function of the impact parameter for single-tag events with the tag on the $z<0$ side.
Right: average collision asymmetry as a function of the impact parameter for single-tag events with the tag on each side.}
\label{fig:asym_single}
\end{figure}
Since forward proton detectors can measure fragments originating from both ions, they could provide information not only about the centrality but also about the asymmetry of the collision.
The way this information can be extracted will depend on the event signature.
For single-tag events, the information about the asymmetry can be extracted from the information on which side the event was tagged.
This can be observed in Figure \ref{fig:asym_single}, where the asymmetry distribution as a function of $b$ is shown for events tagged on one side.
It is interesting to observe that the shape of the asymmetry distribution changes with the impact parameter.
In fact, even a change of the sign of the mean value of the asymmetry is observed.
This is a non-trivial consequence of the way the spectator fragments are formed and of the limited acceptance of the detectors.
\begin{figure}[htb]
\centering
\includegraphics[width=0.49\textwidth,page=9]{figures/asym.pdf}
\includegraphics[width=0.49\textwidth,page=10]{figures/asym.pdf}
\caption{Correlation between the true asymmetry and the asymmetry measured with \SA (left) or \SZ (right).}
\label{fig:asym_double}
\end{figure}
For double-tag events, one may apply a more direct approach and calculate the experimental asymmetry using the measured \SA values on both sides.
Figure \ref{fig:asym_double} presents the correlation between the true and the measured asymmetry of the collision.
While the distribution is rather wide, this method does provide some sensitivity%
\footnote{The correlation coefficient between the true and the measured asymmetry was found to be close to $0.5$.}
to the true asymmetry.
\FloatBarrier
\section{Summary and Conclusions}
A possibility of the application of the forward proton detectors in heavy-ion collisions at the LHC was investigated.
The impact parameter determination on the event-by-event basis was studied using DPMJET-III generated Monte Carlo events and Mad-X calculations of particle trajectories in the accelerator magnetic fields.
The calculations demonstrated that the existing detectors (ATLAS Forward Proton detectors were used as an example) have a significant acceptance to a wide range of known nuclei.
It was found that the Fermi motion of the nucleons belonging to a fragment very strongly impacts the position measured in the forward detectors while the beam emittance plays a negligible role.
In simulations, the detectors were used to tag forward emitted debris on one or both sides of the collision.
The performed analysis suggests that the charge measurement of the debris delivers 1 -- 2~fm precision of the impact parameter reconstruction.
It was also shown that the method can be used for a rough determination of the collision asymmetry.
Two facts have to be stressed.
First, the above discussed results depend on the properties of the accelerator magnetic lattice, and hence the calculations have to be repeated for each case separately.
Second, the present results rely on the physics model of the spectator system fragmentation used.
Therefore, the present work can be extended towards the understanding of the uncertainty of this modelling and how this translates into a possible uncertainty on the collision geometry reconstruction.
\section*{Acknowledgements}
This work was supported in part by Polish Ministry of Science and Higher Education grant no. DIR/WK/2016/13 and Polish National Science Centre grant no. 2015/19/B/ST2/00989.
\printbibliography
\end{document}
\section{Introduction}
Deep reinforcement learning (DRL) integrates deep neural networks with reinforcement learning principles, e.g., Q-learning and policy gradients, to create more efficient agents.
Recent studies have shown great success of DRL in numerous challenging real-world problems, e.g., video games and robotic control \cite{mnih2015human}.
Although promising, existing DRL algorithms still suffer from several challenges including sample complexity, instability, and temporal credit assignment problems \cite{sutton1985temporal,henderson2018deep}.
One popular research line of DRL is policy-gradient based on-policy methods attempting to evaluate or improve the same policy that is used to make decisions \cite{sutton2011reinforcement}, e.g., trust region policy optimization (TRPO) \cite{schulman2015trust} and proximal policy optimization (PPO) \cite{schulman2017proximal}.
Recent works \cite{noauthor_undated-bh,Liu2019-lj} have proved that policy-gradient based methods can converge to a stationary point under some conditions, which theoretically guarantees their stability.
However, they are extremely sample-expensive since they require new samples to be collected in each gradient step \cite{sutton2011reinforcement}.
On the contrary, Q-learning based off-policy methods, which form another research line that evaluates or improves a policy different from the one used to generate the behavior, can improve sample efficiency by reusing past experiences \cite{sutton2011reinforcement}.
Existing off-policy methods include the deep Q-network (DQN) \cite{mnih2015human} and Soft Actor-Critic (SAC) \cite{haarnoja2018soft}.
These methods involve the approximation of high-dimensional and nonlinear functions, usually through deep neural networks, which poses a significant challenge to convergence and stability \cite{bhatnagar2009convergent,henderson2018deep}.
It is also well known that off-policy Q-learning is not guaranteed to converge even with linear function approximation \cite{baird1995residual}.
Moreover, recent studies \cite{Kumar2019-tc,Fujimoto2018-yd} identify some other key sources of instability for off-policy methods, i.e., bootstrapping and extrapolation errors.
As shown in \cite{Kumar2019-tc}, off-policy methods are highly sensitive to data distribution, and can only make limited progress without exploiting additional on-policy data.
In addition to the pros and cons discussed above, on-policy and off-policy methods based on temporal difference learning suffer from some common issues.
The one that received much research attention is the so-called \emph{temporal credit assignment} problem \cite{sutton1985temporal}.
When rewards become sparse or delayed, which is quite common in real-world problems, DRL algorithms may yield an inferior performance as reward sparsity downgrades the learning efficiency and hinders exploration.
To alleviate this issue, evolutionary algorithms (EAs) \cite{fogel1995toward,spears1993overview} have recently been introduced to DRL \cite{pourchot2018cem,khadka2018evolution}.
The usage of a fitness metric that consolidates returns across the entire episode makes EAs indifferent to reward sparsity and robust to long time horizons \cite{salimans2017evolution}.
However, EAs suffer from high sample complexity and struggle to solve high-dimensional problems involving massive numbers of parameters.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{images/CHDRL.png}
\caption{The high-level structure of CHDRL for one iteration.}
\label{fig:chdrl}
\end{figure}
In this paper, we are interested in an algorithm that takes the essence and discards the dross of different DRL algorithms to achieve high sample efficiency and maintain good stability in various continuous control tasks.
To do so, we propose a framework called CHDRL.
Specifically, CHDRL works on an agent pool containing three classes of agents: an off-policy agent, an on-policy agent, and a population-based EA agent.
All the agents cooperate based on the following three mechanisms.
Firstly, all agents collaboratively explore the solution space following a hierarchical policy transfer rule.
As the off-policy agent is sample-efficient, we take it as the global agent to obtain a relatively good policy or value function at the beginning.
The on-policy agent and the population-based EAs agent are taken as local agents and start their exploration with the prior knowledge transferred from the global agent.
As the EAs agent is population-based, we further allow it to accept policies from the on-policy agent.
Secondly, we employ a local-global memory replay to enable global (off-policy) agents to replay the newly generated experiences by local (on-policy) agents more frequently so that global agents can benefit from local search.
Note that, with policy transfer as stated above, local agents start exploration with a policy transferred from global agents, and thus their generated experiences can be taken as close to the on-policy data of global agents' current policy \cite{Kumar2019-tc,Fujimoto2018-yd}.
By allowing global agents to exploit these local experiences more often, we can alleviate the bootstrapping or extrapolation errors and further boost the global agents' learning.
Consequently, global agents provide a better starting point for local agents who in turn generate more diverse local experiences for global agents' replay, which forms a good win-win cycle.
Thirdly, although we encourage the cooperation among agents in exploration, we also tend to maintain the independence of each agent; that is, we do not want the learning of local agents to be completely dominated by that of global agents.
This is to enable each agent to still maintain its policy updating scheme and preserve its learning advantage.
To do so, we firstly develop a loosely coupled hierarchical framework with global agents at the upper-level and local agents at the lower-level\footnote{Policy transfer only happens from upper-level agents to lower-level agents.}.
Such a framework not only makes each agent generally run in a relatively independent environment with different random settings, but also achieves the easy and flexible deployment or replacement of the agent candidates used in the framework.
Secondly, to avoid over-policy-transfer, i.e., policy transfer happening too frequently thus interrupting the learning stability of local agents, we set a threshold to control the frequency of policy transfer.
The high-level structure of CHDRL is shown in Figure \ref{fig:chdrl}.
In this work, we instantiated CHDRL with PPO, SAC, and a Cross-Entropy-Method (CEM) based EA \cite{stulp2012path}, named CSPC.
Experimental studies showed the superiority of CSPC over several state-of-the-art baselines on a range of continuous control benchmarks.
We also conducted ablation studies to verify the three mechanisms.
\section{Preliminaries}
In this section, we review representative on-policy, off-policy, and evolutionary methods, namely PPO \cite{schulman2017proximal}, SAC \cite{haarnoja2018soft}, and the Cross-Entropy based EA \cite{stulp2012path}.
\subsection{Proximal Policy Optimization (PPO)}
PPO is an on-policy algorithm that trains a stochastic policy.
It explores by sampling actions according to the latest version of its stochastic policy.
During training, the policy typically becomes progressively less random, as the update rule encourages it to exploit rewards that it has already found.
PPO tries to keep new policies close to the old ones.
\subsection{Soft Actor-critic (SAC)}
SAC is an off-policy algorithm that incorporates an entropy measure of the policy into the reward to encourage exploration.
The idea is to learn a policy that acts as randomly as possible while still being able to succeed in the task.
It is an off-policy actor-critic model that follows the maximum entropy RL framework.
The policy is trained with the objective of maximizing the expected return and entropy at the same time.
\subsection{Evolutionary Algorithms and CEM-ES}
EAs \cite{fogel1995toward,spears1993overview} are a class of black-box search algorithms that apply heuristic search procedures inspired by natural evolution.
Among EAs, Estimation of Distribution Algorithms (EDAs) are a specific family where the population is represented as a distribution using a covariance matrix \cite{larranaga2001estimation}.
CEM is a simple EDA where the number of elite individuals is fixed at a certain value.
After all individuals of a population are evaluated, the top fittest individuals are used to compute the new mean and variance of the population.
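The CEM generation described above can be sketched as follows (a minimal NumPy sketch with illustrative hyper-parameter values; the small additive noise term, in the spirit of CEM-RL, keeps the variance from collapsing prematurely):

```python
import numpy as np

def cem_update(mean, var, fitness_fn, pop_size=10, n_elite=5, noise=1e-2):
    """One CEM generation: sample a population, evaluate it, and refit
    the search distribution to a fixed number of elite individuals."""
    # Sample candidates around the current mean with per-dimension variance.
    population = mean + np.sqrt(var) * np.random.randn(pop_size, mean.size)
    # Evaluate every individual with the black-box fitness function.
    scores = np.array([fitness_fn(ind) for ind in population])
    # Keep the top-fittest (elite) individuals.
    elites = population[np.argsort(scores)[-n_elite:]]
    # Elites define the next mean/variance; noise avoids premature collapse.
    return elites.mean(axis=0), elites.var(axis=0) + noise
```

For instance, iterating this update on the fitness $-(x-3)^2$ drives the mean of the search distribution toward $3$.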
\section{Related Works}
Experience replay mechanism \cite{lin1992self} is widely used in off-policy reinforcement learning to improve sample efficiency.
DQN \cite{mnih2015human} randomly and uniformly samples experience from a replay memory.
\cite{schaul2015prioritized} subsequently extends DQN with prioritized experience replay (PER), which uses the temporal-difference error to prioritize experiences.
Zhang et al. \cite{zhang2019asynchronous} introduce an episodic control experience replay method to quickly latch on to good trajectories.
Our local-global memory uses a different strategy: let the off-policy agent learn more from effective on-policy experiences.
CHDRL's cooperative learning mechanism can be discussed in terms of guided policy search (GPS) \cite{ghosh2017divide,jung2020population} or evolutionary reinforcement learning (ERL) \cite{pourchot2018cem,khadka2018evolution,khadka2019collaborative}.
GPS methods generally use the KL divergence to guide how policies are improved.
ERL \cite{khadka2018evolution} directly transfers the RL agent's policy to the EA population, while Pourchot et al. \cite{pourchot2018cem} use the RL critic to update half of the EA population with a gradient-based technique.
The proposed CHDRL is related to GPS and ERL in the sense that multiple policies work in a hybrid way.
However, the main difference between CHDRL and other similar methods is how heterogeneous agents cooperate.
Moreover, CHDRL can benefit not just from off-policy and EA learning schemes but also from the on-policy learning scheme.
Another related area of work is in the training architectures.
A3C \cite{mnih2016asynchronous} introduces an asynchronous training framework for deep reinforcement learning, showing that parallel actor-learners have a stabilizing effect on training. Babaeizadeh et al. \cite{babaeizadeh2016reinforcement} adapt this approach to make efficient use of GPUs.
IMPALA \cite{pmlr-v80-espeholt18a} uses a central learner to run SGD while asynchronously pulling sample batches from many actor processes.
Horgan et al. \cite{horgan2018distributed} propose a distributed architecture for training DRL that employs many actors to explore with different policies while prioritizing the generated experiences.
Zheng et al. \cite{c2hrl} introduce a training method that selects the best agent for different tasks.
All these methods focus on only one learning scheme, and/or all the actors involved are treated equally.
On the contrary, CHDRL distinguishes global actors from local actors, which serve different purposes.
Moreover, CHDRL focuses on the cooperation of diverse learning schemes.
\section{Cooperative Heterogeneous Deep Reinforcement Learning (CHDRL)}\label{chdrl}
In this section, we firstly introduce the proposed CHDRL framework and then suggest a practical algorithm based on it.
Our CHDRL mainly follows three mechanisms to achieve cooperative learning among heterogeneous agents: cooperative exploration (CE), local-global memory replay (LGM), and distinctive update (DU).
\textbf{Cooperative Exploration (CE)}. The key idea of CE is to utilize a sample-efficient agent, such as an off-policy agent, to guide the exploration of the agent with a relatively lower sample efficiency, e.g., an on-policy agent.
This is done by transferring policies across agents.
More precisely, the sample-efficient agent acts as a global agent and conducts a global search first.
In every iteration, we want to use the policy and/or value function obtained by the global agent as the prior knowledge to re-initialize local agents so that they can start to exploit from a relatively better position.
To do so, we need to address three key points: what to transfer, how to transfer, and when to transfer, following the basic mechanism of transfer learning \cite{pan2009survey,wei2016deep}.
\emph{What to Transfer.} Different agents may have different policy architectures.
The policy could be deterministic, where it is denoted by $a\doteq\mu_\phi(s)$, or stochastic, where it is denoted by $a \sim \pi_\phi(\cdot | s)$.
In continuous control tasks, the stochastic policy is usually assumed to be sampled from a Gaussian distribution, and thus it can be represented as:
\begin{equation} \nonumber
a \doteq \mu_{\phi}(s) + \Sigma
\end{equation}
where $\mu_{\phi}(s)$ is the mean action and $\Sigma$ represents a covariance matrix.
Typically, $\Sigma$ may have different forms, e.g., PPO uses a state-independent $\Sigma$ while SAC utilizes a state-dependent one.
However, a similar mean policy architecture $\mu_\phi(s)$ is used in different methods.
Inspired by this, we propose to use the structurally identical mean function $\mu_\phi(s)$ to establish a link between the deterministic and stochastic policies.
Then the policy among heterogeneous agents can be shared by transferring $\mu_\phi(s)$.
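The mean-function sharing could look like the following sketch (class and attribute names are hypothetical; only the structurally identical parameters of $\mu_\phi(s)$ are copied, while each agent keeps its own covariance and update scheme):

```python
import copy

class StochasticPolicy:
    """Sketch of a PPO/SAC-style policy: a ~ N(mu_phi(s), Sigma)."""
    def __init__(self, mean_weights, log_std):
        self.mean_weights = mean_weights  # parameters of the mean function mu_phi
        self.log_std = log_std            # agent-specific covariance, never shared

class DeterministicPolicy:
    """Sketch of a CEM individual: a = mu_phi(s)."""
    def __init__(self, mean_weights):
        self.mean_weights = mean_weights

def transfer_policy(upper, lower):
    # Copy only the shared mean-function parameters from the upper-level
    # agent to the lower-level agent; the receiver keeps everything else.
    lower.mean_weights = copy.deepcopy(upper.mean_weights)
```

Deep-copying keeps the receiving agent's parameters independent, so its subsequent updates do not overwrite the sender's policy.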
\emph{How to Transfer.} As shown in Figure \ref{fig:chdrl}, policies are transferred following a hierarchical manner.
The principle is that policies are transferred from upper-level agents with higher sample efficiency to the lower-level agents with lower sample efficiency.
More specifically, policies are transferred (1) from off-policy agents to both on-policy agents and EAs agents, and (2) from on-policy agents to EAs agents. Note that EAs agents are population-based, and thus we allow them to accept the on-policy agent's policy to maximize the transfer capacity.
To avoid collisions, we use different individuals of EAs' population to accept policies from different upper-level agents.
As EAs agents accept policies from both off-policy and on-policy agents, they naturally serve as a pool that stores all the transferred policies.
\emph{When to Transfer.} Policy transfer happens only when upper-level agents find a better policy than the current one of lower-level agents.
Lower-level agents then re-initialize the exploration with the policy transferred from their upper-level agents as the new starting point.
In order to compare the performance of policies, we use the average return as the evaluation metric.
To be statistically stable, we use the average return over five episodes as a policy's performance score.
Moreover, to prevent policy transfer from happening so frequently that it interrupts the learning stability of lower-level agents, we enable policy transfer only when the performance gap is larger than a predefined threshold.
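Putting the evaluation metric and the threshold together, the transfer trigger reduces to a simple comparison (a minimal sketch; the default gap of 100 matches the value of $f$ used in our experiments):

```python
def should_transfer(upper_returns, lower_returns, gap=100.0):
    """Trigger policy transfer only when the upper-level agent's average
    return over the evaluation episodes beats the lower-level agent's
    by more than the predefined threshold f (the gap)."""
    upper_score = sum(upper_returns) / len(upper_returns)
    lower_score = sum(lower_returns) / len(lower_returns)
    return upper_score - lower_score > gap
```

For example, five-episode averages of 900 vs. 700 would trigger a transfer, while 750 vs. 700 would not.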
\textbf{Local-Global Memory Replay (LGM)}: Off-policy agents can make more progress when considering on-policy data in their learning \cite{Kumar2019-tc,Fujimoto2018-yd}.
Following this observation, we employ a local-global memory replay mechanism to enable global off-policy agents to benefit from diverse local experiences from both on-policy agents and EAs agents.
In particular, we propose two memory buffers -- a global one and a local one -- to store the generated exploration experiences.
The global memory serves to store the entire exploration experiences of all the agents, while the local memory only stores the recently generated ones.
Thus, the global memory has an expandable size that grows during learning, while the local memory has a fixed size with a first-in-first-out rule.
Whenever new experiences arrive, the earliest saved experiences in the local memory are overridden.
We aim to use the experience saved in the local memory to simulate on-policy data.
However, instead of exploiting a brute-force storage that indiscriminately saves every new episode experience, we set an intuitive rule to determine whether to store an experience in local memory or not.
Specifically, we only save a newly generated episode from a local agent when (1) the local agent successfully accepts a policy from the global agent \footnote{It ensures the local experiences are close to the on-policy data of the global agent's current policy.}, and (2) when its episode return is not worse than the minimum of all agents' current performance.
By doing so, we can avoid out-of-distribution data being saved in local memory to some extent, so as to reduce variance and stabilize learning \cite{Kumar2019-tc}.
We then allow global agents to replay experiences from the two memories drawn from a Bernoulli distribution, that is, sample experiences from the local memory with a probability $p$, and from the global memory with a probability $1-p$.
Such a local-global memory replay mechanism plays a very important role in guaranteeing that global agents consistently benefit from on-policy data: if only a single global memory buffer were used, the probability of sampling a newly generated experience from it would become lower and lower as more experiences are saved over the course of learning.
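The two buffers and their storage and sampling rules can be sketched as follows (a minimal sketch with illustrative names; transitions are abstracted as opaque items):

```python
import random
from collections import deque

class LocalGlobalMemory:
    """Expandable global buffer plus a fixed-size FIFO local buffer;
    the global agent samples from the local buffer with probability p."""
    def __init__(self, local_size=20000, p=0.3):
        self.global_mem = []                       # grows throughout learning
        self.local_mem = deque(maxlen=local_size)  # FIFO: oldest overridden
        self.p = p

    def store(self, episode, episode_return, min_agent_score, transferred):
        # Every episode goes to the global memory.
        self.global_mem.extend(episode)
        # The local memory keeps only episodes generated after a successful
        # policy transfer whose return is not worse than the weakest agent.
        if transferred and episode_return >= min_agent_score:
            self.local_mem.extend(episode)

    def sample(self, batch_size):
        # Bernoulli(p): local memory with prob p, global with prob 1 - p.
        use_local = self.local_mem and random.random() < self.p
        source = list(self.local_mem) if use_local else self.global_mem
        return random.sample(source, min(batch_size, len(source)))
```

The fixed-size `deque` implements the first-in-first-out override rule, while the plain list for the global memory simply keeps growing.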
\begin{table}
\begin{minipage}{0.49\linewidth}
\begin{algorithm}[H]
\footnotesize
\caption{CSPC}
\label{alg:CSPC}
\begin{algorithmic}[1]
\REQUIRE~~\\
$G_s$ with policy $\pi_s\doteq \mu_{\phi_s}(s)+\Sigma_s$ and value $\psi_s$; $L_p$ with $\pi_p\doteq\mu_{\phi_p}(s)+\Sigma_p$ and value $\psi_p$; local memory $M_l$, global memory $M_g$;
Iteration steps $T$;
$L_c$ with policies $\mu_{\phi_{c_0}}(s),...,\mu_{\phi_{c_n}}(s)$; initial steps $T_g$; gap $f$, termination step $T_m$, and initial test scores $S_s,S_p,S_c$. Initialize transfer labels $A_p,A_c$ to False.\\
\REPEAT
\STATE \textbf{TRAIN}($G_s,T_g$), $t\gets t+T_g$
\FOR{Agent $a$ in $G_s,L_p,L_c$}
\STATE \textbf{TRAIN}($a,M_l,M_g,T$)
\IF{$a$ is not $G_s$}
\STATE \textbf{UPDATE}($\phi_s,M_l,M_g,T$)
\ENDIF
\STATE $t\gets t+T$
\ENDFOR
\STATE Update test scores $S_s,S_p$ and $S_c$
\IF{$S_s-S_p>f$}
\STATE $\phi_p \gets \phi_s,\psi_p\gets\psi_s,A_p\gets$True
\ENDIF
\IF{$S_s-S_c>f$}
\STATE $\phi_{c0}\gets\phi_s,A_c\gets$True
\ENDIF
\IF{$S_p-S_c>f$}
\STATE $\phi_{c1}\gets\phi_p$
\ENDIF
\UNTIL{$t>T_m$}
\end{algorithmic}
\end{algorithm}
\end{minipage}
\hspace{10pt}
\begin{minipage}{0.49\linewidth}
\begin{algorithm}[H]
\footnotesize
\caption{\textbf{TRAIN}}
\label{alg:train}
\begin{algorithmic}[1]
\REQUIRE~~\\
Input agent $a$,\\
training steps $T_a$,\\
episode reward $R=0$, \\
$R_m\gets\min(S_s,S_p,S_c)$,\\
step $t=0,t_e=0$, \\
global memory $M_g$, local memory $M_l$,\\
episode memory $M_e$.
\REPEAT
\STATE Observe state $s$ and select action $a\sim\mu_{\phi_s}(s)+\Sigma_s$ or $a\sim\mu_{\phi_p}(s)+\Sigma_p$ or $a\doteq\mu_{\phi_{c_i}}(s)$
\STATE Execute $a$ in the environment
\STATE Observe next state $s'$, reward $r$, and done signal $d$
\STATE Store $(s,a,r,s',d)$ in $M_e$, $R\gets r+R$
\STATE $t\gets t+1,t_e\gets t_e+1$
\IF{$s'$ is terminal}
\STATE $\phi'\gets \textbf{UPDATE}(\phi,M_g,M_l,t_e)$ where $\phi \in \{\phi_s, \phi_p, \phi_c\}$
\IF{$R>R_m$ and ($a$ is $G_s$ or $A_p$ or $A_c$ is True)}
\STATE Store $M_e$ in $M_l$ and $M_g$
\ELSE
\STATE Store $M_e$ in $M_g$
\ENDIF
\STATE $R\gets0,M_e\gets [],t_e\gets 0$
\ENDIF
\UNTIL{$t>T_a$}
\end{algorithmic}
\end{algorithm}
\end{minipage}
\end{table}
\textbf{Distinctive Update (DU)}: Although global agents guide local agents for exploration, each agent still maintains its own policy updating schemes to preserve learning advantages.
When an agent accepts a policy from its upper-level agent, it keeps updating using its update algorithms, e.g., policy gradient, starting from the accepted policy.
This is naturally achieved by the hierarchical framework stated above as well as by the performance gap determining when to transfer.
To understand CHDRL better, we provide a CHDRL instantiation, which employs a state-of-the-art off-policy agent (SAC), an on-policy agent (PPO), and an EA agent (CEM), called Cooperative SAC-PPO-CEM (CSPC).
The pseudo-code of the instantiated CSPC is presented in detail in Algorithms \ref{alg:CSPC} to \ref{alg:update-CSPC}.
$G_s,L_p,L_c$ represent global off-policy agent SAC, local on-policy agent PPO and EA agent CEM respectively.
Algorithm \ref{alg:CSPC} shows the general learning flow of CSPC.
Firstly, global agent $G_s$ is trained for specific steps $T_g$.
This is to ensure the off-policy agent reaches a relatively good solution.
Afterwards, we orderly train $G_s$, $L_p$, and $L_c$ to search the solution space for one iteration step $T$.
Note that $G_s$ keeps learning from the experiences when other agents explore.
After that, we evaluate the updated agents to get their new policy scores $S_s,S_p$, and $S_c$.
We then transfer policies based on these updated scores following the above principle of policy transfer.
Specifically, if the score $S_s$ is better than $S_p$ (resp. $S_c$) by at least $f$, we re-initialize $L_p$ (resp. one individual of $L_c$) with $G_s$'s policy.
A similar transfer is done from $L_p$ to $L_c$.
Algorithm \ref{alg:CSPC} shows what, how, and when to transfer policies, which are the three key factors in \textbf{CE}.
Lines 9-13 in Algorithm \ref{alg:train} show how generated experiences are stored in global memory or local memory.
Lines 3-8 in Algorithm \ref{alg:update-CSPC} show how global agents replay experiences from the global and local memories.
These lines together constitute the implementation of \textbf{LGM}.
Lastly, lines 9, 14, and 17 reflect \textbf{DU}, where each agent updates following its own update rules.
The above procedure proceeds iteratively until termination.
Note that CHDRL also accepts the same type of agents.
In this case, cooperation only exists between the global agent and local agent, not across local ones.
In the ablation study, we test a case where three off-policy agents are used in CHDRL.
Moreover, our CHDRL is loosely coupled in the sense that it is flexible enough to involve any other agents, e.g., DQN \cite{mnih2015human} and TRPO \cite{schulman2015trust} etc., into it.
\begin{algorithm}[t]
\footnotesize
\caption{\textbf{UPDATE}}
\label{alg:update-CSPC}
\begin{algorithmic}[1]
\REQUIRE~~\\
Agent $a_\phi$, update steps $t_u$, step $t=0$, sample probability $p$;
Global shared memory $M_g$, local memory $M_l$;
\IF{$a$ is $G$}
\WHILE{$t<t_u$}
\STATE Sample $o\sim\mathrm{Bernoulli}(p)$ with $o \in \{0,1\}$
\IF{$o = 1$}
\STATE Randomly sample a batch $B$ from $M_l$
\ELSE
\STATE Randomly sample a batch $B$ from $M_g$
\ENDIF
\STATE Update agent's policy $\phi_s$ and value function $\psi_s$ following \cite{haarnoja2018soft}
\STATE $t\gets t+1$
\ENDWHILE
\ENDIF
\IF{$a$ is $L_p$}
\STATE Update agent's policy $\phi_p$ and value function $\psi_p$ following \cite{schulman2017proximal}.
\ENDIF
\IF{$a$ is $L_c$}
\STATE Update agent's new mean $\pi_{\mu_c}$ and covariance matrix $\Sigma_c$ following \cite{stulp2012path}.
\STATE Draw the current population $L_c$ from $\mathcal{N}(\pi_{\mu_c},\Sigma_c)$.
\ENDIF
\end{algorithmic}
\end{algorithm}
\section{Experiments}
We conducted an empirical evaluation to verify the performance superiority of CSPC to other baselines, and ablation studies to show the effectiveness of each mechanism used in CHDRL.
\subsection{Experiment Setup}
All the evaluations were done on a continuous control benchmark: Mujoco \cite{todorov2012mujoco}.
We used state-of-the-art SAC, PPO and CEM to represent the off-policy agent, on-policy agent, and EA, respectively.
Note that other off-policy (e.g., TD3), on-policy (e.g., TRPO) and gradient-free agents (e.g. CEM-ES), are applicable to our framework.
For SAC, PPO and CEM, we used the code from OpenAI Spinning Up for the first two, and the code from CEM-RL for CEM \footnote{OpenAI Spinning Up: github.com/openai/spinningup; CEM-RL: github.com/apourchot/CEM-RL}.
For hyper-parameters in these methods, we followed the defaults specified by the authors.
For CSPC, we set the gap $f$ as 100, global agent initial learning steps $T_g$ as $5e4$, iteration time steps $T$ as $1e4$, global memory size $M_g$ as $1e6$, local memory size $M_l$ as $2e4$, and sample probability from local memory $p$ as $0.3$.
\subsection{Comparative Evaluation}\label{experiments:soat}
We evaluated CSPC on five continuous control tasks from Mujoco in comparison to three baselines: SAC, PPO, and CEM.
We also used SAC, CEM, and PPO as our candidate agents in CSPC.
We ran the training process for all the methods over one million time steps on four tasks with five different seeds, and for the Swimmer-v2 task, we ran it for four million time steps.
Time steps are accumulated interaction steps with the environment.
For a fair comparison, we used the accumulated time steps of the three algorithms used in CSPC. Specifically, we summed up each agent's time steps so that the total time steps stayed consistent with those of the other baselines.
The final performance was reported as the max average return over 5 independent trials for each seed.
We reported the scores of all the methods compared against the number of time steps.
Figure \ref{fig:soat} shows the comparison results for all methods on five Mujoco learning tasks.
From the results, we first observe that there is no clear winner among the existing state-of-the-art baselines SAC, PPO, and CEM in terms of stability and sample efficiency.
None of them consistently outperforms the others on the five learning tasks.
Specifically, for four of the five tasks (all except Swimmer), SAC yields better results than PPO and CEM, which verifies its sample efficiency in the long run.
However, we can also observe a significant variance for SAC, which indicates its high instability, especially on the Ant task.
In contrast, PPO and CEM have lower variance but achieve unsatisfactory average returns.
A special case is the Swimmer task, where both SAC and PPO fail to learn a good policy but CEM succeeds.
Figure \ref{fig:soat} also demonstrates that our proposed CSPC performs consistently better than, or comparably to, the best baseline methods on all tasks.
This verifies the capability of CSPC to improve the performance of each individual agent by utilizing the cooperation among them.
On the Swimmer task, where both gradient-based methods fail, CSPC still achieves a result comparable to CEM's.
This is because CSPC cannot benefit from SAC and PPO there and only maintains the capacity of CEM.
Table \ref{table:MaxReturn} shows the maximum average return for each method.
\begin{figure}[t]
\centering
\resizebox{\linewidth}{!}{
\subfloat[Hopper-v2]{\includegraphics[width = 2in,height=1.25in]{images/Hopper.png}}
\subfloat[Walker2d-v2]{\includegraphics[width = 2in,height=1.25in]{images/Walker2d.png}}
\subfloat[Ant-v2]{\includegraphics[width = 2in,height=1.25in]{images/Ant.png}}} \\
\subfloat[Humanoid-v2]{\includegraphics[width = 1.8in,height=1.1in]{images/Humanoid.png}}
\subfloat[Swimmer-v2]{\includegraphics[width = 1.8in,height=1.1in]{images/Swimmer.png}}
\caption{Training curves on Mujoco continuous control tasks.}
\label{fig:soat}
\end{figure}
\begin{table}[t]
\footnotesize
\begin{minipage}{0.45\linewidth}
\caption{The max average return.}
\label{table:MaxReturn}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{ccccc}
\toprule
Task & CSPC & PPO & SAC & CEM \\
\midrule
Humanoid-v2 & \textbf{5412}$\pm$239 & 626$\pm$23 & 5142$\pm$133 & 616$\pm$88 \\
Ant-v2 & \textbf{5337}$\pm$220 & 1169$\pm$207 & 3766$\pm$2359 & 1019$\pm$33 \\
Walker2d-v2 & \textbf{5317}$\pm$256 & 1389$\pm$387 & 4222$\pm$290 & 1041$\pm$65 \\
Hopper-v2 & \textbf{3619}$\pm$52 & 2923$\pm$88 & 3558 $\pm$139 & 1057$\pm$53 \\
Swimmer-v2 & 261$\pm$117 & 68$\pm$31 & 44$\pm$3 & \textbf{274}$\pm$118 \\
\bottomrule
\end{tabular}}
\end{minipage}
\hspace{1pt}
\begin{minipage}{0.49\linewidth}
\caption{The elite agent.}
\label{table:elite_agent}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{cccccc}
\toprule
Task & Humanoid-v2 & Ant-v2 & Walker2d-v2 & Hopper-v2 & Swimmer-v2 \\
\midrule
seed 0 & SAC & SAC & PPO & CEM & CEM \\
seed 1 & CEM & SAC & CEM & CEM & CEM \\
seed 2 & PPO & CEM & CEM & CEM & CEM \\
seed 3 & PPO & CEM & CEM & SAC & CEM \\
seed 4 & SAC & PPO & CEM & CEM & CEM \\
\bottomrule
\end{tabular}}
\end{minipage}
\end{table}
One may wonder about the possible computation cost of CSPC. In our experiments, it mainly comes from the global agent, as it keeps learning from other agents' experiences in the background. The local agents run much faster than the global agent, especially the CEM agent, as it is gradient-free. The total running time of CSPC is only slightly longer than that of the SAC agent.
\subsection{Local Agent vs Global Agent}
The main motivation of this study is to figure out whether local agents really help in finding the best final policy in different random settings.
To do so, we show the elite agent, that is, the agent yielding the best performance among the heterogeneous agents after training has terminated, under different random seeds.
The results are shown in Table \ref{table:elite_agent}.
It can be seen that CSPC could obtain different elite agents on the same task under different random seeds.
Such an observation indicates that local search agents do help to find a better policy around the global guided agent.
Surprisingly, the EA-based CEM agent performs better than the other local agent (PPO) in most cases.
However, on the complex task, Humanoid-v2, the gradient-based agents perform much better than CEM.
\begin{figure}[t]
\resizebox{\linewidth}{!}{
\centering
\subfloat[Walker2d-v2\label{figure:Walker2d-hm-cl-gm}]{\includegraphics[width = 2in,height=1.15in]{images/CSPC-LM-CE-GM.png}}
\subfloat[Walker2d-v2\label{figure:Walker2d-ppo-cem-sac}]{\includegraphics[width = 2in,height=1.15in]{images/CSPC-PPO-SAC-CEM.png}}
\subfloat[Walker2d-v2\label{figure:Walker2d-3s}]{\includegraphics[width = 2in,height=1.15in]{images/Walker2d_C3SAC.png}}} \\
\resizebox{\linewidth}{!}{
\subfloat[Swimmer-v2\label{figure:Swimmer-LM-CL-HM}]{\includegraphics[width = 2in,height=1.15in]{images/Swimmer-LM-CL-HM.png}}
\subfloat[Swimmer-v2\label{figure:Swimmer_PPO_CEM_SAC}]{\includegraphics[width = 2in,height=1.15in]{images/Swimmer_PPO_CEM_SAC.png}}
\subfloat[Swimmer-v2\label{figure:swimmer_c3sac_3sac_sac}]{\includegraphics[width = 2in,height=1.15in]{images/Swimmer_3SAC_C3SAC.png}}}
\caption{Ablation study on two tasks: Walker2d and Swimmer.}
\label{figure:CHDRL-ab}
\end{figure}
\subsection{Ablation Studies}\label{experiments:as}
In this section, we conducted ablation studies to understand the contributions of each key component of CSPC.
To do this, we built three variants of CSPC: CSPC without cooperative exploration (CE), i.e., CSPC-CE; CSPC without local memory (LM), i.e., CSPC-LM; and CSPC without global memory (GM), i.e., CSPC-GM.
Specifically, in CSPC-CE, we stopped the policy transfer and let each agent explore and exploit by itself.
In CSPC-LM, the off-policy agent SAC replays from all experiences uniformly.
In CSPC-GM, the off-policy agent SAC only learns from its own experiences.
We further analyzed the influence of each individual agent on CSPC.
To do so, we developed a CSPC without PPO, called CSPC-PPO, a CSPC without CEM, called CSPC-CEM, and a CSPC without SAC, called CSPC-SAC.
As CHDRL also allows agents of the same type, to verify that heterogeneous agents indeed matter, we proposed a variant of CHDRL consisting of only one type of agent.
In this case, we introduced two variants: three SAC agents with CE and LGM, and three SAC agents without them.
We called the former C3SAC and the latter 3SAC.
For 3SAC, the three agents only shared global memory and no policy transfer existed.
We evaluated all the variants on Walker2d-v2 and Swimmer-v2.
The results are shown in Figure \ref{figure:CHDRL-ab}.
As shown in Figure \ref{figure:CHDRL-ab}, for the Walker2d-v2 task, CSPC achieved the best final average performance among all the ablation variants.
From Figure \ref{figure:CHDRL-ab}(a), it is easy to deduce that LGM and CE indeed matter in CSPC, as without these two elements, the final performance drops quickly.
From Figure \ref{figure:CHDRL-ab}(b), we can see that the results of CSPC-PPO and CSPC-CEM are satisfactory and only slightly worse than that of CSPC, while the result of CSPC-SAC dramatically decreases.
This implies that the global agent has a more significant impact on the final performance than local agents.
This is reasonable, as the global agent determines the starting position of CSPC and strongly affects the subsequent search efficiency.
Note that CSPC-PPO and CSPC-CEM are CSPC without one specific local agent, but still follow CHDRL's core mechanism: CE and LGM.
From the fact that their performance is much better than that of CSPC-CE and CSPC-LM/GM, we again verify the significance of LGM and CE.
From Figure \ref{figure:CHDRL-ab}(c), we can see that C3SAC performs better than 3SAC and SAC.
Even though the three agents are of the same type, local agents still provide a diverse local search, as they explore under different random settings.
However, our CSPC performs much better than C3SAC, while 3SAC performs only slightly better than SAC.
From this, we deduce that CHDRL still improves the performance when using agents of the same type, but using heterogeneous agents further boosts the performance.
For the Swimmer-v2 task, the results are different as SAC and PPO agents typically fail on this task.
In other words, the global agent is incapable of finding a relatively good position, and only the CEM agent works.
The most likely explanation is that in Swimmer-v2, existing DRL methods provide deceptive gradient information that is detrimental to convergence towards efficient policy parameters \cite{pourchot2018cem}.
Hence, LM/GM/CE cannot enhance the final performance, as shown by CSPC-LM, CSPC-GM and CSPC-CE in Figure \ref{figure:CHDRL-ab}(d). In this case, the learning curves of the three methods mostly overlap.
On the other hand, CSPC-PPO and CSPC-SAC gain a better final performance than CSPC, which is also reasonable, as the CEM agent runs for more iterations, leading to a better final performance, as shown in Figure \ref{figure:CHDRL-ab}(e).
For the same reason, C3SAC and 3SAC both fail.
\section{Conclusion}
In this paper, we present CHDRL, a framework that incorporates the benefits of off-policy agents, on-policy policy gradient agents and EA agents.
The proposed CHDRL is based on three key mechanisms, i.e., cooperative exploration, local-global memory and distinctive update.
We also provide a practical algorithm CSPC by using SAC, PPO, and CEM.
Experiments in a range of continuous control tasks show that CSPC achieves a better or comparable performance compared with baselines.
We also note that CHDRL introduces some new hyper-parameters which may have a crucial impact on performance; however, we did not tune them extensively.
Moreover, we should carefully select the agents, as the final performance highly depends on the agents used, particularly the global one.
\section*{Broader Impact}
The DRL agent that learns from an incompletely known environment runs the risk of making wrong decisions. This could lead to catastrophic consequences in practice, such as automated driving, the stock market, or medical robots.
One approach to alleviating this risk is to combine DRL with other techniques or to involve human supervision.
In terms of benefits, DRL can be deployed in safe environments where a wrong decision will not lead to a significant loss, e.g., recommendation systems.
Moreover, in some environments that we can simulate well, it would be very promising to develop an intelligent robot to work in such an environment.
\begin{ack}
This research is partially funded by the Australian Government through the Australian Research Council (ARC) under grant LP180100654.
\end{ack}
\bibliographystyle{unsrt}
|
{
"timestamp": "2020-11-03T02:37:45",
"yymm": "2011",
"arxiv_id": "2011.00791",
"language": "en",
"url": "https://arxiv.org/abs/2011.00791"
}
|
\section{Background\label{sec:background}}
\input{images/restrictions/restrictions}
We develop an algorithm which grows sparse roadmaps over fiber bundles to efficiently solve high-dimensional planning problems. As background for this task, we review the topics of optimal motion planning, multilevel abstractions (modelled using fiber bundles) and sparse roadmaps.
\subsection{Optimal Motion Planning}
Let $X$ be an $n$-dimensional state space and let $\x_I$ and $\x_G$ be two states in $X$ which we call the initial and the goal state. To each state space, we associate a metric function $d: X \times X \rightarrow \R$ and a constraint function $\phi: X \rightarrow \{0,1\}$ which evaluates to zero if a state is feasible and to one otherwise. The state space thus splits into two components, the constraint-free subspace $\X_{\text{free}} = \{x \in X \mid \phi(x) = 0\}$ and its complement. We define the optimal motion planning problem as the tuple $A = (\X_{\text{free}}, \x_I, \x_G, J)$, which requires us to design an algorithm to find a continuous path from $\x_I$ to $\x_G$ while (1) staying exclusively inside $\X_{\text{free}}$ and (2) minimizing the cost functional $J$ which maps paths in $\X_{\text{free}}$ to real numbers.
We define a motion planning algorithm (a planner) as a mapping from $A$ to a path through $\X_{\text{free}}$. A planner can have several desirable properties. First, we would like a planner to be \emph{probabilistically complete}, meaning the probability of finding a solution path, if one exists, approaches one as time goes to infinity. Second, we would like a planner to be \emph{asymptotically near-optimal}, meaning the probability of finding a path that is at most $\epsilon$ worse than the optimal solution path (under cost functional $J$) approaches one as time goes to infinity. Third, we would like a planner to be \emph{asymptotically sparse}, meaning the probability of adding new nodes and edges converges to zero as time goes to infinity \cite{dobson_2014}.
\subsection{Multilevel Motion Planning}
Because state spaces are often too high-dimensional to plan in directly, we use multilevel abstractions which we model using fiber bundles \cite{steenrod_1951, lee_2003}. A fiber bundle is a tuple $(X, B, F, \pi)$, consisting of a bundle space $X$, a base space $B$, a fiber space $F$ and a projection mapping $\pi$ from $X$ to $B$. We assume that both the state space and the base space have associated constraint functions $\phi$ and $\phi_B$ and that the projection mapping $\pi$ is admissible w.r.t. the constraint functions, i.e. $\phi_B(\pi(x)) \leq \phi(x)$ for any $x$ in $X$ \cite{Orthey2019}. The admissibility condition ensures that feasible solution paths are preserved under projection. While we exclusively use product spaces in this work, we model them using fiber bundles since they provide a useful vocabulary (restrictions and sections) and since they are required for extensions to task-space projections.
Our approach uses the following three concepts. First, we define fibers over a base element $b$ in $B$ as $F(b) = \{x \in X\mid \pi(x) = b\}$, which is the set of points in $X$ projecting onto $b$. Please see Fig. \ref{fig:restriction:fiber} for an example of a fiber on the torus $T^2 = S^1 \times S^1$ with base space $S^1$. We additionally define the method $\textsc{Lift}: B \times F \rightarrow X$, which takes a base element $b$ and a fiber element $f$ in $F(b)$ to the bundle space. In the case of product spaces, we can define $\textsc{Lift}(b,f) = (b,f)$. Second, we define path restrictions over a base path $p: I \rightarrow B$ as $r(p) = \{x \in X \mid \pi(x) \in p[I]\}$, whereby $I$ is the unit interval and $p[I]$ is the image of the base path in $B$. Please see Fig. \ref{fig:restriction:path}. Third, we define graph restrictions over a graph $G_B = (V_B, E_B)$ on $B$ as $r(G_B) = \{x \in X \mid \pi(x) \in e[I], e \in E_B\}$, whereby $V_B$ is the set of vertices in $B$, $E_B$ is the set of edges in $B$ and $e[I]$ is the image of an edge on the base space. Fig. \ref{fig:restriction:graph} provides a visualization of a graph restriction (individual edge restrictions are drawn at different distances from the torus for better visualization). For more details, please see \cite{Orthey2020IJRR} or \cite{steenrod_1951}.
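For the product spaces used in this work, the projection $\pi$, the $\textsc{Lift}$ method and the fiber $F(b)$ can be sketched in a few lines of code. This is our own illustration; the function names are not from the authors' implementation:

```python
# Sketch of the product-space case X = B x F, where a bundle state is a
# pair (b, f). These helpers are our own illustration of the definitions.

def project(x):
    """pi: X -> B for a product space; drops the fiber component."""
    b, f = x
    return b

def lift(b, f):
    """Lift: B x F -> X; for product spaces simply the pair (b, f)."""
    return (b, f)

def fiber(b, samples):
    """F(b): all bundle-space samples projecting onto base element b."""
    return [x for x in samples if project(x) == b]
```

Note that `lift(project(x), f)` re-attaches an arbitrary fiber element to the base component of `x`, which is exactly how restriction sampling later combines base and fiber samples.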
\subsection{Sparse Roadmaps}
To grow a sparse roadmap, we use the algorithm by Dobson and Bekris \cite{dobson_2014}. The sparse roadmap planner is similar to probabilistic roadmaps \cite{Kavraki1996, Karaman2011}, but uses a visibility region $\ensuremath{\delta}$, which consists of all feasible states in the hypersphere of radius $\ensuremath{\delta}$ around a state, to prune samples. To implement the pruning step, we add a new feasible sample if and only if it fulfills a sparseness condition.
The sparseness condition consists of four elementary tests \cite{dobson_2014}. First, we test for coverage, meaning we add the sample if it does not lie in the visibility region of any sample in the graph. Second, we test for connectivity, meaning we add the sample, if it lies in multiple visibility regions, which belong to disconnected components of the sparse graph. Third, we test for interfaces, meaning we add the sample, if it lies in multiple visibility regions, which are not yet connected by an edge. Fourth and finally, we test for shortcuts, meaning we add the sample, if it provides proof of a shorter path through the free state space. We terminate the algorithm, if we either find a feasible path or if we fail $M$ consecutive times to add a sample to the sparse roadmap. For more details please see \cite{dobson_2014}.
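The four tests above can be sketched on a 1-D toy state space. This is our own simplified illustration (the shortcut test is omitted, and the graph bookkeeping is deliberately naive):

```python
class SparseGraph:
    """Minimal stand-in graph for illustration (not the authors' code)."""
    def __init__(self):
        self.nodes, self.edges = [], []

    def visible(self, x, delta):
        # 1-D metric: a node "sees" x if it lies within radius delta
        return [v for v in self.nodes if abs(v - x) <= delta]

    def component(self, v):
        # naive flood fill over edges to find v's connected component
        comp, changed = {v}, True
        while changed:
            changed = False
            for a, b in self.edges:
                if a in comp and b not in comp:
                    comp.add(b); changed = True
                if b in comp and a not in comp:
                    comp.add(a); changed = True
        return frozenset(comp)

def add_conditional(x, g, delta):
    """Sketch of the sparseness tests: coverage, connectivity,
    interfaces (the shortcut test is omitted for brevity)."""
    vis = g.visible(x, delta)
    if not vis:                                    # (1) coverage
        g.nodes.append(x)
        return "coverage"
    comps = {g.component(v) for v in vis}
    if len(comps) > 1:                             # (2) connectivity
        g.nodes.append(x)
        g.edges += [(x, v) for v in vis]
        return "connectivity"
    pairs = [(a, b) for a in vis for b in vis if a < b]
    if any((a, b) not in g.edges and (b, a) not in g.edges
           for a, b in pairs):                     # (3) interfaces
        g.nodes.append(x)
        g.edges += [(x, v) for v in vis]
        return "interface"
    return None                                    # rejected: graph stays sparse
```

The `None` branch is what makes the roadmap sparse: a sample inside an already-covered, already-connected region is simply discarded, which is also what drives the consecutive-failure counter $M$ upward.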
The sparse roadmap planner is probabilistically complete and asymptotically near-optimal \cite{dobson_2014}, and depends on the following parameters. First, the visibility region $\ensuremath{\delta}$, which is usually a fraction of the measure of the state space. Second, the maximum number of consecutive failures $M$. $M$ is important in the analysis of the algorithm, because it provides a probabilistic estimate of the free state space covered, defined as the fraction $1-\frac{1}{M}$ \cite{simeon_2002}. As an example, if we stop with $M=100$, our probabilistic estimate of the free state space covered is $99\%$. Finally, we have an additional parameter for testing for shortcuts, which provides a trade-off between optimality and efficiency \cite{dobson_2014}.
\section{Conclusion}
We presented the sparse multilevel roadmap planner (SMLR\xspace), which we believe to be the first algorithm to generalize sparse roadmap spanners \cite{dobson_2014} to fiber bundles \cite{Orthey2020IJRR}, which are models of multilevel abstractions. Our algorithm exploits multilevel abstractions using the notion of restriction sampling with visibility regions. We have shown SMLR to be asymptotically near-optimal and asymptotically sparse by showing that restriction sampling produces a dense sampling sequence. In evaluations, we showed SMLR to efficiently and correctly terminate on feasible and infeasible problems, even when those problems have narrow passages, intricate geometries or state spaces with up to $34$ degrees of freedom.
\section{Evaluation\label{sec:evaluation}}
\input{src/evaluations_table}
\input{images/evaluations/scenarios}
To evaluate SMLR\xspace, we compare its performance on eight scenarios against the algorithms SPARS and SPARS2 from the open motion planning library (OMPL). SPARS and SPARS2 are, to our knowledge, the only algorithms in OMPL which can return on infeasible scenarios without timing out. To ensure a fair comparison, we set the parameters of SMLR, SPARS and SPARS2 all to $M=1000$ and $\ensuremath{\delta} = 0.25\mu$, with $\mu$ being the measure of the state space (removing effects stemming from different parameter values). For SMLR, we use the parameter $\eta = 1000$, which designates how fast we expand the graph visibility region for restriction sampling.
While we would like our algorithm to correctly declare an infeasible problem as infeasible, we also need to make sure that the algorithm does not produce false negatives, i.e. declare a feasible problem to be infeasible. To ensure correctness, we always use two similar scenarios, one which is feasible and one which is infeasible. For all scenarios, we run each algorithm $10$ times with a time limit of $60$s. Our setup is an 8GB RAM, 4-core, 2.5GHz laptop running Ubuntu 16.04.
\subsection{6-dimensional Bugtrap}
Our first scenario is the classical narrow-passage Bugtrap scenario, where a cylindrical robot (the bug with $6$ degrees of freedom (dof)) has to escape a spherical object with a narrow exit (the trap), as shown in Fig.~\ref{fig:scenarios}. We use two versions, a feasible one with a bug which barely fits through the exit, and an infeasible one where the bug does not fit. As a simplification, we use an inscribed sphere which we describe using the fiber bundle $SE(3) \rightarrow \R^3$. We show the results in Table~\ref{table:evaluation}. While SMLR can solve (on average) both scenarios in $4.37$ and $2.47$s, respectively, both SPARS and SPARS2 time out after $60$s.
\subsection{6-dimensional Drone}
In the second scenario, we use a free-floating drone with $6$-dof. The drone has to traverse a room which is separated by a net. In the first version of the problem, we make the net large enough to let the drone fly through (the feasible problem). In the second version, we make the net finely woven to prevent the drone from passing (the infeasible problem). As a simplification, we use a sphere at the center of the drone. We model this situation with the fiber bundle $SE(3) \rightarrow \R^3$. For the feasible scenario, all three planners solve the problem, with SPARS2 taking $0.16$s, SMLR taking $0.23$s and SPARS taking $0.37$s. In the infeasible scenario, only SMLR solves the problem, in $0.72$s, while SPARS and SPARS2 both time out.
\subsection{7-dimensional KUKA LWR}
In the third scenario, we use a fixed-base KUKA LWR robot with $7$-dof, which has to transport a windshield through a gap in a wall (Fig.~\ref{fig:scenarios}). We create two versions, a feasible one with the gap in the wall and an infeasible one where we close the gap. As a simplification, we use a projection onto the first two links of the manipulator arm, which we describe using the fiber bundle $\R^7 \rightarrow \R^3$. With our algorithm SMLR, we can solve both scenarios in $1.42$s and $5.34$s. For the feasible scenario, SPARS requires $33.66$s (but times out in $4$ cases) and SPARS2 requires $34.86$s (but times out in $3$ cases). Both SPARS algorithms time out for the infeasible scenario in all runs.
\subsection{34-dimensional PR2}
In the fourth scenario, we use the mobile-base PR2 robot with $34$-dof, which has to enter a room with a small opening as shown in Fig.~\ref{fig:scenarios}. We again use two versions, a feasible one with the opening and an infeasible one where we close the opening. As a simplification, we use two projections: first we remove the arms of the robot, and second we project onto the mobile base. We model this situation by the fiber bundle sequence $\R^{34} \rightarrow \R^{7} \rightarrow \R^{2}$. Our algorithm SMLR requires $9.25$s to solve the feasible scenario (but times out in $1$ case) and $0.32$s to terminate on the infeasible scenario. Neither SPARS nor SPARS2 can solve any run within the given time limit.
\section{Introduction}
Sparse roadmaps \cite{dobson_2014} are essential in motion planning tasks to reduce model complexity and to terminate motion planning in finite time, thereby providing (probabilistic) infeasibility proofs. Such infeasibility proofs are essential if we want to use a motion planner as a building block for larger action skeletons \cite{Kaelbling2011} or symbolic planning systems \cite{Toussaint2018}. However, sparse roadmaps often operate on the full state space of the robot(s), thereby taking too much time to converge---making them often inapplicable to higher-dimensional systems.
To address this problem, we propose to use sparse roadmaps \cite{dobson_2014} in conjunction with multilevel abstractions of the state space \cite{Orthey2020IJRR}. By exploiting multilevel abstractions---which we model using fiber bundles \cite{steenrod_1951}---we can often terminate the algorithm significantly faster than state-of-the-art sparse roadmap planners operating on the full state space.
While multi-resolution roadmaps exist \cite{Ichnowski2019, Saund2020}, we are not aware of any algorithm that computes sparse roadmaps over multilevel abstractions. We therefore believe we are the first to combine both concepts into one concise algorithm. We summarize our contributions as follows.
\begin{enumerate}
\item We present the Sparse MultiLevel Roadmap planner (SMLR\xspace), which generalizes sparse roadmaps \cite{dobson_2014} to efficiently exploit fiber bundle structures \cite{Orthey2020IJRR}
\item We evaluate SMLR\xspace on eight challenging feasible and infeasible motion planning problems involving high-dimensional state spaces up to $34$-degrees of freedom (dof)
\end{enumerate}
\section{Sparse Multilevel Roadmaps}
\input{src/pseudocode}
Let $(\x_I, \x_G, X_1,\ldots,X_K)$ be a fiber bundle sequence with $\x_I$ and $\x_G$ being start and goal state. Our task is to generalize the sparse roadmap planner \cite{dobson_2014} to fiber bundle sequences by growing $K$ graphs $(G_1, \ldots, G_K)$ on the bundle spaces $(X_1,\ldots,X_K)$, whereby we grow the $k$-th graph using restriction sampling \cite{Orthey2020IJRR} of the $(k-1)$-th graph. We call our algorithm the sparse multilevel roadmap planner (SMLR). SMLR depends on three parameters, the two parameters $\ensuremath{\delta}$ and $M$ from sparse roadmaps, and the additional parameter $\eta$, which we detail later.
We show the algorithm in Alg.~\ref{alg:smlr}. We start by creating a priority queue (Line \algref{alg:smlr}{alg:smlr:priorityqueue}), which orders bundle spaces by an importance criterion $i$, which we detail later. We sort the queue such that the space with the maximum value is on top. We then iterate over the bundle spaces from $X_1$ to $X_K$ (Line \algref{alg:smlr}{alg:smlr:forcur}) and push the current space onto the priority queue with an importance of $1$ (Line \algref{alg:smlr}{alg:smlr:pushcur}). We then execute a section test (Line \algref{alg:smlr}{alg:smlr:section}), where we search for a feasible solution over the path restriction of the solution path (if any) on the previous bundle space $X_{\text{cur}-1}$. The \textsc{SectionTest} method helps to overcome narrow passages, but is not essential for the understanding of this paper -- we use it as a black box within SMLR. Please see our previous publication \cite{Orthey2020TRO} for more information.
We then grow the roadmaps $(G_1,\ldots,G_{\text{cur}})$ as long as the planner termination condition (PTC) of the current bundle space $\X_{\text{cur}}$ is not fulfilled (Line \algref{alg:smlr}{alg:smlr:while}). In our case, we terminate if a solution is found or if we reach either the infeasibility criterion or a time limit. Inside the while loop, we take the top bundle space $\X_{\text{top}}$ with the largest importance value (Line \algref{alg:smlr}{alg:smlr:poptop}) and sample a random point using \textsc{RestrictionSampling} (Line \algref{alg:smlr}{alg:smlr:restrictionsampling}). We then add the point to the graph with \textsc{AddConditional} (Line \algref{alg:smlr}{alg:smlr:addconditional}), if it fulfills the sparseness condition \cite{dobson_2014}, which we detail in Sec.~\ref{sec:background}. Finally, we recompute the importance of the bundle space (Line
\algref{alg:smlr}{alg:smlr:importance}) and push the space back onto the queue (Line \algref{alg:smlr}{alg:smlr:pushtop}).
The two methods \textsc{RestrictionSampling} and \textsc{ComputeImportance} are further detailed in the next two subsections. To facilitate understanding, we first give a brief overview of each. First, in \textsc{RestrictionSampling}, we restrict sampling on the bundle space by using information from the graph on its base space. We differ from dense roadmaps by using the visibility region of the sparse graph, which depends on the visibility range $\ensuremath{\delta}$. Second, in \textsc{ComputeImportance}, we use the sampling density of the sparse graph together with the number of consecutive failures to estimate the importance of the bundle space and thereby its position in the priority queue. Next, we discuss each method in more detail and provide an analysis of the algorithm.
\subsection{Restriction Sampling with Visibility Regions}
Let $X_k$ be a bundle space with graph $G_k$, and let $X_{k-1}$ be its base space with graph $G_{k-1}$. To grow the graph $G_k$, we use the framework of restriction sampling \cite{Orthey2020IJRR}. In restriction sampling, we sample states on $X_k$ by uniformly sampling from the graph restriction of $G_{k-1}$ (see Sec.~\ref{sec:background}). To give guarantees on asymptotic optimality, we would need the vertices of $G_{k-1}$ to become dense in the free state space.
To avoid using a dense graph for sampling \cite{dobson_2014} while giving guarantees on asymptotic near-optimality, we opt to exploit the graph visibility region. The visibility region of a graph $G$ is the set $V(G, \ensuremath{\delta}) = \{x \in X \mid d(x,e[I]) \leq \ensuremath{\delta} \text{ for some } e \text{ in } G\}$, whereby $d$ is the metric on $X$, $e$ is an edge from $G$ and $e[I]$ is the image of the edge in $X$.
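A membership test for $V(G, \ensuremath{\delta})$ follows directly from the definition, assuming a Euclidean metric and straight-line edges (our own illustration, not the authors' code):

```python
import math

def point_segment_dist(x, a, b):
    """Euclidean distance from point x to the segment a-b,
    i.e. d(x, e[I]) for a straight-line edge e."""
    ax = [xi - ai for xi, ai in zip(x, a)]
    ab = [bi - ai for bi, ai in zip(b, a)]
    denom = sum(c * c for c in ab)
    # clamp the projection parameter to [0, 1] to stay on the segment
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(p * q for p, q in zip(ax, ab)) / denom))
    closest = [ai + t * ci for ai, ci in zip(a, ab)]
    return math.dist(x, closest)

def in_visibility_region(x, edges, delta):
    """x lies in V(G, delta) iff d(x, e[I]) <= delta for some edge e of G."""
    return any(point_segment_dist(x, a, b) <= delta for a, b in edges)
```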
\begin{wrapfigure}{r}{0.5\linewidth}
\includegraphics[width=0.95\linewidth]{images/graphvisibility.pdf}
\caption{Visibility region $V(G, \ensuremath{\delta})$ of a graph $G$.\label{fig:visibilityregion}}
\end{wrapfigure}
To sample the graph visibility region, we use the restriction sampling algorithm depicted in Alg. \ref{alg:restriction_sampling}.
The algorithm requires an existing base graph $G_{k-1}$ (Line \algref{alg:restriction_sampling}{alg:restriction_sampling:exists}), then samples a random state on a random edge (Line \algref{alg:restriction_sampling}{alg:restriction_sampling:sampleedge}).
Sampling the visibility region directly would be too uninformative.
We thus use a smoothly varying parameter $\visRegion_{\text{bias}} \in [0,\ensuremath{\delta}]$, which first restricts sampling to the sparse graph ($\visRegion_{\text{bias}}=0$) and then smoothly increases in each iteration until it covers the whole visibility region $\ensuremath{\delta}$.
This situation is visualized in Fig.~\ref{fig:visibilityregion}. To control the rate of change of $\visRegion_{\text{bias}}$, we use the parameter $\eta$.
In particular for narrow passages, it is often crucial to sample directly on the graph restriction. We thus sample the visibility region (Line \algref{alg:restriction_sampling}{alg:restriction_sampling:uniformnear}) only in a certain percentage of cases, depending on $\visRegion_{\text{bias}}$. Once a base element is chosen, we sample a corresponding fiber space element (Line \algref{alg:restriction_sampling}{alg:restriction_sampling:samplefiber}), lift the states (Line \algref{alg:restriction_sampling}{alg:restriction_sampling:lift}) and return the state (Line \algref{alg:restriction_sampling}{alg:restriction_sampling:return}). If no base graph exists, we revert to a uniform sampling of the space (Line \algref{alg:restriction_sampling}{alg:restriction_sampling:nobase}).
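Alg.~\ref{alg:restriction_sampling} is not reproduced here, but its core idea can be sketched as follows, assuming a product space with Euclidean base edges and a user-supplied fiber sampler; the signature and the bias rule are our own simplifications:

```python
import random

def restriction_sample(base_edges, sample_fiber, delta, delta_bias):
    """Sketch of restriction sampling with a visibility bias.
    With probability delta_bias / delta we perturb the base sample inside
    the visibility region; otherwise we sample directly on the graph
    restriction (crucial for narrow passages)."""
    a, b = random.choice(base_edges)                      # random base edge
    t = random.random()
    base = [ai + t * (bi - ai) for ai, bi in zip(a, b)]   # state on the edge
    if delta > 0 and random.random() < delta_bias / delta:
        # sample near the edge, within the current bias radius
        base = [c + random.uniform(-delta_bias, delta_bias) for c in base]
    f = sample_fiber()                                    # random fiber element
    return (tuple(base), f)                               # Lift(b, f) for product spaces
```

With `delta_bias = 0` every sample lies exactly on the graph restriction; growing `delta_bias` toward `delta` (at a rate controlled by $\eta$ in the paper) gradually widens sampling to the full visibility region.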
\subsection{Importance and Ordering of Bundle Spaces}
To grow sparse multilevel roadmaps, we need to decide which roadmap on which level we should grow next, i.e.~we need an ordering of bundle spaces. In prior work \cite{Orthey2020IJRR}, we advocated the use of an exponential importance criterion $i(X_k) = 1/(|V_k|^{1/{n_k}}+1)$, with $|V_k|$ being the vertices on the graph $G_k$ on $X_k$ and $n_k$ being the dimensionality of $X_k$, which was motivated by the sampling density of the graph which is proportional to $|V_k|^{1/n_k}$ \cite{Hastie2009}.
However, sampling density is not a good criterion for sparse roadmaps, because we care more about the coverage of the free space. To account for this, we advocate an importance criterion using $M_k$, the number of consecutive sample failures. The number $M_k$ provides an estimate of the free space coverage, namely the fraction $1-\frac{1}{M_k}$ \cite{Simeon2000}. The higher $M_k$, the less often we should sample $X_k$. We thus formulate the importance criterion as
\begin{equation}
i(X_k) = \frac{1}{M_k+1}.\label{eq:importance}
\end{equation}
Note that we stop the algorithm only if $M_k > M$ \emph{and} $X_k$ is the current bundle space $\X_{\text{cur}}$. Since $i(X_k)$ converges to zero but always remains positive, every bundle space up to level $k$ is chosen infinitely many times. This is an important requirement for the asymptotic guarantees of the algorithm.
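The importance-based ordering of Eq.~\eqref{eq:importance} can be illustrated with a small max-priority queue; the space names and failure counts below are placeholders:

```python
import heapq

def importance(M_k):
    """Importance criterion i(X_k) = 1 / (M_k + 1) from the text."""
    return 1.0 / (M_k + 1)

# Max-priority queue via negated importance (heapq is a min-heap).
failures = {"X1": 99, "X2": 9, "X3": 0}   # consecutive sample failures M_k
queue = [(-importance(m), name) for name, m in failures.items()]
heapq.heapify(queue)
_, top = heapq.heappop(queue)
# X3 has the fewest failures, hence the largest importance: sampled next
```

A bundle space whose failure count grows (well-covered free space) sinks in the queue, while poorly covered spaces keep floating to the top, matching the ordering described above.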
\subsection{Analysis of Algorithm}
To prove SMLR to be asymptotically near-optimal and asymptotically sparse, we need to prove that restriction sampling with visibility regions is dense in the free state space of the last bundle space $X$. Since the importance criterion in Eq.~\eqref{eq:importance} eventually converges to zero, we can ensure that we produce an infinite sampling sequence on the free state space $\X_{\text{free}}$. Therefore, when using sparse roadmap spanners \cite{dobson_2014} to grow the roadmap on $X$, we retain all their properties, which include asymptotic near-optimality and asymptotic sparseness. However, we might reduce the number of vertices considerably.
Let us prove that restriction sampling with visibility regions is dense in the \emph{free} state space $\X_{\text{free}}$ on the fiber bundle $(X,B,F,\pi)$. This argument can be applied recursively to prove the same for fiber bundle sequences \cite{Orthey2020IJRR}. Note that we use the set-theoretic definition of dense, which states that a set $A$ is dense in a space $X$ if the intersection of $A$ with any non-empty open subset $U$ of $X$ is non-empty \cite{munkres_1974}.
\begin{theorem}
Restriction sampling with visibility regions on $X$ produces a sampling sequence $A = \{x_m\}$, which is dense in $\X_{\text{free}}$.
\end{theorem}
\begin{proof}
Let $U$ be an arbitrary open set in $\X_{\text{free}}$. Since $\pi$ is admissible, the projection $\pi(U)$ of $U$ onto $B$ is an open subset of the free base space \cite{Orthey2019}. Since uniform sampling on $B$ with visibility regions will eventually cover the free base space \cite{Simeon2000}, $\pi(U)$ will be a subset of the visibility region of the graph on $B$. When the number of samples goes to infinity, we revert to uniform sampling of the graph restriction and will thus sample $\pi(U)$ infinitely many times. By sampling the fiber over $\pi(U)$, we thus eventually obtain a sample $x$ in $U$. Since $U$ was arbitrary, the sequence is dense in $\X_{\text{free}}$.
\end{proof}
\section{Related Work}
We review two aspects of (sampling-based) motion planning \cite{lavalle_2006}. First, we discuss multilevel motion planning, where we plan over multiple levels of abstraction. Second, we discuss sparse roadmaps on general state spaces. We will investigate both topics in detail in Sec. \ref{sec:background}.
\subsection{Multilevel Motion Planning}
To efficiently solve high-dimensional motion planning problems, we can use the framework of multilevel motion planning
\cite{Ferbach1997, Sekhavat1998, Reid2020, Vidal2019, Orthey2020IJRR}, where (admissible) lower-dimensional projections are used to simplify the state space of a robot. We can construct multilevel abstractions either manually \cite{Reid2019, Orthey2019} or learn them from data \cite{Ichter2019, Brandao2020}. Our approach is complementary, in that we assume a multilevel abstraction to be given and we concentrate on computing sparse roadmaps over those abstractions.
Once we fix a multilevel abstraction, we can utilize classical motion planning algorithms to exploit it. A popular choice is the rapidly-exploring random tree algorithm \cite{Kuffner2000}, which we can generalize to selectively grow samples towards regions informed by lower-dimensional abstractions \cite{Ichter2019, Orthey2019} or workspace information \cite{Rickert2014}. While such algorithms often show speed-ups of two to three orders of magnitude \cite{Rickert2014, Tonneau2018}, they usually lack guarantees on asymptotic optimality \cite{Karaman2011}. There are, however, two planners which provide those guarantees. First, the quotient-space roadmap planner (QMP*)
\cite{Orthey2020IJRR, Orthey2018}, which generalizes the probabilistic roadmap planner (PRM*) \cite{Karaman2011}. Second, the hierarchical bi-directional fast marching tree (HBFMT*) \cite{Reid2019, Reid2020}, which generalizes the fast marching trees algorithm (FMT*) \cite{Janson2015}. While both guarantee asymptotic optimality \cite{Orthey2020IJRR, Reid2020}, they either support only Euclidean spaces \cite{Reid2020} or rely on dense roadmaps \cite{Orthey2020IJRR}. Our approach differs significantly, in that we are the first to compute sparse roadmaps over general multilevel abstractions---while providing guarantees on asymptotic near-optimality.
\subsection{Sparse Roadmaps}
The history of sparse roadmaps essentially begins with the pioneering work by Sim{\'e}on et al. \cite{Simeon2000}, who were the first to prune states based on visibility regions. With visibility regions, we try to find a minimal set of states from which the full state space is visible, similar to the concept of guards in the art gallery problem \cite{Orourke1987}. However, visibility roadmaps often sacrifice path quality. As remedies, we can introduce cycles \cite{Schmitzberger2002, Nieuwenhuisen2004} or use edge visibility \cite{jaillet_2008} to improve path quality.
While cycles and edge visibility can improve path quality, there are no guarantees on optimality. This changed with the advent of near-optimal sparse roadmaps \cite{Marble2013}. Starting from dense asymptotically optimal roadmaps \cite{Karaman2011}, we can use graph spanners to sparsify a dense roadmap while providing guarantees on path quality. We can achieve this by either removing edges \cite{Marble2013, Wang2015} or edges and vertices \cite{Salzman2014}. Computing dense roadmaps before sparsification is, however, computationally expensive. Later work introduced incremental sparse graph spanners, with which we can remove the dependence on dense roadmaps altogether \cite{dobson_2014}. Our work is complementary to sparse graph spanners, in that we also use incremental sparse graph spanners \cite{dobson_2014}. We differ, however, in building not one, but multiple sparse roadmaps on different abstraction levels.
When using sparse roadmaps, we often face the problem of explicitly defining a visibility or connection radius to define the sparseness of the graph.
To handle this trade-off between optimality and efficiency, we can often create multi-resolution roadmaps \cite{Du2020}. Multi-resolution roadmaps are sets of roadmaps which differ in how sparse they are. To vary roadmap sparsity, we could change the connection radius \cite{Saund2020} or we can selectively remove edges, either evenly distributed \cite{Ichnowski2019} or based on a reliability criterion \cite{Murray2020}. To exploit those multi-resolution roadmaps, we could plan on the highest resolution roadmap and selectively refine the roadmap whenever we hit an obstacle \cite{Saund2020}. Such a strategy is efficient, because solutions on sparser roadmaps act as admissible heuristics for planning \cite{Aine2016, Du2020}. While multi-resolution roadmaps exist on the same state space, our approach is complementary, in that we create sparse multilevel roadmaps on different state spaces, whereby each state space represents a relaxed planning problem.
|
{
"timestamp": "2021-10-08T02:10:28",
"yymm": "2011",
"arxiv_id": "2011.00832",
"language": "en",
"url": "https://arxiv.org/abs/2011.00832"
}
|
"\\section{Introduction}\n\nTo date, Einstein's General Relativity (GR) strongly stands against myri(...TRUNCATED)
| {"timestamp":"2020-11-03T02:37:59","yymm":"2011","arxiv_id":"2011.00805","language":"en","url":"http(...TRUNCATED)
|
"\n\n\n\\subsection{KB Module for Inferential Reasoning}\n\\label{subsec:KGN}\nThe KB module aims to(...TRUNCATED)
| {"timestamp":"2020-11-03T02:37:07","yymm":"2011","arxiv_id":"2011.00777","language":"en","url":"http(...TRUNCATED)
|
"\\section{Preliminaries}\n\nIn this section, we provide the basic LP relaxations for load balancing(...TRUNCATED)
| {"timestamp":"2021-02-23T02:36:36","yymm":"2011","arxiv_id":"2011.00817","language":"en","url":"http(...TRUNCATED)
|
"\\section*{Abstract}\n\\begin{abstract}\n Magnetic skyrmions were thought to be stabilised only in(...TRUNCATED)
| {"timestamp":"2020-11-03T02:37:27","yymm":"2011","arxiv_id":"2011.00785","language":"en","url":"http(...TRUNCATED)
|
"\\section{Introduction}\n\nZero-shot and few-shot learning are notoriously challenging for neural n(...TRUNCATED)
| {"timestamp":"2020-11-03T02:41:28","yymm":"2011","arxiv_id":"2011.00890","language":"en","url":"http(...TRUNCATED)
|