\section{Introduction}
The terms ``Smart Industry'', ``Industry 4.0'', or ``Fourth Industrial Revolution'' \cite{schwab2017fourth} have been coined to describe a vision that includes a wide range of emerging technologies that, when used collaboratively, have the potential to contribute to highly optimized production processes \cite{Lasi2014,Vaidya2018}.
Several fields are of central importance in the development of the so-called \textit{Smart Factory}, including
sensor technology, Cyber Physical Systems, the Internet of Things, advanced communication technology, big data analytics, \ac{ML}, \ac{AI} and cloud computing \cite{Chen2017,Castelo2019}.
Various examples exist in which the successful implementation of these technologies results in higher efficiency, better human decision making and less waste \cite{Vaidya2018}.
Realizing the large potential of Smart Industry has been recognized by governments and industries as a key factor in ensuring economic competitiveness and sustainability in the coming decades.
The main machinery in the production line that we study is a high-speed stamping press that is able to operate on the strip steel at a frequency of 180 strokes per minute.
It is crucial that all strip steel that enters the production process is of sufficient quality.
Insufficient material quality results in poor quality of the final product or expensive damage to the production machinery and corresponding production downtimes.
The current quality requirements for the material are given as upper limits of stress [MPa] of the material properties, the so-called \ac{USL}.
Currently, the material quality is checked by performing destructive tests on samples of the steel.
By means of a tensile test on the sample, key material properties such as yield strength and tensile strength are measured. Although the material properties of the sampled steel can be measured reliably using these methods, the process is manual, slow, produces material waste and can only be performed on a tiny fraction of the material.
Furthermore, it is not a solution for detecting changes in
material properties over the full length of a steel coil, because more frequent sampling slows down production.
Destructive tests are therefore not suitable for continuous quality control and
detecting highly local changes in material properties.
In implementations of \textit{soft sensors}, easily obtainable process variables are measured inline, which are converted using statistical or machine learning models to quantities that otherwise have to be measured in expensive, time-consuming lab tests \cite{Jiang2021}.
An important component of soft sensing in smart industry is \ac{NDT} \cite{Sophian2020}.
In the steel-based manufacturing industry, \ac{NDT} sensors perform contactless and non-destructive measurements on the steel in real-time and can therefore be used in a high throughput production line to measure all strip steel that enters the process \cite{Garcia2011}. By combining the real-time stream of measurements with appropriate machine learning models, advanced online fault detection and quality control systems can be developed. For instance, in settings where temporal patterns are relevant, Long Short-Term Memory and Gaussian Processes have been used \cite{Malhotra2015,Berns2020}, which can be too computationally expensive for a high-throughput production line. Latent variable models have also been used in industrial settings, such as supervised factor analysis \cite{Zhiqiang2015} and partial least squares \cite{Rosipal2005}. A successful implementation of a real-time quality control system leads to fewer defects in products, improved quality, less production downtime and less material waste. Furthermore, the real-time model estimation of material quality from the inline measurements can be used in the active control of production parameters, which adapts the machinery settings to be optimal for the current specifics of the material \cite{Heingartner2010,Zhiqiang2015,Jiang2021}.
In this contribution, we develop a real-time quality control and fault detection solution for the high-throughput production line. The measurements are performed at the start of the production line, on exactly the material that the press operates on further down the line. Our contributions are three-fold:
1) A model is developed for estimating material properties in real-time from the inline contactless sensor measurements. We use the ground truth material properties of several production coils to fit the model. 2) The model is used for the early detection of insufficient material quality. We show a case where the model estimation of the material properties can detect faulty material in order to prevent production faults. This is shown on a coil from a faulty batch of coils that had already caused product faults. 3) We study the model estimations on 108 km of processed steel and we link the model estimations to reported product faults that occurred during production. These product faults are caused by a crack arising in the product while in the press and it is hypothesized that insufficient material quality is one of the causes.
The paper is organized as follows: in Sec.~\ref{sec:Data} the relevant details of the new industrial datasets are introduced. Subsequently, in Sec.~\ref{sec:Methods} the methods used for the analysis of the data and for the estimation of material properties are discussed. In Sec.~\ref{sec:Results} we present the results of our experiments and discuss them in Sec.~\ref{sec:Discussion}. Lastly, the work is summarized in Sec.~\ref{sec:Conclusion} and an outlook for future work is provided.
\section{Data description and analysis} \label{sec:Data}
The production coils of small strip steel are associated with a ``Heat'' number, which identifies the specific elements used in the steel production batch.
The \ac{NDT} sensor measurements are based on Eddy Currents and the reader is referred to \cite{Garcia2011} for details about these sensors. Here, the measurements are performed at 10 testing frequencies and are denoted by $\bm{x}_i \in \mathbb{R}^{20}$. The first half of the components contain the amplitude gains of each frequency and the second half the phase shifts. Hence, the amplitude gain and phase shift of measurement frequency $j$ are in $x_{ij}$ and $x_{i,j+10}$, respectively. We will also call individual sensor variables ``SV $i$''.
\subsection{Controlled experiment: measuring modified steel samples} \label{sec:Data:LabModified}
In order to establish an expected lower and upper bound for each sensor variable, steel created with extreme material properties was measured with the contactless sensor and compared to the reference steel.
A selection of nine steel strips was divided
into three groups of three.
The first group was modified to be ``harder'' and the second group to be ``softer'', i.e. towards larger and smaller values of yield and tensile strength, respectively. The remaining group was left unmodified to serve as reference material.
Two hundred sensor measurements were taken at the start, middle and end of the strips.
We normalized the original sensor measurements $\bm{x}_i \in \mathbb{R}^{20}$ as follows:
\begin{equation} \label{eq:normalize_sensor}
\bm{x}_i := \frac{\bm{x}_{i} - \bm{P}_{10\%}^H}{\bm{P}_{90\%}^S - \bm{P}_{10\%}^H}\, , \quad
\bm{x}_i := \bm{x}_i - \bm{\mu}^R\, ,
\end{equation}
where $\bm{P}_{10\%}^H \in \mathbb{R}^{20}$ are the 10th percentiles of the values within the \textbf{H}ard group and
$\bm{P}_{90\%}^S \in \mathbb{R}^{20}$ are the 90th percentiles of the values within the \textbf{S}oft group. The vector $\bm{\mu}^R \in \mathbb{R}^{20}$ contains the means of the reference steel samples after the first transformation.
As the hard strips had lower measurement values, the transformation \eqref{eq:normalize_sensor} is effectively min-max normalization, with percentiles
being estimates of the min and max.
The shift moves the reference material close to zero.
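As an illustration, the normalization in Eq.~\eqref{eq:normalize_sensor} can be sketched in a few lines of NumPy (a minimal sketch; the array names and group sizes are hypothetical):

```python
import numpy as np

def normalize(X, X_hard, X_soft, X_ref):
    """Normalize sensor measurements as in Eq. (1).

    X: (n, 20) raw measurements; X_hard, X_soft, X_ref: measurements
    taken on the hard, soft and reference strip groups (illustrative names).
    """
    p10_hard = np.percentile(X_hard, 10, axis=0)   # P^H_{10%} per variable
    p90_soft = np.percentile(X_soft, 90, axis=0)   # P^S_{90%} per variable
    scale = lambda A: (A - p10_hard) / (p90_soft - p10_hard)
    mu_ref = scale(X_ref).mean(axis=0)             # reference mean after scaling
    return scale(X) - mu_ref                       # reference material maps near zero
```

By construction, the reference measurements end up centered at zero after the shift.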
\begin{figure}
\centering
\includegraphics[width=0.65\linewidth]{HardZacht_PCA_loadings}
\caption{Loadings of the first two \ac{PCA} components computed on the standardized hard and soft measurements. Only dissimilar variables are labeled.}
\label{fig:HardZacht_PCA_loadings}
\end{figure}
Fig.~\ref{fig:HardZacht_PCA_loadings} shows the loadings of the first two principal components computed on the standardized measurement data.
Due to large mutual positive correlations between the variables, the first principal component has large loadings on the majority of the variables and already explains 86\% of the variance in the data.
Combined with the second principal component, which is loaded mostly on SV 11 and SV 4, 96\% of the variance is explained.
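The \ac{PCA} computation behind these loadings and scores can be sketched with scikit-learn (a sketch assuming a generic $(n, 20)$ measurement array; names are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def first_two_components(X):
    """PCA on standardized sensor measurements.

    Returns per-variable loadings, explained variance ratios and
    the per-measurement scores (projections) for the first two components.
    """
    Z = StandardScaler().fit_transform(X)           # standardize each sensor variable
    pca = PCA(n_components=2).fit(Z)
    loadings = pca.components_.T                    # shape (20, 2)
    explained = pca.explained_variance_ratio_       # fractions of variance explained
    scores = pca.transform(Z)                       # shape (n, 2)
    return loadings, explained, scores
```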
\begin{figure}
\centerline{\includegraphics[width=0.65\linewidth]{figures/HardZacht_PCA_scores.pdf}}
\caption{Sensor measurements on steel with different material properties projected on the first two \ac{PCA} components of the standardized data. 15\% of the total number of measurements is shown, chosen uniformly at random.}
\label{fig:HardZacht_PCA_scores}
\end{figure}
Fig.~\ref{fig:HardZacht_PCA_scores} shows the projection of all data points on the first two principal components.
In general, the first principal component scores separate the different material properties well.
Remarkably, among the points labeled as hard material, there are two outlier groups of measurements.
A tensile test of the corresponding strip revealed that the strip had similar material properties to the reference material and therefore this likely indicates a failure in the modification of this strip.
\subsection{Production setting: continuous measurements in the line}
In this section we discuss the dataset obtained to relate the non-invasive 20-dimensional sensor measurements to material properties obtained by destructive testing.
\subsubsection{Sensor data during production}
The sensor was installed at the start of the production line to continuously measure the production steel coils.
This produced a stream of measurements $\bm{x}_i \in \mathbb{R}^{20}$ with a timestamp and the current steel coil identification.
From each coil a variable number of products is made; accordingly, the number of measurements per coil ranges from a few hundred to tens of thousands.
In some instances a production stop caused the sensor to produce physically impossible values or no values at all.
These faulty measurements were removed from the dataset.
\subsubsection{Destructive tensile tests}
From each of 47 selected coils, a sample was taken at the start of the coil to measure material properties with destructive testing.
Three tensile tests were performed on each sample to measure yield strength and tensile strength, in the following denoted by ``t1'' and ``t2''.
During the testing period
one production coil resulted in many products with cracks, hypothesized to be caused by
insufficient steel quality.
Hence, it was decided that a related coil from the same heat should be rejected for production and instead be fully measured by the non-invasive sensor as well as frequently sampled for tensile testing.
In total, two samples were taken for tensile testing at each of nine locations distributed over the full length of this coil.
We label this particular coil as ``Testcoil'' to distinguish it from the rest of the 47 production coils.
\begin{figure}
\centering
\includegraphics[width=0.85\linewidth]{testCoil_sP7}
\caption{\textit{Blue points}: sensor variable 17 measurements made on the testcoil. \textit{Solid orange line}: moving average over 50 measurements. \textit{Dashed black lines}: locations of the destructive test samples.}
\label{fig:testCoil_sP7}
\end{figure}
Fig.~\ref{fig:testCoil_sP7} shows the value of sensor variable 17 over the full length of this coil, along with the nine locations of the samples at which two tensile tests were performed.
Instances of products that contained cracks were logged during the months of the experiment.
In 17 cases the identification code of the corresponding inline measured material could be logged.
For 25 product faults, spread over six production coils, the hour during which the crack occurred was logged.
We normalized the sensor measurements using Eq.~\eqref{eq:normalize_sensor}, such that
values close to zero indicate
measurements
similar to the reference material of the experiment in Sec.~\ref{sec:Data:LabModified}.
Furthermore, negative values are closer to the measurements of the hard material while positive values are closer to the measurements of the soft material.
\begin{figure}
\centerline{\includegraphics[width=0.65\linewidth]{t1t2vssP7}}
\caption{Material properties t1 and t2 against sensor variable SV 17 for the 42 production coils and the testcoil.
Values for t1 and t2 denote the mean of three tensile tests. Values of SV 17 denote the mean
of the first 200 \ac{NDT} sensor measurements for the production coils and, for the testcoil, the mean around the 18 samples.
Standard deviations are on average about two times the size of the markers.
The dashed red line denotes the \ac{USL} of the respective material properties.}
\label{fig:t1t2vssP7}
\end{figure}
We standardized both material properties t1 and t2 that were obtained from the tensile tests on the 48 coils.
To relate the tensile tests to the sensor measurements, the mean and standard deviation were computed from the first 200 sensor measurements on the 47 production coils.
Coils with fewer than 200 measurements were dropped from the data, leaving 42 production coils.
For the 18 tensile tests performed on the nine samples spanning the full length of the testcoil we computed the mean and standard deviation of the five sensor measurements in the direct neighbourhood.
Fig.~\ref{fig:t1t2vssP7} shows the resulting values of the destructively tested material properties against non-invasive sensor variable 17 for the 43 coils (42 production coils + testcoil, corresponding to 42 tensile tests + 18 tensile tests).
The \ac{USL} of the material properties is marked in both figures. As can be seen, several points measured on the testcoil had material properties far exceeding the \ac{USL}.
The corresponding values for SV 17 were also very different from the rest. Some production coils slightly exceeded the \ac{USL} too.
We observe a negative linear correlation between material properties and sensor measurements.
In general, coils from the same heat exhibited
similar material properties and
sensor measurements.
\begin{figure}
\centerline{\includegraphics[width=0.65\linewidth]{productioncoils_PCA_loadings}}
\caption{\ac{PCA} loadings of the \ac{NDT} sensor variables computed on the standardized production coil dataset. Only dissimilar variables are labeled.}
\label{fig:productioncoils_PCA_loadings}
\end{figure}
Fig.~\ref{fig:productioncoils_PCA_loadings} shows the loadings of the first two principal components on the sensor variables, obtained from \ac{PCA} on the full 20-dimensional sensor measurements.
In general, the loadings and the variation explained by the principal components are similar to those of the controlled experiment of Fig.~\ref{fig:HardZacht_PCA_loadings}.
In Fig.~\ref{fig:productioncoils_PCA_vs_Rm}, the values of material property t2 obtained from the tensile tests on the samples are shown against the projections of the corresponding sensor measurements on the first two principal components.
Note that the points with outlier t2 measurements are only separated from the rest of the sensor measurements on the first principal component. From the scores on the second principal component the different material properties cannot be distinguished.
\begin{figure}
\centering
\includegraphics[scale=0.75]{productioncoils_PCA_vs_Rm}
\caption{The production coil dataset: Material property t2 against the scores on the first two principal components computed by \ac{PCA} on the standardized \ac{NDT} sensor measurements.}
\label{fig:productioncoils_PCA_vs_Rm}
\end{figure}
As seen in Fig.~\ref{fig:testCoil_sP7}, the signal is characterized by a band of values and, for the testcoil, exhibited a large transition in the middle of the coil.
Because of the high redundancy of the variables, we ranked the quality of the individual variables by estimating the measurement noise.
The standard deviation between sample numbers 2000 and 4000 was computed and divided by the total transition difference of the signal, i.e., for each variable, the difference between the first and last value of the moving average of Fig.~\ref{fig:testCoil_sP7}.
\begin{figure}
\centerline{\includegraphics[width=0.8\linewidth]{sensor_noise}}
\caption{The fraction of the standard deviation with respect to the transition difference in Fig.~\ref{fig:testCoil_sP7}, as an estimation of the measurement noise.}
\label{fig:sensor_noise}
\end{figure}
The value of this fraction is shown in Fig.~\ref{fig:sensor_noise} for each variable. SV 17 had one of the lowest estimated noise values, while SV 3, 4 and 11 had high estimated noise values. Hence, as the second principal component was significantly loaded onto these variables in both datasets, the variance explained by this component was mainly measurement noise.
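The noise estimate per sensor variable can be sketched as follows (a sketch; the flat-region sample indices 2000--4000 and the 50-sample moving-average window are taken from the text, while the signal array itself is hypothetical):

```python
import numpy as np

def noise_fraction(signal, window=50, lo=2000, hi=4000):
    """Estimated measurement noise of one sensor variable over a coil:
    the standard deviation within a flat region divided by the total
    transition of the moving average (first vs. last value)."""
    ma = np.convolve(signal, np.ones(window) / window, mode="valid")
    transition = abs(ma[-1] - ma[0])          # total transition difference
    return float(signal[lo:hi].std() / transition)
```

Variables with a small fraction carry a clear signal relative to their noise level; variables with a large fraction are dominated by measurement noise.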
\begin{table}
\caption{Correlation matrix: Principal Components and material properties without
(left) and including testcoil points (right)}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
& \textbf{t1} & \textbf{t2} \\
\hline
\textbf{PC1} & 0.31 & 0.42 \\
\hline
\textbf{PC2} & 0.45 & 0.04 \\
\hline
\textbf{t1} & 1.00 & 0.38 \\
\hline
\end{tabular}
\quad
\begin{tabular}{|c|c|c|}
\hline
& \textbf{t1} & \textbf{t2} \\
\hline
\textbf{PC1} & 0.97 & 0.97 \\
\hline
\textbf{PC2} & 0.03 & -0.01 \\
\hline
\textbf{t1} & 1.00 & 0.99 \\
\hline
\end{tabular}
\label{tab:correlationPC_MP}
\end{center}
\end{table}
Table~\ref{tab:correlationPC_MP} contains the Pearson correlation for this dataset computed with and without the testcoil points. Excluding the testcoil points, the correlation with the principal components was much smaller, but still significant.
\section{Methods}
\label{sec:Methods}
The change in material properties is not considered to result from periodic time variations, but rather from local fluctuations in the production of the steel. The analysis in the previous sections demonstrates linear correlations and relationships in our datasets. Therefore a linear model, \ac{PLS}, is considered
for estimating the material quality and for fault detection.
For \ac{PLS} regression it is assumed that the data is generated by a smaller number of latent variables than the number of observed variables. Let $n$ be the number of data points, $m$ the number of observed variables and $o$ the number of target variables.
Then for the predictor matrix $\bm{X} \in \mathbb{R}^{n \times m}$, target matrix $\bm{Y} \in \mathbb{R}^{n \times o}$ and assuming $k$ number of latent variables, the \ac{PLS} assumption can be written as follows \cite{Rosipal2005}:
\begin{align}
\begin{split}
\bm{X} = \bm{T} \bm{P}^T + \bm{E}\enspace , \\
\bm{Y} = \bm{U} \bm{Q}^T + \bm{F}\enspace ,
\end{split}
\end{align}
where $\bm{T} \in \mathbb{R}^{n \times k}$ and $\bm{U} \in \mathbb{R}^{n \times k}$ are the score matrices containing the scores on the $k$ latent variables for each datapoint's input and target, respectively.
The matrix $\bm{P} \in \mathbb{R}^{m \times k}$ contains the original input variable loadings on the $k$ latent input variables and the matrix $\bm{Q} \in \mathbb{R}^{o \times k}$ contains the original target variable loadings on the $k$ latent target variables.
Lastly, $\bm{E} \in \mathbb{R}^{n \times m}$ and $\bm{F} \in \mathbb{R}^{n \times o}$ are the residuals.
The optimization procedure finds the $k$ latent variables in $\bm{X}$ and $\bm{Y}$ that have maximal covariance.
We used the implementation of \cite{scikit-learn} with the default optimization parameters.
The sensor measurements $\bm{x}_i \in \mathbb{R}^{20}$ are used as inputs and the material properties $\bm{y}_i \in \mathbb{R}^2$ as targets.
We varied the number of latent variables $k$ in cross-validation to determine the optimal value. We evaluated the accuracy of the model using \ac{RMSE} in cross-validation. In each fold of the cross-validation, one coil was left out for validation and the model was fitted on the rest of the coils. This is similar to Leave One Out cross-validation, with the exception of one fold that had 18 datapoints of the testcoil.
Furthermore, we evaluated the accuracy of a binary classifier that was based on thresholding of the estimated material properties $\hat{y}_i \in \mathbb{R}^2$ using the \ac{USL}. We considered the following three classification rules for classifying a measurement as a material fault:
\begin{align} \label{eq:fault_classification_rules}
\begin{split}
\hat{y}_{i1} > \USL(t1)\,, \quad \hat{y}_{i2} > \USL(t2)\,, \\
(\hat{y}_{i1} > \USL(t1)) \, \, | \, \, (\hat{y}_{i2} > \USL(t2))\, .
\end{split}
\end{align}
Hence, material is classified as faulty based on the yield strength and tensile strength individually or based on the combination.
For each rule, it is possible to compute the precision and recall from the resulting classifications and the true labels. We also computed $F_1$ and $F_3$ scores. The $F_3$ score assigns three times more importance to recall over precision, which is more appropriate in our case: a missed material fault means that out of specification material goes into production, which may cause extremely costly damage to the machinery or low quality products. In contrast, a false alarm results in minor cost in the form of wasted material that would have been suitable for production and potentially minor production delays when the material is removed from the coil.
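The scoring of one classification rule can be sketched as follows (scikit-learn; the variable names and the scalar \ac{USL} are illustrative):

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, fbeta_score

def fault_scores(y_true, y_est, usl):
    """Threshold the estimated material property at the USL and score
    the resulting fault classifications against the true labels.

    y_true: boolean true faults; y_est: model estimates of one property.
    """
    y_pred = np.asarray(y_est) > usl
    return {
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "F1": fbeta_score(y_true, y_pred, beta=1),
        "F3": fbeta_score(y_true, y_pred, beta=3),  # recall weighted 3x over precision
    }
```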
\section{Results} \label{sec:Results}
\subsection{Dataset/Production coils} \label{sec:Results:ProductionCoils}
\begin{figure}
\centerline{\includegraphics[width=0.8\linewidth]{K_PLS}}
\caption{\ac{RMSE} computed as the mean of the \ac{RMSE} obtained on the validation sets in leave-one-coil-out cross-validation vs. the number of components/latent variables in \ac{PLS}.
}
\label{fig:PLS_avgRMSE_K}
\end{figure}
Fig.~\ref{fig:PLS_avgRMSE_K} shows the average \ac{RMSE} obtained by the \ac{PLS} model in
the leave-one-coil-out cross-validation for an increasing number of latent variables $k$.
Upon further inspection we note that the fold in which the testcoil points are held out for validation yields
by far the highest \ac{RMSE}; this outlier is not shown in Fig.~\ref{fig:PLS_avgRMSE_K}.
Thus, one needs to ensure that the full range of variation that is potentially seen in production is included in model fitting, which might require deliberate creation of undesirable material.
Furthermore, it can be seen that the \ac{RMSE} does not decrease significantly by introducing more than one component.
Hence, \ac{PLS} optimization determined
one component of the sensor measurements $\bm{X}$ and one component of the material properties $\bm{Y}$ which, due to significant covariance, could be exploited in the regression.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{PLS_loadingplot}
\caption{Loadings on the component extracted by \ac{PLS}
of the sensor variables in $\bm{X}$ (\textit{Left}) and
the destructively tested material properties in $\bm{Y}$ (\textit{Right}).}
\label{fig:PLS_loadingplot}
\end{figure}
Fig.~\ref{fig:PLS_loadingplot} shows the loadings of the first \ac{PLS} component on the variables, for both the sensor variables $\bm{X}$ and the material properties $\bm{Y}$. These loadings were obtained from a \ac{PLS} fit on the entire dataset (42+18 points). As can be seen, the sensor variables 5 to 10 and 12 to 20 had nearly identical loadings on the first component. These loadings were highly similar to the first principal component from the \ac{PCA} of Sec.~\ref{sec:Data}.
The component extracted from $\bm{Y}$ had equal loadings for both material properties.
Since the non-invasive sensor measurements are strongly correlated and one \ac{PLS} component is sufficient for the task, the question arises whether similar performance can be achieved with individual variables.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{linearRegression_RMSE}
\caption{\textit{Left}: Cross-validation \ac{RMSE} of linear regression for each sensor variable as predictor of the material properties t1 and t2. \textit{Right}: Cross-validation \ac{RMSE} of PLS with number of components $k=1$. Outliers are not shown.}
\label{fig:linearRegression_RMSE}
\end{figure*}
Fig.~\ref{fig:linearRegression_RMSE} shows the cross-validation \ac{RMSE} for linear regressions with the individual sensor variables as predictor along with the \ac{RMSE} obtained from \ac{PLS}. Linear regressions using one of the higher loaded variables from Fig.~\ref{fig:PLS_loadingplot} had similar performance as the \ac{PLS} model. Although differences were small, the predictions of property t1 and t2 were most accurate when based on SV 17 and SV 10, respectively.
These sensor variables had low estimated measurement noise in accordance with findings
in Sec.~\ref{sec:Data}, Fig.~\ref{fig:sensor_noise}.
We continued with the \ac{PLS} model, as the latent variable is more robust against sudden changes of the noise pattern in the variables.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{PLS_productionCoils_t1}
\\
\includegraphics[width=0.9\linewidth]{PLS_productionCoils_t2}
\caption{One-coil-out cross-validation prediction results of material properties t1 (top panel) and t2 (bottom panel) using the \ac{PLS} model with number of components $k=1$.}
\label{fig:PLS_productionCoils_t2}
\end{figure}
Some coils introduce large variations in the material properties contained in the dataset, and the predictions are negatively affected if the full range is not observed during fitting.
Fig.~\ref{fig:PLS_productionCoils_t2} shows for both material properties t1 (top) and t2 (bottom) the \ac{PLS} predictions made in the one-coil-out cross-validation against the target output.
Most predictions are within $0.5 \sigma$ of the target variables. The points from the test coil in the validation set are clearly underestimated.
However, the \ac{USL} divides the space into quadrants that are still mostly correctly predicted despite the extreme setting:
the bottom-left quadrant corresponds to true negatives (TN), the bottom-right to false negatives (FN), the top-right to true positives (TP) and the top-left to false positives (FP).
\begin{table}
\caption{Performance of fault classification based on \ac{PLS} predictions}
\begin{center}
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
& \textbf{TP} & \textbf{FN} & \textbf{FP} & \textbf{TN} &\textbf{Precision} & \textbf{Recall} & \textbf{$F_1$} & \textbf{$F_3$} \\
\hline
\textbf{Based on t1} & 10 & 7 & 13 & 30 & 0.43 & 0.59 & 0.50 & 0.57 \\
\textbf{Based on t2} & 9 & 0 & 5 & 46 & 0.64 & 1.00 & 0.78 & 0.95 \\
\textbf{t1 or t2} & 10 & 7 & 13 & 30 & 0.43 & 0.59 & 0.50 & 0.57 \\
\hline
\end{tabular}
}
\label{tab:fault_class_PLSresults}
\end{center}
\end{table}
For the classification of the datapoints according to the three classification rules in Eq.~\eqref{eq:fault_classification_rules}, the resulting quantities are listed in Table~\ref{tab:fault_class_PLSresults}, along with the precision, recall, $F_1$-score and $F_3$-score.
The recall of the fault classification based on t1 and the combination of t1 and t2 was only 0.59 and the corresponding $F_3$-score was 0.57.
The results were identical for these classification rules, because upon further inspection a violation of t2 was always accompanied by a violation of t1, but not vice versa. Hence, we hypothesize that the current \ac{USL} of t1 is more sensitive than the \ac{USL} of t2.
The fault classification based on t2 had an excellent recall of 1.00 and a precision of 0.64. Indeed, as can be seen in the bottom
panel of Fig.~\ref{fig:PLS_productionCoils_t2}, the fault classifier did not miss any faults and it classified some samples that were close to the \ac{USL} as faults.
The corresponding $F_3$ score was high: 0.95.
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{PLS_trainingfit}
\caption{Training fit (model vs. target) of \ac{PLS} with number of \ac{PLS} components $k=1$ on all production samples. \textit{Left}: model vs. target for material property t1. \textit{Right}: model vs. target for material property t2.}
\label{fig:PLS_trainingfit}
\end{figure}
Fig.~\ref{fig:PLS_trainingfit} shows the training fit of the \ac{PLS} model when all data was included in the training set, which is the same fit of which the loadings are shown in Fig.~\ref{fig:PLS_loadingplot}.
It can be seen that the linear model had a sufficient complexity to fit the data.
As seen from the cross-validation results in
Fig.~\ref{fig:PLS_productionCoils_t2}, the points from the testcoil had a large influence on the model fit.
We assume in the rest of the discussion that the weaker linear relationship observed when excluding the testcoil points is related to the fact that these points were only in a small value range and had additional noise caused by the distance between the tensile test and the sensor measurement. It is assumed that the linear relationship as observed for the entire dataset generalizes, see the discussion in Sec.~\ref{sec:Discussion}.
\subsection{Relation of material properties to known production faults}
Besides predicting whether the material is out of specification bounds based on non-invasive measurements, we are also interested in whether such measurements can be related to the occurrences of product faults
recorded during production.
The \ac{PLS} model fitted on all available data points was used to estimate material properties from the
sensor measurements taken during production. Subsequently, the estimations were compared to the logged faults. As an example case of what can be encountered in production, we first show the result of the model on the known suspicious testcoil and then consider the other logged faults from the rest of production.
\subsubsection{Testcoil results}
\begin{figure}
\centerline{\includegraphics[width=0.95\linewidth]{png/testrol_estimation.png}}
\caption{Estimation of the material properties t1 (\textit{left}) and t2 (\textit{right}) based on the sensor measurements made on the testcoil. \textit{Solid orange line}: moving average over 50 values. \textit{Solid black line}: marks the point at which the related production coil was removed from the production line.}
\label{fig:testrol_estimation}
\end{figure}
In Fig.~\ref{fig:testrol_estimation} the model estimations of the material properties are shown for the test coil. Halfway through the coil, the material properties drifted out of specification. The point at which production with the related coil was stopped, due to cracks occurring in the press, has been marked in the figure. As can be seen, this is right after the material properties exceeded the specifications.
\subsubsection{Production data}
In total, we obtained 17 measurement identifiers of strip steel that were linked to faults later in production; for the rest of the dataset we obtained the hours at which faults occurred. Of the 17 measurements, 12 predictions by the \ac{PLS} model exceeded the \ac{USL} of t1 and t2.
Four predictions exceeded only the \ac{USL} of t1 and the remaining one was within the specifications.
However, a large fraction of the estimated material properties in these coils were out of specification but not labeled as faults in production
as is shown in Fig.~\ref{fig:t1_predictions_labeled_faults}.
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{png/t1_predictions_labeled_faults.png}
\caption{Model estimation of material property t1 for two full production days. \textit{Black stars} indicate the model predictions made using the sensor measurements that were linked to product faults. \textit{Solid orange line}: moving average over 50 values.}\label{fig:t1_predictions_labeled_faults}
\end{figure}
Therefore, the question arises whether measurements from predicted material faults that were not connected to reported product faults could be distinguished from those that were.
If this were true, a classifier that works on small sample sizes should be able to distinguish these cases.
In order to test this hypothesis we trained the supervised \ac{GMLVQ} \cite{Schneider2009} model using the implementation from \cite{vanVeen2021} on the 16 labeled sensor measurements (positive class) and 16 randomly chosen measurements that did not cause faults but also had estimated out of specification material properties (negative class).
Out of 100 random cross-validation splits with a validation set size of 8 samples and training with early stopping, the mean validation area under the ROC curve
was $0.58$, which is barely above chance level, indicating that the two groups could not be distinguished well.
This suggests that the prediction of undesired material properties does not necessarily cause a fault every time,
but rather increases the risk of a production fault.
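The reported AUC can be read as a rank statistic: the probability that a randomly drawn positive sample is scored higher than a randomly drawn negative one. A minimal sketch of this computation (not the \ac{GMLVQ} implementation of \cite{vanVeen2021}; the Gaussian scores are purely illustrative):

```python
import random

def roc_auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a positive sample outscores a negative one."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Two barely separated score distributions give an AUC close to 0.5
rng = random.Random(0)
pos = [rng.gauss(0.1, 1.0) for _ in range(16)]
neg = [rng.gauss(0.0, 1.0) for _ in range(16)]
auc = roc_auc(pos, neg)
```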
\begin{figure}
\centering
\includegraphics{t1t2_fractionOutSpec_faults}
\caption{Fraction of out of specification model estimations of the material properties t1 and t2 for coils with reported faults and without reported faults.}
\label{fig:t1t2_fractionOutSpec_faults}
\end{figure}
As an indication of the risk of a fault, we computed the fraction of estimated material properties that were out of specification for each production coil with at least 2000 measurements (40 coils).
Fig.~\ref{fig:t1t2_fractionOutSpec_faults} shows that for the six coils with reported faults, the fraction of estimated out of specification material properties was significantly higher than for the 34 coils without reported faults.
Especially for t1, the great majority of production coils without reported faults had a lower fraction of out of specification t1 than the coils with reported faults.
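The per-coil statistic underlying Fig.~\ref{fig:t1t2_fractionOutSpec_faults} is simply the fraction of estimates exceeding the \ac{USL}; a minimal sketch with hypothetical values (the estimates and the USL of 1.0 are only illustrative):

```python
def fraction_out_of_spec(estimates, usl):
    """Fraction of per-coil model estimations exceeding the upper
    specification limit (USL)."""
    return sum(1 for e in estimates if e > usl) / len(estimates)

# Hypothetical estimates for two coils and a hypothetical USL of 1.0
usl = 1.0
coil_with_faults = [0.8, 0.9, 1.1, 1.2, 0.95]
coil_without_faults = [0.7, 0.8, 0.85, 0.9, 0.95]
print(fraction_out_of_spec(coil_with_faults, usl))     # 0.4
print(fraction_out_of_spec(coil_without_faults, usl))  # 0.0
```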
\section{Discussion} \label{sec:Discussion}
From the cross-validated \ac{PLS} performance, we found evidence that the relevant information about the material properties was mainly contained in the higher frequency sensor variables.
The latent variables of the sensor measurements and the targets were
linearly correlated.
We demonstrated that the sensor variables had different levels of measurement noise and that using linear regression with one of the least noisy variables resulted in similar estimation performance as the \ac{PLS} model.
Hence, the results are robust across a comparably wide range of sensor frequencies.
The model fitting was heavily influenced by the suspicious test coil measurements which covered a significantly larger variety of material properties than the other coils.
However, coils with material properties close to the \ac{USL} also conformed to approximately the same linear relationship.
Given that the production coils fell in a small range of material properties, it is important that such measurements are performed as accurately as possible.
We confirmed that in a number of cases the sensor measurements showed considerable variations, hence the distance between the tensile test and the sensor measurements for the production coils added uncertainty to the true value of the measurement at the location of the tensile test.
Moreover, the time administration of new coils was not always exact, such that in a few cases the closest sensor measurements to the location of the tensile test could not be determined and the averaging was done over a suboptimal sample.
The accuracy of the current \ac{PLS} model can be verified by taking additional tensile test samples from the coils in production and comparing the estimations with the tensile test results.
In the cross-validation the estimations of material property t2 were slightly better than the estimations of t1. Likewise, the material fault classification based on thresholding with respect to the \ac{USL} had a much better recall for t2 than for t1, which is a crucial performance indicator in mass production settings.
However, the results showed that the \ac{USL} of t2 is less sensitive than the \ac{USL} of t1.
Therefore, when relating the material specification predictions to actual reported faults during production, the fraction of violations of the \ac{USL} of t1 was always large for the coils with reported faults.
In scenarios with a clear drift in material properties, such as the one of the testcoil, the estimation of material properties from the inline \ac{NDT} measurements can prevent material that is far out of specification from entering the production line in the future.
In these situations the insufficient material quality is most likely the culprit causing production faults.
In more subtle scenarios, where the estimated material properties were just above the \ac{USL}, the production of the great majority of products did not result in reported faults.
Hence, in order to prevent faults in these situations, it may be crucial to estimate a risk value of faults given the sensor measurements and either raise an alert or adjust the parameters of the production machinery to suit the encountered material.
\section{Conclusion and Outlook} \label{sec:Conclusion}
This contribution discusses an exemplary industry 4.0 case: the real-time fault detection and quality control in a mass production line.
Material measurements gathered by an \ac{NDT} soft sensor were analysed in three scenarios:
firstly, measurements taken on deliberately altered material showed that these modifications can be detected by the sensor.
Secondly, a \ac{PLS} model was fitted and validated on measurements taken from several coils in production,
after which it was used to estimate material properties of a suspicious twin to a coil that had to be removed from production and evaluated with destructive testing.
And lastly, 108 km of coil encountered during the full run of this experiment was analysed in relation to the reported production faults.
We showed the potential of the strategy
in preventing insufficient material quality from entering the production line.
In the future, the prevention of these faults could save extremely high costs due to machinery damage.
Furthermore, material that is out of specification may not always directly lead to faults, but could have a direct influence on the durability of tooling.
We also demonstrated the potential of preventing the more subtle faults, by revealing the relationship between a large fraction of out-of-specification estimations and reported faults.
A future direction is to combine the model estimations and risk determination with
machine parameters, to identify optimal settings for the specific properties of the material, which has the potential to
widen the material's specification limits.
Further investigations will incorporate process knowledge, such as the physics of the sensor,
other inline measurements and the interplay of the tooling with certain material properties for the prevention of faults.
\bibliographystyle{IEEEtran}
\section{Introduction}
{O}{bject} detection is an important task in computer vision with many real-world applications. Object detection models based on state-of-the-art convolutional networks are often data-hungry \cite{tian2019fcos}. At the same time, annotating a large dataset for object detection is usually expensive and time-consuming. For example, four workers need to work for an hour to annotate about 3000 object bounding boxes of our dataset called Smart Shelf. Therefore, it is imperative to develop new methods to improve the data-efficiency of state-of-the-art object detection models.
We discuss the process of adding new categories to a base dataset with less effort spent collecting and annotating images of the new categories. There are two major trends to solve this problem: (1) use 3D reconstruction to generate a 3D model of the new categories, and design a program around a render engine to generate annotated images automatically \cite{hinterstoisser2019annotation} \cite{tremblay2018training}; (2) use fewer real images of the new categories
for training, while keeping high accuracy at the same time. We think the first method currently faces domain adaptation issues, so it cannot reach sufficiently high accuracy in real situations, even when using GAN-based methods \cite{hoffman2018cycada}. Much research focuses on network structures to boost detection performance \cite{zhang2020bridging} \cite{rong2020solution} \cite{tian2019fcos}, but these may have disadvantages such as increased inference time or complex networks. We want to provide universal strategies that boost network performance through data augmentation, as in \cite{zhang2019bag}. Therefore, we focus on the second way to address the problem of training new categories.
We consider managing data collection and augmentation a straightforward way to significantly improve the data-efficiency of object detection models. Detection network training relies on diverse images \cite{he2019rethinking}, and object occlusion has the potential to create challenging new training data \cite{kuznetsova2020open}. The data collection phase can provide more natural occlusion because real objects are used to take the pictures, while the data augmentation phase uses bounding boxes extracted from other images to occlude target objects. A bounding box can be annotated considerably faster than a segmentation mask, but an extracted bounding box may contain partial background. If we copy this bounding box image and paste it into other images, the partial background may be inconsistent with the new image background, so object occlusion in the data augmentation phase is sub-optimal compared to using segmentation annotations. Nevertheless, our experiments show that object occlusion in the data augmentation phase can still improve model accuracy when organized appropriately.
Initially, we considered using Class-Balanced Loss \cite{cui2019class} with data copy-paste, but our experiments showed no obvious improvement, so we do not adopt this method in our subsequent experiments. We are inspired by several data augmentation methods \cite{zhang2017mixup} \cite{ghiasi2021simple} and propose a new copy-paste based method for network training using bounding boxes. We also agree with the idea proposed in \cite{shmelkov2017incremental}, although we did not get good results using only synthetic data as in \cite{hinterstoisser2019annotation}.
The key idea behind object occlusion is to imitate the occlusion of objects in real scenarios when constructing the training dataset. This idea can lead to a large number of combinatorial new object occlusion relationships, with multiple possibilities:
\begin{enumerate}
\item Choose the objects that occlude each other;
\item Decide the occlusion relationship among these objects;
\item Decide the positions at which to place these objects and the camera viewpoints from which to view the scene.
\end{enumerate}
\section{Method}
Our approach for generating new data using Copy-Paste designs different occlusion levels. We assume that the occlusion relationship between objects is the most important factor for neural network learning: each time, the network can only see some parts of the new category. Imitating real occlusion between objects therefore gives a strong ability to learn a new category from a small number of annotated images of that category. Our experiments show that which categories of objects appear in an occlusion matters less than the occlusion relationship itself. In other words, the occlusion relationships of real scenarios, including occlusion level, view direction of the target objects and visible region of the target objects, can be imitated well when we construct the dataset for training. The necessary number of training images is small, but high accuracy can still be reached on the test dataset. We also demonstrate that the annotated bounding box should only enclose the visible part of an object, ignoring its occluded region, which makes the training process converge faster and gives better accuracy.
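The visible-part annotation rule can be made concrete for the common shelf case, where a frontal object hides the lower part of a target across its full width; a simplified sketch (real annotation handles arbitrary overlaps):

```python
def visible_box(box, occluder):
    """Clip an axis-aligned box (x1, y1, x2, y2), y growing downward, to the
    part not covered by an occluder that hides its lower part full-width.
    Returns None when the object is fully hidden."""
    x1, y1, x2, y2 = box
    ox1, oy1, ox2, oy2 = occluder
    # Common shelf case: occluder spans the object's width and covers its bottom
    if ox1 <= x1 and ox2 >= x2 and oy1 < y2:
        y2 = min(y2, oy1)
    if y2 <= y1:
        return None
    return (x1, y1, x2, y2)

# A bottle whose bottom half is hidden by a frontal snack bag
print(visible_box((10, 0, 30, 100), (0, 50, 40, 120)))  # (10, 0, 30, 50)
```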
\subsection{Choices of objects for occlusion}
In real object occlusion relationships, small objects can occlude a small part of large objects, while large objects can occlude a large part of small objects. In real scenarios, object occlusion relationships follow a relatively fixed distribution, which gives us a chance to imitate the important sample points of that distribution by placing objects so as to create matching occlusions. We also find that, for a target object A of category X, if at one point of the occlusion distribution A's bottom 50\% is occluded by an object B of category Y, but no object of category Y is at hand, then an object C of category Z can be used to occlude A's bottom 50\%, with the same effect as using an object B of category Y. Therefore, the category of the frontal object used to occlude the target object is not important; what matters is whether it imitates an occlusion relationship that fits the real occlusion relationship.
Fig. \ref{Fig1} and Fig. \ref{Fig2} show the general placement of goods on one layer of a shelf. Because we use a fisheye camera to take pictures in our scenario, we must account for its characteristics: it is an ultra wide-angle lens that produces strong visual distortion, intended to create a hemispherical image. Therefore, so that all goods have distinctive regions visible in the camera view, tall beverages should be placed near the shelf wall and short goods in the center area of a layer. As the size of goods increases, they are placed more peripherally. We must guarantee that all goods can be seen properly by the fisheye camera, so that they can all potentially be detected by the neural network model.
\begin{figure}[t]
\centering
\includegraphics[height=5.5cm,width=7.5cm]{pictures//1.png}
\caption{\centering{General placement of beverages.}}
\label{Fig1}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[height=5.5cm,width=7.5cm]{pictures//2.png}
\caption{\centering{General placement of beverages and snacks (with annotations).}}
\label{Fig2}
\end{figure}
Preparing objects of more different sizes helps imitate more occlusion relationships. For instance, one real occlusion relationship is a target object occluded over a small part by small objects; another is a target object occluded over a large part by large objects.
We can generate occlusion in the data collection stage or the data augmentation stage. In the data collection stage, we should be aware of the object size of each category when constructing occlusions, because categories of different sizes naturally generate different occlusion relationships. In the data augmentation stage, however, object size does not matter: any category can be used to occlude a target object, using techniques including copy-paste, cut-paste, image scaling and image translation.
\subsection{Decide the occlusion relationship}
Correctly finding the real object occlusion distribution is the precondition of imitation. In particular scenarios, object occlusion is determined by many factors, such as camera viewpoint, object size and object location. The relative positions of objects should be reasonable. For example, consider an indoor scenario with a cup on a table along with a TV controller: a paper picker may partially occlude the cup, or even totally occlude it from some viewpoints, but the TV controller is rarely placed on top of the cup; in almost all cases it sits alongside it. We therefore give the major cases high priority in data collection. Collecting only one or two cases per occlusion relationship is enough to reach high accuracy on real test cases.
The Monte-Carlo method can be used to pick data sample points from the occlusion distribution. First, we find the occlusion distribution of our new category in real scenarios, dealing with each new category one by one. Then we generate synthetic images by copy-pasting bounding boxes to occlude objects of the new category, following its occlusion distribution. Finally, we combine these synthetic images with a few real images to train our detection network.
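The Monte-Carlo step can be sketched as drawing occlusion ratios from an empirical distribution; the ratios and frequencies below are hypothetical, only to illustrate the sampling:

```python
import random

# Hypothetical empirical occlusion distribution of a new category:
# occlusion ratios observed in real scenes and their relative frequencies
ratios = [0.0, 0.25, 0.5, 0.75]
weights = [0.1, 0.4, 0.35, 0.15]

def sample_occlusions(n, seed=0):
    """Monte-Carlo draw of occlusion ratios used to steer copy-paste synthesis."""
    rng = random.Random(seed)
    return rng.choices(ratios, weights=weights, k=n)

# Sample frequencies approach the target distribution as n grows
samples = sample_occlusions(1000)
```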
In the data collection stage, we do not yet have the occlusion distribution of a new category, but we probably have at least one category of similar size from before. For occlusion generation, we do not care about the surface material or texture detail of the new category; the location where it should be placed may be affected by its volume and intrinsic characteristics. In our dataset FVSS, small box-packaged snacks or canned drinks are usually placed in the center of a shelf layer, while big snack bags are placed in its periphery. For the most efficient utilization of layer space, we fill a layer with as many goods as possible, under the condition that each good placed in the layer can be viewed clearly by the top-centered fisheye camera. At a minimum, the visible region of each good must be distinct enough that a human can recognize which category it belongs to from the images captured by the fisheye camera. Therefore, in a fisheye view, we guarantee that the top or upper lateral part of every good is visible and that no goods are stacked. In a full layer, the lower part of each good is usually occluded by nearby goods. We follow the rule that small objects are placed at the center of a layer and large objects at its periphery. Hence, a newly added large category is usually placed in the periphery of a layer or near the walls of the shelf; this kind of placement makes objects of this category less likely to be occluded by nearby objects of the same or other categories. The lower parts of objects are usually occluded with different ratios by different categories. For example, suppose we add a bottle of water. If goods of the same category occlude it, maybe only the bottle cap remains visible. If a smaller milk box stands next to the bottle, maybe its lower two thirds are occluded. If a lying snack bag is next to it, maybe its lower third is occluded.
And we should not add a bigger object next to the bottle on the side facing the fisheye camera center, because we do not want the bottle to be totally occluded.
In the data occlusion stage, we use already annotated images to generate new occluded images following the occlusion distribution we found. First, we must find the correct occlusion distribution of the target new category. The most reliable method is to have an experienced researcher analyze the attributes of the new category and infer its occlusion distribution, but this method is hard to generalize because it always needs experienced experts, so an automatic method is needed.
In the COCO dataset, the category 'person' usually stands in the street or sits near a table, so a person may be occluded by other people outdoors or by the table indoors. Interestingly, an annotated person almost always shows their head, even in crowded or remote scenes. If no head of a person appears, the dataset organizers may not have collected that kind of image, because humans usually identify another human by the head, and annotators may feel strange annotating a person showing only the lower half of the body with no head visible.
We notice some features:
\begin{enumerate}
\item A person's head may be occluded by an umbrella.
\item The head may appear in a lateral view or from the back. These views should be imitated in the data collection stage, while the data augmentation stage imitates only the real occlusion relationship, not the different viewpoints.
\item In rare cases, an image region that includes only a small visible part of a human is still annotated as a person, for example when only close-up hands or a foot can be seen.
\end{enumerate}
These features generalize to all kinds of animals, which usually show their heads in pictures, as photographers make the objects in their pictures easy to understand.
COCO is a general dataset containing large diversity within each category. In COCO, each category may appear in abundant environments and lighting conditions, with diverse poses and viewpoints. For example, the animal category 'bear' has many sub-categories, such as polar bear, black bear, brown bear and raccoon. We can add a new category with a small number of images, like a few dozen, and then train a detection network well on all categories including the new one.
Small objects, like a toothbrush or a remote controller, may be seen from a near or a remote view, so their sizes in the picture vary over a large range. This raises a question: is it useful to apply the occlusion distribution of small objects to large objects? We find that it is, and it can work even better with image scale changes as data augmentation.
We also analyzed the Open Images Dataset \cite{kuznetsova2020open}. Each category has more samples as well as more diversity. It provides four types of annotation: Detection, Segmentation, Relationships and Localized Narratives. Detection is annotated with bounding boxes and Segmentation with polygons. Relationships describe many kinds of relationships between humans and objects or between different objects, using dotted-line bounding boxes to show that one object is inside another. These relationships are also strongly related to the occlusion distribution of categories and give us inspiration. We show one Relationships image in Fig. \ref{Fig3}, which describes a containment relationship between a platter and a tomato.
\begin{figure}[t]
\centering
\includegraphics[height=5.5cm,width=7.5cm]{pictures//5.png}
\caption{\centering{Open Images Dataset Relationship Annotation. ID: 01aa8a57ba69a78b.}}
\label{Fig3}
\end{figure}
\subsection{Decide camera viewpoints}
It is hard to imitate all viewpoints, especially in large outdoor scenarios. However, we find a simple way that uses only several camera viewpoints and reaches high accuracy in real scenarios. We follow the method proposed in NERFIES \cite{Park_2021_ICCV}, using the main camera viewpoint and several viewpoints near it to take object pictures, ignoring rarely seen viewpoints.
\subsection{Copy-paste collect data}
After collecting tens of images for our new SKUs, we use copy-paste \cite{ghiasi2021simple} data augmentation to cover more sample points of the placement and occlusion distributions, which can improve detection performance. Our copy-paste strategy randomly copy-pastes bounding box regions from one image to another, following our occlusion distribution and some boundary conditions, such as not overlapping too much with existing objects.
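The overlap boundary condition can be sketched as rejection sampling of paste positions; the IoU threshold, crop size and retry count below are illustrative assumptions, not the values used in our experiments:

```python
import random

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def place_paste(paste_wh, existing_boxes, img_wh, max_iou=0.5, tries=50, seed=0):
    """Rejection-sample a position for a pasted crop so that it does not
    overlap any existing box beyond max_iou."""
    rng = random.Random(seed)
    w, h = paste_wh
    for _ in range(tries):
        x = rng.randint(0, img_wh[0] - w)
        y = rng.randint(0, img_wh[1] - h)
        box = (x, y, x + w, y + h)
        if all(iou(box, b) <= max_iou for b in existing_boxes):
            return box
    return None  # no admissible position found

paste_box = place_paste((50, 50), [(0, 0, 60, 60)], (640, 480))
```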
\subsection{FairMOT for speed up data annotation}
We mainly use bounding boxes to annotate each object. When data collection is controllable, we can move objects slowly across frames. We can then use tracking models to annotate each object in continuous slow movement once the first frame is annotated, reducing the workload of human annotators.
We tried single object tracking (SOT) first. If we move one object slowly at a time, we only need to annotate the target object in the first frame of a clip that captures the movement of that object while all other objects stay static. We assume the camera also stays in the same position to simplify the issue, but we could also move the camera slowly and still achieve excellent single object tracking performance. However, there are two main issues with SOT.
(1) If several objects of the same category are placed near each other and we slowly move one of them, the tracker may jump to capture another static but nearby object. This requires a more accurate SOT model, which is necessary for the scalable use of SOT in annotation assistance.
(2) Because we only support single object tracking, we need to annotate many clips, each of which moves one new object.
A multi-object tracking strategy can track many objects, so we can move different objects simultaneously. We therefore combine FairMOT to speed up our annotation process.
\begin{figure}[t]
\centering
\includegraphics[height=5.5cm,width=7.5cm]{pictures//4.png}
\caption{\centering{Detection results for small box-shaped drinks (usually placed in the layer center).}}
\label{Fig4}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[height=5.5cm,width=7.5cm]{pictures//3.png}
\caption{\centering{Bounding boxes of low-stature snacks (usually placed in the layer center).}}
\label{Fig5}
\end{figure}
\section{Experiments}
We design experiments to demonstrate that our approach needs only tens of images per new category to compete in accuracy with thousands of images. We conduct experiments in two directions: data occlusion in the data collection stage, and data occlusion in the data augmentation stage.
Our dataset, named Fisheye View of Shelf SKUs (FVSS), is used for the experiments, such as the data collection stage validation. We show the basic situation of FVSS in Fig. \ref{Fig4}, and several bounding boxes of one category in Fig. \ref{Fig5}. The dataset gives a view inside a shelf, using a fisheye camera at the top center to view the goods on one layer. In our experiments, we use hundreds of categories as the base dataset and try to add a new category.
We use the COCO dataset as the testbed for the data augmentation stage validation. COCO has 80 categories. We randomly pick one category as the new category and use the remaining 79 categories as the base dataset. We analyze the occlusion distribution of the new category, and pick only 1\% to 10\% of its images, those hitting important sample points of the distribution, for training. We use yolov5-small as our detection model and convert all data annotations to the yolov5 format.
\subsection{Data collection stage}
We conduct experiments in a shelf environment using fisheye cameras, following the construction style of our FVSS dataset. We use 10 thousand images containing 457 categories as the base training dataset, add one new category with only 10 images, and then test the performance on a validation dataset of 1000 images, each of which contains at least one bounding box of the new category. We show the heatmaps of two categories in our dataset in Fig. \ref{Fig6}, which are relevant to the occlusion distributions of these categories.
\begin{figure}[h]
\centering
\includegraphics[height=3.5cm,width=7.5cm]{pictures//6.png}
\caption{\centering{Actual heatmaps of categories in the dataset.}}
\label{Fig6}
\end{figure}
For example, we use "coke can" as the new category and add 10 images of it to the 10000-image training dataset, which contains 457 categories but nothing of the category "coke can". These 10 images were taken at important sample points of the occlusion distribution of "coke can" in the shelf environment, giving a total of 58 "coke can" bounding boxes. We also constructed a validation dataset of 1000 images containing 179 categories in total; each image contains at least one "coke can" bounding box, for a total of 3939 "coke can" bounding boxes. We use three metrics. The first is the AP@0.5 and AP@0.5:0.95 of "coke can" in the validation dataset. The second is the pass rate of "coke can": if all "coke can" bounding boxes in an image are found at the correct positions, the image passes, otherwise it fails. Third, if a "coke can" bounding box is not detected as "coke can", the box may be detected as another category or simply missed by the model; we therefore design a rate describing whether the box is detected as another category with a confidence below 95\%. Results are shown in Table \ref{table1}.
\begin{table*}[htbp]
\normalsize
\centering
\caption{"coke can" metrics in validation dataset}
\begin{tabular}{|c|c|c|c|c|}\hline
AP@0.5&AP@0.5:0.95&pass rate&mis-detect rate@0.90 & mis-detect rate@0.95 \\\hline
98.4\% & 83.6\% & 81.3\% & 77.0\% & 87.0\%\\\hline
\end{tabular}
\label{table1}
\end{table*}
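The pass rate defined above can be sketched as requiring every ground-truth box of the new category to be matched by a detection with sufficient IoU; the 0.5 matching threshold is an assumption for illustration:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def image_passes(gt_boxes, det_boxes, thr=0.5):
    """An image passes when every ground-truth box of the new category is
    matched by some detection with IoU at least thr."""
    return all(any(iou(g, d) >= thr for d in det_boxes) for g in gt_boxes)

def pass_rate(images):
    """images: list of (gt_boxes, det_boxes) pairs, one per validation image."""
    return sum(image_passes(g, d) for g, d in images) / len(images)

imgs = [([(0, 0, 10, 10)], [(1, 1, 10, 10)]),    # matched -> pass
        ([(0, 0, 10, 10)], [(50, 50, 60, 60)])]  # missed  -> fail
print(pass_rate(imgs))  # 0.5
```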
We test 16 categories, each serving as the new category in turn. Our categories are all retail goods such as snacks, milk and beverages. We conclude that we can train a new category with few images while keeping the accuracy above 80\% and the mis-detect rate above 85\% on average, which means that only 3\% of the images in the validation dataset are mis-detected with high confidence or missed by the detection model. We show some category results in Table \ref{table2}, using the mis-detect rate that counts confidences below 90\%. We also add a new metric called the final undistinguishable rate, which indicates in particular that bounding boxes are mis-detected as another category with confidence above 90\% or cannot be detected by the model. Results are shown in Table \ref{table2} and Table \ref{table3}.
We also find that using categories of many different sizes to generate diverse occlusion relationships better imitates the important data occlusion distribution, and this method nearly doubles accuracy on average.
\begin{table*}[t]
\centering
\caption{Details of new categories}
\begin{tabular}{|c|c|c|c|c|}\hline
name & image number & pass rate & mis-detect rate@0.9 & final undistinguishable rate@0.9 \\\hline
xiandangao & 6015 & 78\% & 91\% & 1.98\% \\
yibaochunjingshui & 2037 & 54\% & 92\% & 3.68\% \\
jiaduobaoguan & 1238 & 42\% & 78\% & 12.76\% \\
cuiguoba & 6359 & 91\% & 80\% & 1.8\% \\
420meizhiyuanguolicheng & 712 & 95\% & 95\% & 0.25\% \\
feizixiaolizhi & 1884 & 52\% & 95\% & 2.4\% \\
heqingjiaotangbinggan & 3569 & 84\% & 92\% & 1.28\% \\
duoweixiaoxibing200 & 1799 & 90\% & 90\% & 1.0\% \\
4wahahaadgainai & 2807 & 55\% & 100\% & 0.0\% \\
yizhongtaohuangtaoguantou & 2520 & 78\% & 80\% & 4.4\% \\
heqingjiaotangbinggan & 3992 & 94\% & 86\% & 0.84\% \\
4wahahaadgainai & 3094 & 32\% & 91\% & 6.21\% \\
enaakdianxinmian30g & 488 & 62\% & 76\% & 9.12\% \\
guowangshiguangguoba & 867 & 90\% & 100\% & 0.0\% \\
mailisu & 531 & 95\% & 95\% & 0.25\% \\
average & 3194 & 72\% & 90.7\% & 4.2\%\\\hline
\end{tabular}
\label{table2}
\end{table*}
\begin{table*}[t]
\centering
\caption{Effect of new category data, evaluated on the validation dataset}
\begin{tabular}{|c|c|c|}\hline
name & pass rate & mis-detect rate@0.90 \\\hline
without new category data & 0.0\% & 79.0\% \\
with new category data & 72.0\% & 90.0\% \\\hline
\end{tabular}
\label{table3}
\end{table*}
\begin{table*}[h]
\centering
\caption{Without new category data, evaluated on the validation dataset}
\begin{tabular}{|c|c|c|}\hline
name & image number & mis-detect rate@0.90 \\\hline
xiandangao & 6015 & 87.0\% \\
yibaochunjingshui & 2030 & 87.0\% \\
jiaduobaoguan & 2945 & 81.0\% \\
cuiguoba & 6992 & 90.0\% \\
420meizhiyuanguolicheng & 712 & 78.0\% \\
feizixiaolizhi & 1884 & 87.0\% \\
heqingjiaotangbinggan & 3569 & 52.0\% \\
duoweixiaoxibing200 & 1799 & 93.0\% \\
4wahahaadgainai & 2805 & 44.0\% \\
yizhongtaohuangtaoguantou & 2520 & 82.0\% \\
average & 3127.1 & 79.3\% \\\hline
\end{tabular}
\label{table4}
\end{table*}
\begin{table*}[htbp]
\centering
\caption{New category training: comparison of different methods}
\begin{tabular}{|c|c|c|c|}\hline
sku name & 3000+ bboxes & 60 bboxes(20 images) & 370 bboxes(20 images+copy-paste)\\\hline
guangshiboluopi & 33.83\% & 12.7\% & 53.38\%\\
yangzhiganlu & 60.96\% & 34.76\% & 76.83\%\\
zhiqingchunniunai & 49.87\% & 3.56\% & 27.95\%\\
tengyeyicunxiaoyuanbinggan & 95.98\% & 38.16\% & 98.19\%\\
aolangtangeweihuabinggan & 37.50\% & 58.33\% & 97.22\%\\
average & 55.63\% & 29.52\% & 70.71\%\\\hline
\end{tabular}
\label{table5}
\end{table*}
We also try adding images of a new category from another domain, e.g. pictures taken with a phone, to a training dataset consisting of shelf fisheye images, without applying any domain adaptation methods. We get a 0\% pass rate for the new category on the 1000-image validation dataset, and its mis-detect rate is almost the same as when no new category data is added at all. Therefore, domain adaptation remains an issue when training a new category. Results are shown in Table \ref{table3}.
We also test the effect of including or excluding the new category in the training dataset, evaluating on a validation dataset that contains the new category. We find that without new category data in the training dataset, the new category cannot be detected correctly in even one validation image, so the pass rate is always 0.0\%, and 21\% of its bounding boxes are missed or mis-detected as other categories with high confidence. In contrast, the average pass rate of the new category reaches 72.0\% when we add only 10 new category images to the training dataset, and the mis-detect rate@0.90 increases to 90.0\%. We also find that some high-aspect-ratio categories gain a larger increase in mis-detect rate@0.90; perhaps adding images of such a category leaves less chance for its objects to be mis-detected as other categories. However, a high-aspect-ratio category often achieves a lower pass rate than other categories when trained with few images. Therefore, we believe adding new category images is important even when only a few are added, and if these few images fit the data occlusion distribution of the new category well, the most necessary and important information will be learned by the neural network. Results are shown in Table \ref{table4}.
We conduct a further experiment on adding a new category with few images to a large dataset. We use a large shelf fisheye-view dataset with more than 360 thousand images, and add a new category called "sizhoushaokaoweixiatiao" with only 15 images containing 60 bounding boxes. We train for 1.5 epochs with common data augmentation methods such as image flipping, hue tuning and normalization. Evaluating on a test dataset of 500 real images, only about 10 images are incorrect. This result shows the great potential of combining copy-paste with the data occlusion distribution for detection with bounding-box annotations. Results are shown in Fig. \ref{Fig7} and Fig. \ref{Fig8}.
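The common augmentations used in this training run can be sketched as below. This is a toy illustration on nested lists of (h, s, v) pixels; a real pipeline would use an augmentation library, and the function names and parameter choices here are our assumptions, not the paper's code.

```python
# Toy sketch of the common augmentations mentioned above: horizontal flip,
# hue tuning, and normalization.  Images are nested lists of (h, s, v)
# tuples; all names/values are illustrative assumptions.
import random

def hflip(img):
    """Horizontal flip: reverse each pixel row."""
    return [row[::-1] for row in img]

def hue_shift(img, max_delta=10, rng=random.Random(0)):
    """Shift the hue channel by a random amount, wrapping at 360 degrees."""
    d = rng.uniform(-max_delta, max_delta)
    return [[((h + d) % 360, s, v) for (h, s, v) in row] for row in img]

def normalize(img, mean=0.5, std=0.25):
    """Normalize only the value (brightness) channel."""
    return [[(h, s, (v - mean) / std) for (h, s, v) in row] for row in img]
```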
\begin{figure}
\centering
\includegraphics[height=4.5cm,width=7.5cm]{pictures//7.png}
\caption{\centering{Data augmentation bbox copy-paste - base results.}}
\label{Fig7}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=4.5cm,width=7.5cm]{pictures//8.png}
\caption{\centering{Data augmentation bbox copy-paste (zhiqingchunniunai uses category similarity postprocessing).}}
\label{Fig8}
\end{figure}
\subsection{Data augmentation stage}
We conduct experiments comparing normal training with empirically sufficient images of a new category (more than 3000 bounding boxes) against training with only 20 new category images. These 20 images are a subset of the large dataset above. We run two kinds of experiments with the 20 new category images. In one, we use only these 20 images with standard data augmentation such as image flipping, HSV transformation and hue tuning. In the other, we use copy-paste data augmentation following the data occlusion distribution to create 100 additional images from these 20, giving 120 new category images in total, 100 of them copy-pasted. We test 5 new categories one by one and show results in Table \ref{table5}. The results are striking: a small number of images plus copy-paste-generated images outperforms training on the original large image set. Fig. 9 shows an image augmented with copy-paste by our strategy. Fig. 10 shows a failure case of detection by a network trained with our strategy.
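The core of the copy-paste step, placing a new-category crop so that its occlusion ratio follows a target distribution, can be sketched as below. This is a minimal illustration under our own assumptions (rejection-style candidate sampling, a single scalar target occlusion); it is not the paper's implementation.

```python
# Minimal sketch of occlusion-aware bbox copy-paste: choose a paste
# position whose overlap with existing boxes best matches a target
# occlusion ratio drawn from the data occlusion distribution.
# All names and the candidate-sampling scheme are assumptions.
import random

def overlap_area(a, b):
    """Overlap area of two boxes given as (x1, y1, x2, y2)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0.0, w) * max(0.0, h)

def paste_with_occlusion(canvas_w, canvas_h, obj_w, obj_h, occluders,
                         target_occlusion, rng=random.Random(0)):
    """Place an (obj_w x obj_h) box so the fraction of its area covered
    by `occluders` is as close as possible to `target_occlusion`."""
    best, best_err = None, float('inf')
    for _ in range(200):  # sample candidate positions, keep the best
        x = rng.uniform(0, canvas_w - obj_w)
        y = rng.uniform(0, canvas_h - obj_h)
        box = (x, y, x + obj_w, y + obj_h)
        covered = sum(overlap_area(box, occ) for occ in occluders)
        ratio = min(covered / (obj_w * obj_h), 1.0)
        err = abs(ratio - target_occlusion)
        if err < best_err:
            best, best_err = box, err
    return best
```

In a full pipeline, `target_occlusion` would be sampled from the measured occlusion distribution of the category in the shelf environment, and the crop's pixels pasted at the returned box.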
\begin{figure*}
\centering
\centering
\includegraphics[height=6.5cm,width=9.5cm]{pictures//9.png}
\caption{\centering{Data augmentation bbox copy-paste.}}
\centering
\includegraphics[height=7.0cm,width=17.5cm]{pictures//10.png}
\caption{\centering{A detection failure case, the appearances of two categories are similar.}}
\end{figure*}
There is also the situation where we train on an already collected dataset and cannot control the data collection stage, but still want to add a few images of a new category to the existing dataset. We design experiments to demonstrate that a few images of a new category can yield relatively high accuracy for this category, provided we pick these images from important sample points of the data occlusion distribution of the new category in the test dataset. Our implementation refers to \cite{ghiasi2021simple}. We believe the noticeable accuracy improvement that \cite{ghiasi2021simple} obtains with a simple copy-paste data augmentation strategy arises because the copy-paste operations create many new occlusion relationships that cover the important sample points of the data occlusion distribution. Fig. \ref{Fig11} and Fig. \ref{Fig12} give test results for two categories, showing the confidence distribution of the target categories in the test dataset.
\begin{figure*}
\centering
\includegraphics[height=8.5cm,width=16.5cm]{pictures//12.png}
\caption{\centering{xiandangao 30 images pass rate 0.79.}}
\label{Fig11}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[height=8.5cm,width=16.5cm]{pictures//13.png}
\caption{\centering{heqingjiaotangbinggan 30 images pass rate 0.93.}}
\label{Fig12}
\end{figure*}
\section{Conclusion}
Data collection is a core step in applying vision systems to the real world. In this paper, we propose an Object occlusion data collection method and find that it is effective and robust. Object occlusion performs well across multiple experimental settings and provides significant improvement using a notably small amount of data. Experiments are based on our dataset FVSS and the COCO benchmark.
The Object occlusion data collection and augmentation strategy is easy to plug into any dataset, whether constructing a new dataset or adding new categories to an existing one, and it saves training cost because only a small number of images is used. We can therefore use a smaller model with suitable data occlusion, e.g. using copy-paste to create appropriate occlusion relationships for target objects, and also use less memory during training. A proper Object occlusion data collection and augmentation strategy enables small models to achieve accuracy competitive with more complicated models.
We find that networks can learn a new category from few samples, much as humans with strong inference ability do. On the other hand, humans may also benefit from imitating the learning style of networks, which relies less on analysis and inference but sees more samples. This learning style may be useful when learning new concepts or new languages: showing many samples of the new thing without detailed explanation makes it easier for learners to understand and remember it.
An interesting direction for future work is improving object-occlusion data collection and augmentation strategies to fill the gap between virtual 3D objects and real 3D objects.
\section{Introduction}
The observation by \cite{Curtis1918} that in the elliptical galaxy M87 ``a curious straight ray lies in a gap in the nebulosity ... , apparently connected with the nucleus by a thin line of matter'' marked the discovery of the first astrophysical jet, although its significance was not then recognized. The term `jet' was first used by \cite{Baade1954}
who noted ``several strong condensations in the outer parts of the jet'' in M87 and also reported ``strong emission line of [O II] $\lambda$3727 \AA, which is shifted relative to the nuclear G-type spectrum by $-$295$\pm$100 km~sec$^{-1}$''. They suggested that this may be due to ejection from the nucleus. The jet in the quasar 3C273 was referred to as a ``faint wisp or jet'' while reporting the discovery of quasars \citep{Hazard1963,Schmidt1963}. These were the early beginnings.
Although a number of jets in radio galaxies were mapped in the 1970s at radio frequencies \citep[e.g.][]{Northover1973,Turland1975,vanBreugel1977},
the ubiquity of radio jets was demonstrated from observations with the Very Large Array in the 1980s. This along with the subsequent detection of jets across the electromagnetic spectrum have helped develop a deeper understanding of astrophysical jets in active galactic nuclei or AGN \citep[e.g.][]{Harris2006,Blandford2019}.
These jets indicate the channels via which energy, momentum, mass and magnetic field are transported from the central supermassive black hole and its accretion disk to form the extended lobes of radio emission. As suggested by \citet{Bridle1984} we define a radio jet to be at least four times longer than its width. The jets range in size from sub-pc scales seen in the nuclear regions of active galaxies to hundreds of kpc for the largest radio sources.
The early developments in our understanding of radio jets were summarised by \cite{Bridle1984} in their seminal review. They noted that the jets in the lower-luminosity, edge-darkened sources without prominent
hot-spots at the outer edges (Fanaroff$-$Riley class I or FRI sources) tend to have two-sided radio jets although these may be one-sided
close to the parent optical object, while the higher-luminosity, edge-brightened sources with prominent hot-spots (FRII sources) tend to have one-sided jets (Figs.~\ref{f:3C31_IC4296} and \ref{f:3C175}). The traditional
dividing luminosity between these two classes identified by \cite{Fanaroff1974} is $\approx$10$^{26}$ W Hz$^{-1}$ at 150 MHz in a cosmology with H$_{o} = 70$ km s$^{-1}$ Mpc$^{-1}$,
$\Omega_{\rm m}$ = 0.3 and $\Omega_\Lambda$ = 0.7. The magnetic field orientations also appear to be different for the jets in the two FR classes.
The jets in FRII sources which are either one-sided or highly asymmetric were found to exhibit a magnetic field predominantly parallel to the jet axes, while in the lower-luminosity FRI sources the magnetic field was either predominantly perpendicular to the jet axes or had a combination of perpendicular and parallel components \citep{Bridle1984}.
\begin{figure*}
\centering
\hbox{
\includegraphics[width=9.25cm]{3C31_radio_optical_montage_Alan_Bridle.png}
\includegraphics[width=7.3cm]{IC4296.jpeg}
}
\caption{Examples of Fanaroff-Riley Class I sources. Left: VLA radio image of the radio galaxy 3C31 shown in red and orange colours superposed on the Palomar Sky Survey optical image shown in blue. Middle: Higher-resolution VLA image of the inner jet superposed on the Hubble Space Telescope WFPC2 image. Right: MeerKAT radio image of the FR class I source IC4296 shown in orange and red hues superimposed on the SuperCOSMOS Sky Survey image in visible light. Credits for 3C31: NRAO, Alan Bridle; wide-field radio data: \cite{Laing2008}; HST/WFPC2 image from \cite{Martel1999}. Credits for IC4296: SARAO, SSS, S. Dagnello and W. Cotton (NRAO/AUI/NSF). Adapted from \cite{Condon2021}.
}
\label{f:3C31_IC4296}
\end{figure*}
Besides radio galaxies and quasars, radio jets have also been observed in Seyfert galaxies, an archetypal example being NGC4151 (\citealt{Williams2017}, and references therein), low-luminosity AGN (LLAGN) and also in star-forming H{\sc{ii}} galaxies \citep{Baldi2021}. The Seyfert, LLAGN and H{\sc{ii}} galaxies span the lower luminosity region of radio selected AGN. For example, the local radio luminosity function of AGN at 1.4 GHz extends down to a few times
10$^{20}$ W Hz$^{-1}$ \citep{Mauch2007}. The radio luminosities of Seyferts at 1.4 GHz lie in the range of a few times 10$^{20}$ to 10$^{24}$ W Hz$^{-1}$ \citep[e.g.][]{Ulvestad1989}. The luminosity distribution of sources in the LOFAR Two-Metre Sky Survey (LoTSS-DR1; \citealt{Shimwell2017,Shimwell2019}) ranges from $\approx$10$^{21}$ to 10$^{29}$ W Hz$^{-1}$ at 150 MHz, the traditional dividing line between FRI and FRII sources being at 10$^{26}$ W Hz$^{-1}$ \citep{Mingo2019,Mingo2022}.
In addition to AGN, astrophysical jets have been found in a wide variety of cases, such as protostellar jets \citep{Bally2016}, pulsar wind nebulae \citep{Durant2013}, $\gamma-$ray bursts \citep{Gehrels2009}, stellar binary systems with a black hole companion such as SS433, and the micro-quasars which exhibit superluminal motion of the radio jets \citep{Mirabel1999}. Similar principles have been invoked in understanding these jets.
In this article, we confine ourselves to jets in radio galaxies and quasars, with more emphasis on radio observations, summarising our current understanding and discussing future work. High-energy emission from jets and their spectral energy distributions are being discussed in an accompanying article in this issue \citep{Singh2022}, and have also been extensively covered in the reviews by \cite{Harris2006}, \cite{Worrall2009}, \cite{Blandford2019} and \cite{Hardcastle2020}. From the vast body of literature
we have been able to cite only a limited number of articles in this relatively short review.
\begin{figure}
\centering
\hbox{
\includegraphics[width=8.5cm]{3C175.png}
}
\caption{Example of an FR class II source. VLA radio image of the FR class II source 3C175 associated with a quasar. Credit: NRAO and Alan Bridle; adapted from \cite{Bridle1994}.
}
\label{f:3C175}
\end{figure}
\section{FR classes, HERGs and LERGs, radio jets}
The Fanaroff-Riley or FR classification of sources, with the two classes exhibiting different jet structures, was till recently based on studies of strong source samples, such as the 3CR and 2-Jy samples. From the available information at that time \cite{Ledlow1996} found the dividing radio luminosity between the two classes to increase with optical luminosity of the host galaxy, although recent studies have shown the relationship to be more complex \citep[e.g.][]{Mingo2019,Mingo2022}.
A number of reasons have been suggested for the observed dichotomy in the FR classes and their jets.
These can be broadly classified as (i) entrainment of
thermal material by the jets close to the nuclear region in FRI radio sources \citep[e.g.][]{Laing2007}; (ii) fundamental differences in the central engine such as the spin of the black hole and/or material forming the jet \citep[e.g.][]{Celotti1997}; and (iii) differences in the external environment and jet power which determine how rapidly jets may decollimate \citep[e.g.][]{GopalKrishna1996}.
Studies of radio sources in different environments suggested that on average FRI sources tend to lie in higher density environments than FRII sources, indicating that jets may be affected by a denser surrounding medium \citep[e.g.][]{Wing2011,Gendre2013}. There is observational evidence
that deceleration and decollimation of jets in FRI sources on small scales, with possible entrainment of material from the interstellar medium, may play an important role in the observed dichotomy \citep[e.g.][]{Bicknell1994,Laing2002a,Laing2002b,Laing2014,Mingo2019,Hardcastle2020}.
Radio galaxies have also been traditionally classified based on their optical spectra since the early work by \cite{Hine1979}, into low-excitation radio galaxies (LERGs) and high-excitation radio galaxies (HERGs) \citep[e.g.][]{Hardcastle2007,Buttiglione2010,Best2012,Heckman2014,Tadhunter2016}. In the low-excitation or jet-mode AGN, accretion is radiatively inefficient (RI) where the Eddington
ratio is less than 1\%, while in the high-excitation or radiative-mode AGN, which is radiatively efficient (RE), the Eddington ratio
is greater than 1\%. In LERGs the nuclear region is
dominated by a ``geometrically thick advection-dominated accretion flow'' \citep{Narayan1995}, while in HERGs accretion is via the classical geometrically thin, optically thick accretion disk \citep{Shakura1973}. These aspects have been summarised by \cite{Heckman2014} in their review.
Traditionally the LERGs have been found to have an FRI-type structure although there are a significant number of FRII LERGs,
while HERGS are predominantly of FRII type \citep{Best2012,Heckman2014,Tadhunter2016}. Recent studies from deep radio surveys with LOFAR have significantly altered our understanding of the relationship between FR class, accretion mode and host galaxy properties \citep{Mingo2019,Mingo2022}.
\begin{figure}
\centering
\vbox{
\includegraphics[width=8.5cm]{BM_DF_4rms_L150_sSFR_Fig3.pdf}
\includegraphics[width=8.5cm]{BM_DF_FRI_FRIIH_FRIIL_sSFR_histo_norm_Fig3.pdf}
\includegraphics[width=8.5cm]{BM_DF_LERG_HERG_sSFR_histo_norm.pdf}
}
\caption{Upper panel: Specific star formation rate (sSFR) vs radio luminosity at 150 MHz for FRIs and low- and high-luminosity FRIIs as indicated in the figure. The HERGs are indicated. Middle panel: Distributions of sSFR for the FRIs and low- and high-luminosity FRIIs. Lower panel: Distributions of sSFR for HERGs and LERGs. Figures are from \citet{Mingo2022}.}
\label{f:mingo2022}
\end{figure}
We briefly summarise a few of the significant results of \citet{Mingo2022} which have a significant bearing in our understanding of jet formation, accretion mode and large-scale radio structure (Fig.~\ref{f:mingo2022}).
Dividing the FRIIs into FRII-high and FRII-low at the traditional dividing line of L(150 MHz)
= 10$^{26}$ W Hz$^{-1}$ for the FRI and FRII classes, they find that $\sim$65 per cent of the FRII-high sample are LERGs, contrary to earlier studies. There appears to be no significant difference in the large-scale radio structure on 100-kpc scale between FRII LERGs and HERGs, suggesting that FRII ``classification is not primarily controlled by the central engine''. As in earlier studies, they find a significant population of FRIIs below the dividing luminosity, suggesting that FR classification is not determined by jet power alone. FRII sources appear across all luminosities and both accretion modes. Low-luminosity FRIIs and FRIs are overwhelmingly LERGs, so that RE accretion is rare at these luminosities. By comparing low-luminosity FRIIs and FRIs of similar luminosity, they show that the probability of a low-power jet becoming either an FRI or FRII jet depends on the stellar mass of the host galaxy. This would be consistent with the ideas of the environment playing an important role in the formation of FRI jets. HERGs across all luminosities and morphologies tend to have high specific star formation rates, suggesting a close link with availability of fuel. Radio morphology and jets, accretion mode and host galaxy properties appear related but in more complex ways than simple one-to-one relationships \citep{Mingo2022}.
These results raise a number of interesting questions. Traditionally FRI and FRII sources divided by luminosity have shown evidence of different evolutionary properties \citep[e.g.][]{Wall1980a,Wall1980b}. If the luminosity-FR class division is blurred, is the primary dependence of evolution on FR class or luminosity or the LERG/HERG classification? The radio jet structures of FRI and FRII sources are also different. Although low-luminosity FRIIs are also expected to have reasonably well-collimated jets as the hot-spots are visible, are their jet structures and field orientations similar to those of high-luminosity FRIIs? The internal composition of the relativistic plasma in the lobes of FRI and FRII radio sources appear to be different \citep[e.g.][]{Croston2018}. How does this extend to low-luminosity FRIIs? Are the jets in low-luminosity FRIIs more susceptible to instabilities and entrainment than high-luminosity FRIIs? As these low-luminosity sources are imaged with greater sensitivity and resolution to clarify their jet structures, it would be interesting to pursue some of these questions.
The evolution of LERGs and HERGs and their impact on galaxy evolution also need to be better understood. Recently, \citet{Kondapally2022} have examined this aspect for LERGs by splitting the sample into quiescent and star-forming galaxies. They find that the quiescent LERGs dominate the radio luminosity function at z$<$1 and are consistent with accretion occurring from cooling of hot gas halos. The star-forming radio luminosity function increases with redshift, dominating the space densities by z$\sim$1. They suggest that accretion in these cases is possibly due to cold gas present in these star-forming galaxies.
\section{The FR0 sources}
Combining sensitive radio surveys with optical surveys such as the Sloan Digital Sky Survey or SDSS \citep[e.g.][]{Best2012} has revealed a population of radio sources whose core luminosity is similar to that of FRI radio sources, but the extended emission is weaker by a factor of $\sim$100 \citep[e.g.][]{Baldi2009,Baldi2015,Sadler2014,Sadler2016,Cheng2018}. These sources termed as FR0s can be found in both high- and low-frequency surveys. For example in the
AT20G-6dFGS sample, $\sim$68 per cent of the sources fall into the FR0 category \citep[e.g.][]{Sadler2014,Sadler2016}. Similarly for a complete sample of sources chosen from the Cambridge 10C survey at 15.7 GHz, again $\sim$68 per cent have been classified as FR0s \citep{Whittam2016}. At low radio frequencies $\sim$70 per cent of the sources in LoTSS appear unresolved \citep{Shimwell2017,Shimwell2019}. In observations of deep fields at low frequencies such as of ELAIS-N1 the majority of sources are unresolved with an angular resolution of a few arcsec and have steep radio spectra \citep[e.g.][]{Sirothia2009b,Ishwara-Chandra2020},
all showing that FR0s are quite common among radio AGN at low luminosities.
A catalog of 108 FR0 sources (FR0CAT) with redshifts less than 0.05 and projected linear size $<$5 kpc was compiled by \cite{Baldi2018}. The host galaxies of the FR0s are massive luminous early type
galaxies $(-21 < M_r < -23)$ with mid-IR colours consistent with those of elliptical galaxies, and black hole masses of $\sim10^{7.5}$ - $10^9 M_\odot$, which are less than those in FRI radio galaxies. The most striking difference with FRIs is that the radio luminosity is lower than 3CR sources by $\sim$100 even for sources of similar [O{\sc{iii}}] luminosity \citep{Baldi2018}. There are also indications that the galaxy density of FR0s may be lower than that of FRIs by a factor of $\sim$2 \citep{Capetti2020}.
In this review we focus on the launching of jets in FR0s. Although FR0s are likely to be a mixed bag of objects, high-resolution observations often reveal evidence of jet-like features \citep[e.g.][]{Cheng2018,Baldi2019,Baldi2021}. Although these features may not always be consistent with the classical definition of \cite{Bridle1984} of being 4 times longer than the width, they do provide evidence of collimated ejection of relativistic plasma from an AGN. A search for extended emission in FR0s observed with LOFAR show that about 20 per cent show evidence of bipolar emission on opposite sides \citep{Capetti2020}.
\cite{Cheng2018} observed 14 FR0s with VLBI techniques and found 4 of the sources to have Doppler boosting factors ranging from 1.7 to 6, and two with multi-epoch observations to have proper motions between 0.23 and 0.49c. \citet{Baldi2021b} report high-resolution observations of 15 FR0s with eMERLIN, EVN and JVLA and find that most show evidence of jet-like structures. \citet{Baldi2021b} also report a linear correlation between the radio core luminosity and [O{\sc{iii}}] line luminosity for a sample of low-luminosity active nuclei consisting of both FR0s and FRIs, suggesting similar disk-jet coupling for these sources. The high-resolution studies are consistent with FR0s having mildly relativistic jets.
The similarity of host galaxies of FR0s and FRIs does not suggest that the jets in FR0s are confined to small dimensions by a dense medium. Possible reasons suggested for understanding the inability to launch large-scale jets as in FRI sources include low black hole mass \citep{Miraghaei2017} and/or black hole spin (\citealt{Baldi2021b}, and references therein).
Theoretical studies indicate jet power could depend strongly on the black hole spin and may also provide a viable explanation for the radio loud - radio quiet dichotomy \citep[e.g.][]{Tchekhovskoy2010}. \citet{Maraschi2012}
suggest that above a spin threshold, black hole spin and accretion rate could lead to a grand unification of AGN. Observationally, the detection of maximally rotating black holes in the low-luminosity Seyfert galaxies (e.g. Table 2 in \citealt{Brenneman2011}) suggests that spin alone may not be the determining factor for the inability to launch high-luminosity radio jets. It is
possibly due to a combination of black hole mass, spin and accretion rate, which requires more observational and theoretical work to clarify.
\section{Hybrid morphology sources and radio jets}
While detailed studies of jets in FRI and FRII sources are discussed later, here we discuss briefly the nature of sources which appear to have a hybrid morphology. These are sources where one side appears to have an FRI structure while the opposite side exhibits an FRII structure.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{0500+630.pdf}
\caption{A highly asymmetric radio galaxy with an apparent hybrid FRI-FRII morphology \citep{Saikia1996}.}
\label{f:0500+630}
\end{figure}
One of the very early examples of these sources
is B0500+630 (Fig.~\ref{f:0500+630}) where the authors noted that ``the source appears to have a composite structure, with one side being typical of Fanaroff-Riley class II sources, while the diffuse lobe is similar to those seen in Fanaroff-Riley class I sources'' \citep{Saikia1996}. However, although no hotspot is visible on the apparently FRI side, there is also no evidence of a jet, symptomatic of jets in FRI sources, connecting the core to the diffuse lobe. Being associated with a galaxy, B0500+630 is likely to be inclined at $>$45$^\circ$ to the line of sight. With a peak brightness ratio of the oppositely-directed hotspots of $\sim$100, the source appears to be intrinsically asymmetric. \cite{GopalKrishna2000} compiled a sample of 5 such sources and suggested that this supports the scenario that the FR dichotomy is due to jet interaction with the external environment rather than due to differences in the central engine, such as black hole spin, or differences in jet composition.
Further examples of such sources, termed HyMoRS, were reported from existing surveys by a number of authors \citep[e.g.][]{Gawronski2006,Banfield2015,Kapinska2017}. \citet{Ceglowski2013} observed a sample of 5 HyMoRS using the Very Long Baseline Array (VLBA) and found core-jet structures in two of them, one pointing towards an FRII-like lobe and the other towards an FRI-like one, and two probable weak jets. They suggested that HyMoRS are possible ``FRIIs evolving in a heterogeneous environment''.
More recently, \citet{Harwood2020} made a detailed study of a small sample of HyMoRS examining their spectral index distributions and the injection spectral indices. They concluded that these ``objects are most likely the result of
orientation and are intrinsically FRII radio galaxies''.
High-resolution sensitive observations of candidate HyMoRS to examine both their spectra and structure in both total intensity and polarization would be helpful to clarify whether there are genuine HyMoRS.
Absence of hotspots alone on one side may not be adequate to classify a radio galaxy as a HyMoRS. The total-intensity and polarization structure of the radio jets, besides spectral index information, could provide valuable clues towards identifying genuine HyMoRS.
There could be intrinsic asymmetries in well-collimated jets in high-luminosity FRII sources with significantly weaker hotspots on one side. For example, the high-luminosity quasars 3C9, 3C280.1 and B1857+566 have very weak hotspots on the side facing the jets, which are possibly approaching us within about 45$^\circ$ to the line of sight \citep[cf.][]{Swarup1982,Saikia1983}.
\section{Nuclear or VLBI-scale jets}
As summarized by \citet{Blandford2019}, nuclear or VLBI-scale jets tend to be one-sided with a flat-spectrum nuclear core at one end, and with components often appearing to move along the jet with superluminal velocities. Superluminal motion is common in core-dominated radio sources which are inclined at small angles to the line of sight, with apparent velocities ranging from $\sim$0.03c to 50c. Surveys of jets on VLBI scales in the last couple of decades include the
Very Long Baseline Array Calibrator Survey \citep{Beasley2002}, Australian Long Baseline Array survey of southern sources \citep{Petrov2019}, Monitoring Of Jets in Active galactic nuclei with VLBA Experiments or MOJAVE \citep{Lister2016,Lister2018} and monitoring a sample of $\gamma$-ray blazars \citep{Jorstad2017}. The Astrogeo Project contains the observations of about 12000 AGN observed by VLBI techniques \citep{Petrov2022}.
The polarization properties of nuclear or parsec-scale jets have also been studied using VLBI techniques. One of the extensive studies based on observations of 484 sources over the time interval 1996-2016 was reported by \citet{Pushkarev2017}. They report a significant increase in the degree of linear polarization with distance
from the radio core along the jet for quasars, BL Lac objects and galaxies, and also an increase towards the edges of the jets. The increase with distance could be due to more ordered fields further down the jet, while the increase towards the edges is possibly due to greater depolarization closer to the jet axes. The cores and jets of BL Lac objects tend to be more polarized than quasars. Also the E-vector position angles (EVPA) of the cores tend to be more stable in BL Lacs and the EVPAs in both the cores and jets appear better aligned with the jet axes. This suggests compression of the magnetic field due to shocks with the B-field being perpendicular to the jet direction. \citet{Pushkarev2017} found no such trend for the jets in radio galaxies and quasars.
One of the most extensive studies of rotation measure (RM) estimates in the jets has come from the MOJAVE group, who reported observations of 191 extragalactic jets \citep{Hovatta2012}. They find the quasars to have on average larger RM values than BL Lacs, and the cores to have higher values than the jet components. There is a significant negative correlation between the jet RM and deprojected distance from the core. They find significant transverse RM gradients in 4 sources, with the RM in the quasar 3C273 changing sign from positive to negative along the transverse cut. This result was confirmed by \citet{Wardle2018}, who estimate a current of $10^{17}$-$10^{18}$ A flowing down the jet. The RM variations transverse to the jet indicate a toroidal field component, although the field is largely along the axis of the jet. \citet{Wardle2018} also find the RM distribution to be variable on time scales of months to years and suggest that this is due to the motion of superluminal components behind a turbulent Faraday screen around the jet. ALMA observations at 1 mm on a scale of about 2 kpc suggest a sheath surrounding a conically expanding jet \citep{Hovatta2019}.
Gradients in RM transverse to the jet axes which could be due to helical or toroidal magnetic fields have been reported for a number of other AGN as well \citep[e.g.][]{Gabuzda2015,Gabuzda2021,Kharb2009}. These could play a significant role in the collimation of the jets.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{Boccardi_NGC315.pdf}
\caption{The jet collimation profile in NGC315 illustrating the transition from a parabolic shape to a conical one on sub-parsec scales \citep{Boccardi2021}. Reproduced with permission \copyright ESO.}
\label{f:NGC315}
\end{figure}
Observational studies of the collimation of jets are important to understand the physics of jets, including their formation, propagation and acceleration processes.
High-angular-resolution observations are required to determine the jet profiles and their variation with distance from the radio core. The variation of the apparent jet width $w$ with distance from the core $r$ is usually fitted with a function of the form $w \propto r^k$, where $k \approx 0.5$ for a quasi-parabolic jet and $k = 1$ for a conical jet. One of the earliest pieces of evidence for a transition from a parabolic to a cylindrical jet shape was found in the radio galaxy M87, where the transition occurs near the feature HST-1 at a projected distance of $\approx$70 pc, corresponding to $\approx 10^5$R$_s$, where R$_s$, the Schwarzschild radius, is given by $2GM/c^2$ \citep{Asada2012}. \citet{Pushkarev2017} found most resolved jets to have an approximately conical shape.
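As an illustration of such profile fitting, the sketch below (Python, synthetic data only; the break radius and normalization are invented for the example) fits $w \propto r^k$ in log-log space on either side of an assumed break, recovering the quasi-parabolic ($k \approx 0.5$) and conical ($k = 1$) indices:

```python
import numpy as np

def fit_power_law_index(r, w):
    """Fit w = A * r**k in log-log space; return (k, A)."""
    k, log_a = np.polyfit(np.log(r), np.log(w), 1)
    return k, np.exp(log_a)

# Synthetic jet profile: parabolic (k = 0.5) inside an assumed break
# radius, conical (k = 1) outside it. Values are illustrative only.
r_break = 70.0                    # pc (roughly the M87 HST-1 scale)
r = np.logspace(-1, 3, 200)       # 0.1 pc to 1 kpc
w = np.where(r < r_break,
             0.1 * r**0.5,
             0.1 * r_break**0.5 * (r / r_break))

k_in, _ = fit_power_law_index(r[r < r_break], w[r < r_break])
k_out, _ = fit_power_law_index(r[r >= r_break], w[r >= r_break])
print(f"inner k = {k_in:.2f}, outer k = {k_out:.2f}")  # 0.50 and 1.00
```

In practice the break radius is itself a fit parameter (e.g. via a broken power law fitted to the measured widths), but the log-log slope on either side is the quantity compared with the parabolic and conical expectations.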
Observational evidence suggests a jet structure with a fast spine and a slower outer layer.
\citet{Hervet2017} attempt to link different types of AGN with specific stratified jet characteristics based on VLBI observations of a large sample of AGN jets.
A number of other authors have attempted to study the jet profiles in the innermost jet regions (e.g. \citealt{Kovalev2020}, and references therein). \citet{Kovalev2020} find the transition from parabolic to cylindrical shapes to be quite common in AGN jets. The transition occurs at distances of $\approx 10^5 - 10^6$ gravitational radii, $r_g = GM/c^2$, which roughly corresponds to the Bondi radius $r_B = 2GM/c_s^2$, where $c_s$ is the sound speed. They suggest that the transition occurs where the bulk plasma kinetic energy equals the Poynting energy flux, with Bondi accretion determining the pressure of the ambient medium. Detection of features in the jets, possibly due to shocks at the transition region where the jets become plasma dominated, appears to support this scenario \citep{Kovalev2020}.
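To make these scales concrete, the following sketch evaluates the gravitational and Bondi radii for an illustrative $10^9 M_\odot$ black hole; the assumed sound speed of the hot ambient gas ($c_s = 500$ km s$^{-1}$) is a hypothetical value chosen for the example, not a measurement from the text:

```python
# Characteristic jet-collimation scales for an illustrative black hole.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m s^-1
M_SUN = 1.989e30   # solar mass, kg
PC = 3.086e16      # parsec, m

M = 1e9 * M_SUN             # assumed black hole mass
c_s = 5.0e5                 # assumed ambient sound speed, 500 km/s

r_g = G * M / c**2          # gravitational radius
r_s = 2.0 * r_g             # Schwarzschild radius
r_B = 2.0 * G * M / c_s**2  # Bondi radius

print(f"r_g = {r_g / PC:.2e} pc")
print(f"r_B = {r_B / PC:.1f} pc = {r_B / r_g:.1e} r_g")
```

For these assumed numbers $r_B/r_g = 2(c/c_s)^2 \approx 7\times10^5$, i.e. within the $10^5$-$10^6\,r_g$ range quoted above.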
Although change from a parabolic to conical jet collimation profile around the Bondi radius appears fairly common, there are also examples of deviation from this picture. An interesting example is the low-ionisation nuclear emission line region (LINER) galaxy NGC1052 with twin-jets. \citet{Baczko2022}
find that both jets are conical downstream of a break in the jet collimation profile at $10^4$R$_s$.
However, upstream of the break, the jet collimation profile is neither cylindrical nor parabolic for the approaching jet and close to cylindrical for the receding one. While more observational work is required, evidence of differences in collimation on opposite sides of the nuclear jets will also have implications for interpreting asymmetries in the large-scale structure.
We highlight briefly a few significant results from recent studies of radio jets on parsec or sub-parsec scales in different sources which have been reported since the review by \citet{Blandford2019}.
\subsection{Giant radio galaxy NGC315, J0057+3021}
NGC315 is a giant radio galaxy with a black hole mass of $\approx 1.3\times 10^9 M_\odot$, whose sub-parsec scale structure has been studied recently by \citet{Boccardi2021} and \citet{Park2021}. \citet{Boccardi2021} have observed it with higher resolution extending to 86.2 GHz using the Global Millimeter-VLBI Array (GMVA). Both groups find a transition from a parabolic to a conical shape, although \citet{Boccardi2021} find it to occur closer to the central engine at a distance of 0.58$\pm$0.28 pc or $\sim 5\times10^3$ Schwarzschild radii (Fig.~\ref{f:NGC315}). This is much smaller than the Bondi radius which has been estimated to be 92 pc from x-ray observations. The transition appears to occur at sub-pc scales, after which it remains conical to kpc scales. They note a similar behaviour in other low-luminosity AGN (e.g. NGC4261, Cen A) and suggest that the initial confinement of the jet may be due to a thick disk extending $\sim 10^3$-$10^4 R_s$.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{HO_3C273_Fig6.pdf}
\caption{The relation between the central black hole mass and the de-projected distance of the jet collimation break from it \citep{Okino2021}.
}
\label{f:Okino_3C273}
\end{figure}
\begin{figure*}[th]
\centering
\includegraphics[width=10cm]{AP_M87_helix_Fig7.pdf}
\caption{A polarization study of the conical jet of M87 reveals a helical magnetic field configuration \citep{Pasetto2021}. The upper panel shows the double-helix structure between knots D and I. The middle panel shows that the magnetic field lines largely follow the double-helix structure. The bottom panel, which plots the Faraday depth, shows that the magnetic field has opposite directions where the emission from the two edges can be clearly separated. These suggest a helical configuration for the M87 jet (see \citealt{Pasetto2021} for more details).
}
\label{f:M87_Pasetto}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=8.5cm]{Boccardi_heg_leg_02.pdf}
\caption{Distribution of the de-projected transition distances in LERGs and HERGs where the distance is expressed
in units of their Schwarzschild radii \citep{Boccardi2021}. Reproduced with permission \copyright ESO.}
\label{f:Boccardi_heg_leg}
\end{figure}
\subsection{Quasar 3C273, J1229+0203}
Is the transition in jet collimation seen in low-luminosity AGN also present in higher-luminosity sources with more powerful jets? The archetypal quasar 3C273, whose black hole mass and viewing angle have been estimated to be $(2.6\pm1.1)\times 10^8 M_\odot$ and 12$^\circ$ respectively by the \citet{GravityCollaboration2018}, was observed by \citet{Okino2021} with an angular resolution of 60 $\mu$as at 86 GHz. They resolve the innermost jet on scales of $10^5$ Schwarzschild radii, and find a similar behaviour to that of the lower-luminosity AGN. Here too the inner jet collimates parabolically while the outer jet expands conically. The jet collimation break is seen at $\sim 8 \times 10^6 R_s$, where $R_s$ is the Schwarzschild radius.
\citet{Okino2021} compare the results for 3C273 with other jets in AGN by exploring the relation between the deprojected distance of the collimation break vs the black hole mass. They find that the collimation break occurs over a wide range, from $\sim 10^4 R_s$ to $\sim 10^8 R_s$ (Fig.~\ref{f:Okino_3C273}). They suggest that the transition region is determined not merely by the sphere of gravitational influence of the black hole, but also diverse environmental factors such as the torus, disk, disk wind, and a hot gas cocoon around the jet.
\subsection{FRI radio galaxy Centaurus A, NGC5128, J1325-4301}
\citet{Janssen2021} have presented the image of the nuclear jets in Cen A with an angular resolution of 25 $\mu$as, probing the structure of the jets at about 200 gravitational radii from the $5.5\times 10^7 M_\odot$ black hole. These observations reveal a highly collimated, asymmetrically edge-brightened radio jet as well as a weaker counter jet. There appears to be no radio emission from the spine of the jet, the sheath to spine intensity ratio being $>$5. They find the jet to have a wide initial opening angle of $>40^\circ$ and the width to vary with distance with $k = 0.33$. They suggest that this either indicates strong magnetic collimation or external ambient pressure and density decreasing as $\propto r^{-1.3}$ and $\propto r^{-0.3}$
respectively. The similarity of the spine-sheath structure and a large initial opening angle seen in other nearby galaxies, M87 \citep{Kim2018}, Mkn501 \citep{Piner2009} and 3C84 \citep{Giovannini2018} suggests that this may be a common feature in low-luminosity AGN. These very high-resolution observations provide an opportunity of comparing the observations with general relativistic magnetohydrodynamics (GRMHD) simulations \citep[e.g.][]{Chatterjee2019}. The sheath is possibly the region of interaction between the fast spine and the accretion powered outflow \citep{Janssen2021}.
\subsection{M87, OJ287 and 3C279}
In addition to the ones discussed above, there have been interesting results on jets obtained for a number of well-known AGN. From high-fidelity images of M87 with a resolution of 10 pc, \citet{Pasetto2021} find ``a double-helix morphology of the jet material between $\sim$300 pc and $\sim$1 kpc''. They suggest a helical magnetic field which is sustained on these scales by Kelvin-Helmholtz instabilities (Fig.~\ref{f:M87_Pasetto}).
In the context of M87, it is important to note that since the publication of the total-intensity images around the supermassive black hole by the Event Horizon Telescope (EHT), linear polarization images have been reported recently \citep{Akiyama2021a,Akiyama2021b}. The high angular resolution of $\sim$20 $\mu$as, $\approx$2.5 R$_s$, enabled a study of the polarization properties, magnetic fields and plasma properties in the vicinity of the event horizon. Only a part of the ring appears polarized with the degree of polarization rising to $\sim$15 per cent in the south-western part. The low polarization is possibly due to unresolved structures within the EHT beam which they attribute to Faraday rotation within the emission region. The net linear polarization pattern is azimuthal which may be due to organized poloidal magnetic fields. The EHT Collaboration estimate the density n$_{e} \sim 10^{4-7}$ cm$^{-3}$, magnetic field strength B$\sim$ 1-30 G, and electron temperature T$_{e} \sim (1-12)\times10^{10}$ K. They also find that the consistent GRMHD models are of magnetically arrested accretion disks, and estimate a mass accretion rate onto the black hole of $(3-20) \times 10^{-4}$ M$_\odot$ yr$^{-1}$.
Polarimetric space VLBI observations of the well-known blazar OJ287 enabled imaging the inner jet with an angular resolution of 50 $\mu$as \citep{Gomez2022}. They find the innermost jet to be dominated by a toroidal magnetic field, and suggest that the VLBI core is threaded by a helical magnetic field. Another archetypal blazar 3C279 was observed in total intensity at mm wavelengths with an angular resolution of 20 $\mu$as \citep{Kim2020}. These observations show non-radial motion of inner jet components at apparent speeds of $\sim$15c and $\sim$20c.
\subsection{Collimation in LERGs, HERGs, FRI and FRII sources}
As mentioned earlier the collimation break occurs over a wide range, from $\sim 10^4 R_s$ to $\sim 10^8 R_s$ \citep[e.g.][]{Okino2021}. In NGC315 jet collimation is complete within a parsec, in M87 the jet is anchored in the vicinity of the ergosphere (\citealt{Kim2018} and references therein), while in Cygnus A, \citet{Boccardi2016} find a minimum jet width of $\sim230 R_s$ and suggest that the jet may be launched from a larger distance. The jet collimation break in 3C273 is also at a large distance of $\sim 8 \times 10^6 R_s$ \citep{Okino2021}.
Cygnus A is an archetypal FRII source and its jet power is larger than that of M87, an FRI source, by about 3 orders of magnitude. Do the nuclear jets suggest a difference in collimation between FRI and FRII sources, and between HERGs and LERGs, which reflect different accretion processes? \citet{Boccardi2021} investigate these aspects using a sample of 27 sources, defining HERGs and LERGs on the basis of the ratio of the x-ray to Eddington luminosity, $L_{x-ray}/L_{Edd}$: those with a ratio $>1.1\times10^{-3}$ are considered to be HERGs, while those below it are LERGs. Although the sample is small and limited in redshift, the HERGs tend to show a transition above $10^6 R_s$ while the LERGs show it below this limit (Fig.~\ref{f:Boccardi_heg_leg}). This suggests a relationship between jet collimation and the properties of the accretion disk and black hole. \citet{Boccardi2021} also suggest jets in HERGs to have a more prominent outer sheath, and an outer launch radius
$>100 R_s$. Jets in sources such as in M87 appear anchored in the innermost disk regions. BL Lac objects which are the beamed counterparts of FRI sources appear consistent with LERGs in their jet collimation characteristics.
Disk winds may be responsible for the prominent outer sheath in FRII sources and play a prominent role in the collimation of jets. These winds could be probed for example by x-ray detection of ultrafast outflows (UFOs) in AGN originating in the accretion disk \citep{Tombesi2010,Tombesi2014,Reynolds2015}. Most of the HERGs in the \citet{Boccardi2021} sample exhibit UFOs suggesting that disk winds can be a viable process for the collimation of jets in these sources. As most of the HERGs belong to the FRII category, it is conceivable that the FRII sources have a prominent sheath which stabilises the inner spine and minimises entrainment from the interstellar medium (\citealt{Perucho2006} and references therein). The FRI jets on the other hand are more prone to entrainment and more dissipative. Besides enlarging the sample, it would also be important to study the collimation of jets in FRI HERGs and the FRII LERGs (cf. \citealt{Mingo2022}).
\begin{figure*}[ht!]
\centering
\includegraphics[scale=0.125]{Thomasson_3C459.pdf}
\caption{ MERLIN images of 3C459 with an angular resolution of 70 mas. The upper panel shows the total-intensity image, while the
lower panel shows the components with the polarization E-vectors superimposed on the total-intensity contours. The eastern component in this highly asymmetric source is strongly depolarized, possibly due to interaction with the external medium \citep{Thomasson2003}.}
\label{f:3C459_Thomasson}
\end{figure*}
\section{Jets in Compact steep-spectrum and peaked-spectrum sources}
Our understanding of compact steep-spectrum (CSS) and peaked-spectrum (PS) sources, defined to be less than 20 kpc in size with the latter exhibiting a peak in the radio spectrum, has been reviewed recently by \citet{ODea2021}. The three main scenarios for the nature of CSS and PS sources are as follows. (i) The PS sources are young sources with those which peak at higher frequencies being smaller and younger. The PS sources evolve to the CSS sources which in turn evolve into the larger sources as the jets propagate outwards through the interstellar medium of the host galaxy and later through the intragroup/intracluster medium and then the intergalactic medium. (ii) The jets in CSS and PS sources may be confined to small dimensions within the confines of their host galaxies due to a dense interstellar medium. The jets may also be disrupted. (iii) Alternatively the jets in these sources may be intermittent. Although each of these scenarios may be applicable to different sets of sources, not all CSS and PS sources are likely to evolve into large radio galaxies and quasars. In this Section we summarise a few salient features relevant for the propagation of jets.
The CSS and PS sources are smaller than the dimensions of the host galaxy. Hence the effects of propagation of the jets through the interstellar medium of the host galaxy
can be probed via both structural and polarization properties of the lobes. Also feedback processes of the jets which may affect the interstellar medium of the host galaxy as well as star formation can be studied from the properties of the host galaxy and the interstellar medium.
\subsection{Jet propagation in an asymmetric environment}
Most of the CSS and PS sources, when observed with high angular resolution, exhibit a double-lobed structure, often with a radio core particularly in the case of quasars, although the jets in some appear quite distorted and complex. Examples of the latter include 3C48 and 3C119, suggesting disruption of the jet via interaction with the interstellar medium of the host galaxy. For the ones with a double-lobed structure, selected largely from strong-source samples, \citet[and references therein]{Saikia2003b} investigated the symmetry parameters of CSS and PS sources, such as the separation ratio, the flux density ratio of the oppositely directed lobes and the overall misalignment of the sources, compared with the larger radio galaxies and quasars. The CSS and PS sources were found to be more asymmetric and misaligned, possibly due to interaction of the jets with an asymmetric external environment. The finding that the more luminous lobe with the more prominent hotspot is often nearer to the core suggests interaction with an external environment rather than effects of orientation and relativistic motion. In the latter case, the more prominent hotspot would have been on the approaching side of the jet and farther from the nucleus. Similar results were found for weaker source samples as well (\citealt{KunertBajraszewska2016} and references therein).
The external environment which is a magnetoionic plasma through which the jets are propagating can also be probed via polarization observations. The medium will cause a rotation of the E-vector of the synchrotron radiation, the degree of rotation being given by $\chi(\lambda) = \chi_o + RM\lambda^2$ where the rotation measure $RM = 812 \int n_e B_\parallel dl$ rad m$^{-2}$. Here $\chi(\lambda)$ is the position angle (PA) of the E vector at a wavelength $\lambda$,
$\chi_o$ is the PA at zero wavelength, $n_e$ is the electron density in cm$^{-3}$, the parallel component of the magnetic field $B_\parallel$ is in units of mG and $l$ in parsec.
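A least-squares fit of this relation can be sketched as follows (Python; the input angles are synthetic, generated with the $-3140$ rad m$^{-2}$ value quoted below for 3C147, and the position angles are assumed to be already free of $n\pi$ ambiguities):

```python
import numpy as np

def fit_rotation_measure(lam_m, chi_rad):
    """Fit chi(lambda) = chi_0 + RM * lambda**2 by least squares.

    lam_m   : observing wavelengths in metres
    chi_rad : E-vector position angles in radians (assumed unwrapped)
    Returns (RM in rad m^-2, chi_0 in rad).
    """
    rm, chi0 = np.polyfit(np.asarray(lam_m)**2, chi_rad, 1)
    return rm, chi0

# Synthetic example using the RM quoted for the southern lobe of 3C147.
C = 2.998e8                              # speed of light, m/s
freqs = np.array([5e9, 8e9, 15e9])       # Hz (assumed observing bands)
lam = C / freqs
chi = 0.5 + (-3140.0) * lam**2           # chi_0 = 0.5 rad assumed

rm, chi0 = fit_rotation_measure(lam, chi)
print(f"RM = {rm:.1f} rad/m^2, chi_0 = {chi0:.3f} rad")
```

At low frequencies the $n\pi$ ambiguity makes such a naive fit unreliable, which is why angle-unwrapping or RM-synthesis techniques are normally used in practice.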
Depolarization of the radio emission can occur due to thermal plasma mixed within the synchrotron emitting region, with the emission from different depths being rotated by different amounts, as well as by unresolved structures in an external medium or screen which may have very different rotation measures.
Observations of high angular resolution are needed to adequately resolve the structure and probe the environment through which the jets are propagating. \citet{Fanti2001} observed the B3-VLA sample of sources and reported significant asymmetries in polarization of the oppositely directed lobes, again suggesting asymmetries in the external environment. \citet{Saikia2003} studied a stronger source sample of 3CR and 4C sources and also found a significantly higher degree of polarization asymmetry in the lobes, compared with a control sample of larger sources. They argued that this is unlikely to be due to orientation, as in the Laing-Garrington effect, but reflects an asymmetry in the environment through which the jets are propagating. This is sometimes also seen in radio galaxies and quasars with sizes larger than those of the CSS sources. Two striking examples, although with sizes $>$20 kpc, are among the most asymmetric sources known: 3C254, associated with a quasar, and 3C459, associated with a galaxy (Fig.~\ref{f:3C459_Thomasson}) \citep{Thomasson2003,Thomasson2006}. In both cases the lobe located much closer to the nucleus is strongly depolarized relative to the one on the opposite side, suggesting interaction of the jet with a dense magnetoionic cloud.
\begin{figure*}
\centering
\includegraphics[width=15cm]{Laing_NC315.pdf}
\caption{Observations and models of the nearby radio galaxy NGC 315 \citep{Laing2014,Laing2015}. (a) False colour radio image of the galaxy with the vectors
denoting the degree of polarization, p, and direction of the apparent magnetic
field. (b) Their model fit to the observations shown in (a). (c) The velocity field derived from their model in units of c. Figure is from \citet{Laing2015}. }
\label{f:NGC315_Laing}
\end{figure*}
The asymmetries in the environment can also be probed via estimates of the rotation measure or $RM$ of the lobes.
Although many CSS sources are known to have high values of $RM$ extending to thousands of rad m$^{-2}$, there are also sources with low values (cf. \citealt{ODea2021} and references therein). Detailed imaging of these sources shows that evidence of interaction of jets is quite common. In the CSS quasar 3C147, \citet{Junor1999} reported a huge differential $RM$, with the southern component, which faces the jet and is much closer to the nucleus, having a value of $\sim -$3140 rad m$^{-2}$ compared with +630 rad m$^{-2}$ for the more distant one.
VLBI-scale polarimetric observations of the parsec-scale jet suggest $RM$ values ranging from $-1200$ to $-2400$ rad m$^{-2}$ \citep{Zhang2004,Rossetti2009}. \citet{Junor1999} suggested that the jet is interacting with a dense cloud of gas embedded in the magnetoionic medium of the host galaxy, which is hindering its advance. Other examples of enhanced $RM$ near where the jet bends due to interaction with the external environment include B0548+165, B1524$-$136 and 3C119 \citep{Mantovani2002,Mantovani2010}. Sources with a complex morphology, possibly due to disruption of the jet, may also have a high $RM$, as in 3C119, 3C318 and 3C343 \citep{Mantovani2005,Mantovani2010}.
However, the relationship between the total-intensity structure and $RM$ variations is rich and diverse. For example, \citet{Cotton2003} find a high $RM$ and a brightening of the jet in 3C43 where the jet bends, while 3C454 appears to have a high $RM$ across the jet with no significant increase in either brightness or $RM$ where the jet bends. In this case the bend in the direction of the jet may not be due to collision with a cloud of gas.
The possibility of jets propagating through a dense asymmetric medium on opposite sides has also been inferred from measurements of hotspot velocities from their proper motions, as in J0111+3906 and J1944+5448 \citep{Polatidis2002,Rastello2016}. The closer hotspots, which are also brighter, are moving with smaller velocities. Velocities inferred from estimates of the radiative ages of very asymmetric sources also suggest that the nearer components are moving with slower velocities \citep{Orienti2007a}. H{\sc{i}} in absorption has been detected towards the closer and brighter hotspots in the CSS sources 3C49 and 3C268.3, which are also depolarised and associated with optical emission-line gas \citep{Labiano2006}.
The radio structures of CSS and PS sources are often asymmetric suggesting that the jets are propagating through an asymmetric external environment on these scales with greater dissipation of energy on the jet side \citep{Bicknell2003,Jeyakumar2005}. However in reasonably symmetric sources the hotspots may be traversing outwards with similar velocities as for example in 4C31.04 \citep{Giroletti2003} and J1511+0518 \citep{An2012}.
\section{Large-scale jets}
The kpc-scale jets in FRI and FRII sources, which can extend up to hundreds of kpc, appear structurally different. The FRII jets appear well collimated all along, leading to the formation of hotspots in the lobes, often at the outer edges. The jets in FRI sources tend to `flare', exhibiting an increase in the width of the jet with distance from the AGN. Both FRI and FRII jets appear asymmetric close to the AGN, but the FRI jets are more symmetric on larger scales. The broad picture is that the jets in both FRI and FRII sources are initially relativistic, but the jets in FRI sources decelerate while those in FRII sources remain relativistic till they reach the outer lobes. The FRI jets are more prone to mass loading or entrainment as they traverse outwards. Mass loading could be due to either stellar winds from within the volume traversed by the jet \citep[e.g.][]{Komissarov1994,Bowman1996} or entrainment of material from the interstellar medium of the host galaxy (e.g. \citealt{Bicknell1986,Rosen2000}, and references therein).
A number of key and outstanding questions related to our understanding of jets on kpc scales have been highlighted by \citet{Laing2015}. These include the velocity fields of the jets especially the differences in the FRI and FRII sources; jet composition and effects of entrainment; magnetic field structure; confinement of jets and effects of the external environment; generation of relativistic particles and the effects of feedback on the interstellar medium on small scales and on intracluster or intergalactic medium on larger scales. Detailed studies of jets at radio wavelengths have been possible for the FRI sources which can be well resolved transverse to the jet axis, while the jets in FRII sources are narrower.
\subsection{Jets in FRI sources}
One of the more detailed empirical models to understand jets in FRI radio galaxies was developed by \citet{Laing2002a,Laing2002b} initially applying it to the radio galaxy 3C31. They assume that the two oppositely-directed jets are axisymmetric, intrinsically symmetrical and stationary. The jets are shown to be relativistic so that the apparent asymmetries due to relativistic aberration are much larger than intrinsic asymmetries. They model the jet geometry, three-dimensional distributions of velocity, emissivity and magnetic field structure. These are optimised by comparison with high-resolution images.
They suggest that the jets can be divided into three parts: an inner region which is well collimated; a region of rapid expansion, referred to as the flaring region, after which the jets recollimate; and a conical outer region. The magnetic field structure is primarily longitudinal and toroidal. The on-axis velocity drops at the end of the flaring region from $\sim$0.8c to $\sim$0.55c, decreasing further outwards. The velocity at the edges of the jet is significantly lower, with the deceleration of the jet being possibly due to entrainment from the external medium. \citet{Laing2002b} suggest that entrainment from the galactic atmosphere is the dominant process at large distances, while stellar mass loss could make a significant contribution near the flaring point.
This has been extended to a larger sample of FRI sources which have been observed with high sensitivity with the VLA by \citet{Laing2013}, who provide a summary of the results. The observations and model for one of their galaxies, NGC315, are shown in Fig.~\ref{f:NGC315_Laing}. In the regions where the jets are resolved in the transverse direction, the jets appear to flare increasing in opening angle before recollimating and then having a conical outflow at a distance $r_o$ from the AGN. The velocity is $\sim$0.8c at $\sim 0.1r_o$ where the jet brightens rapidly. The high emissivity
continues till $\sim 0.3r_o$, with rapid deceleration starting at $\sim 0.2r_o$ and continuing till $\sim 0.6r_o$, followed by a constant flow speed. The outflow speed at the jet edges is slower than in the spine of the jet. The magnetic field is predominantly longitudinal close to the AGN but predominantly toroidal after recollimation. The flaring region would require reacceleration of the ultrarelativistic particles to compensate for the adiabatic losses. Also, x-ray synchrotron emission is observed from this region. The evolution and observed characteristics of the jets are best understood in terms of interaction with the external environment, with most entrainment occurring before recollimation \citep{Laing2014,Laing2015}.
High-quality images of eleven FRI jets showed that the spectral index between 1.4 and 8.5 GHz decreases with distance from the nucleus
in all the sources \citep{Laing2013}, similar to what was seen earlier \citep{Laing2008}. The mean spectral index when the jets first brighten abruptly is 0.66$\pm$0.01, and after the jets recollimate the mean spectral index flattens to 0.59$\pm$0.01. The mean change in spectral index, which is more robustly measured, is $-0.067\pm0.006$ \citep{Laing2013}. Their jet model associates this with a decrease in the jet velocity from $\sim$0.8c to less than $\sim$0.5c, reflecting the particle acceleration processes at play. The possibility of first-order Fermi acceleration would require shocks all along the volume of the jet. The similarity of the spectral index evolution along the FRI jets studied by \citet{Laing2013}, especially when normalized to the same recollimation distance, is striking. This is in contrast to the FRII jets, which exhibit greater dispersion in their spectral indices and often have steeper spectra. Although such studies need to be extended to FRII jets and to a larger sample of FRI jets, the difference suggests that the dominant particle acceleration processes may be different for the FRI and FRII jets.
A detailed study of the total intensity and linear polarization asymmetries of the jets in two FRI galaxies B2~0206+35 (UGC1651) and B2~0755+37 (NGC2484)
has been made by \citet{Laing2012}. They have shown that the asymmetries can
be understood if the jets are intrinsically symmetrical, with decelerating relativistic outflows, but are also surrounded by mildly relativistic backflows. The backflow velocities are in the range $0.05<\beta<0.35$ and could be traced to distances from the AGN of at least $\sim$15 kpc and 50 kpc for B2~0206+35 and B2~0755+37 respectively. Backflows are normally associated with FRII sources, and it is interesting to find examples among FRI radio sources. \citet{Laing2012} list a number of open questions, including where the backflow starts and ends and how ubiquitous it is among FRI sources, which the new generation of radio telescopes will help address.
\subsection{Jets in FRII sources}
Detailed modelling, as has been done for the extended jets in FRI sources, has not been possible for the ones in FRII sources owing to inadequate resolution. Also, jets in FRII radio galaxies are often quite weak, making detailed work difficult at current sensitivity limits.
Collimation of jets has been probed by examining the structure of jets, especially in quasars and possible correlation of the size of hotspots with projected linear size. In their detailed study of 12 3CR quasars, \citet{Bridle1994} examined the variation of width transverse to the jet axis with distance from the core, and found that after an initial rapid expansion, the expansion slows down and there is evidence of recollimation. This is consistent with studies of other jets as well. However they note that while the spreading rate, defined as the ratio of knot width to distance from core, is often $>$0.1 for jets in low-luminosity sources as seen for the FRI sources, but rarely $>$0.1 for high-luminosity ones. Collimation of jets in FRII sources may also be examined by studying the variation of hotspot size with projected linear size. For a sample of FRII radio galaxies larger than about 70 kpc, \citet{Hardcastle1998c} found the hotspot size to be correlated with the projected size with a slope of about unity; consistent with the trend noted by \citet{Bridle1994} that the hotspot size scales with linear size. \citet{Jeyakumar2000}
extended this to compact steep-spectrum (CSS) and peaked-spectrum (PS) radio sources, defined to be less than about 20 kpc, and found that the hotspot size for CSS and PS sources increases with linear size, with some evidence of flattening beyond this scale. The jets in quasars exhibit a significant tendency to point towards the more prominent hotspot, while this trend is weaker in the case of radio galaxies (cf. \citealt{Hardcastle1998c}). This is likely due to mild relativistic beaming of the hotspots and would be consistent with the unified scheme for radio galaxies and quasars. The high detection rate of jets in quasars compared with a much lower fraction for radio galaxies \citep[e.g.][]{Fernini1993,Hardcastle1998c} is also consistent with the unified scheme.
With what velocities do the jets in FRII sources traverse outwards? The correlation of jet-sidedness with lobe depolarization, the Laing-Garrington effect \citep{Laing1988,Garrington1988}, demonstrates that the prominent jets are on the approaching side. This is consistent with relativistic beaming being a viable explanation of jet asymmetry. Superluminal motion shows that the nuclear jets are travelling close to the velocity of light. Assuming that the nuclear jets have a typical Lorentz factor $\gamma \sim$5, \citet{Bridle1994} estimate the Lorentz factor of the extended quasar jets in their sample to be $\gamma_j = 1.6\pm0.2$. This suggests that although extended radio jets may start off with highly relativistic velocities, they on average slow down with increasing distance from the nucleus.
If the jets and their environments were intrinsically symmetric, it would in principle be possible to estimate jet velocities from the observed asymmetry in the jet to counter-jet brightness or flux density ratio. However, although relativistic beaming appears to play a dominant role, there is also evidence that intrinsic asymmetries contribute.
Examples of this are seen in the form of increased brightness in regions where jets appear to bend. Also, in a small number of cases jets contribute over about 30 per cent of the total flux density in sources with relatively weak cores, which are hence likely to be inclined at large angles to the line of sight. Examples include the quasars 3C9 \citep{Bridle1994}, 3C280.1 \citep{Swarup1982} and B1857+566 \citep{Saikia1983}. These jets point in the direction of a weak hotspot, suggesting that dissipation of energy in the jet could also contribute significantly to the observed jet asymmetry. Examples of weak-cored, one-sided radio sources also suggest intrinsic jet asymmetries \citep{Saikia1989}, although deeper observations are required to clarify the situation.
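The velocity estimates discussed above rest on the standard Doppler-beaming relation for an intrinsically symmetric pair of smooth jets, $R = \left[(1+\beta\cos\theta)/(1-\beta\cos\theta)\right]^{2+\alpha}$, where $\alpha$ is the spectral index ($S \propto \nu^{-\alpha}$). A minimal sketch of the forward relation and its inversion (the numerical values are illustrative only):

```python
import math

def jet_counterjet_ratio(beta, theta_deg, alpha=0.6):
    """Flux-density ratio of approaching to receding jet for an
    intrinsically symmetric, smooth (continuous) jet pair:
    R = [(1 + beta*cos(theta)) / (1 - beta*cos(theta))]**(2 + alpha),
    with spectral index alpha defined via S ~ nu**(-alpha)."""
    mu = math.cos(math.radians(theta_deg))
    return ((1 + beta * mu) / (1 - beta * mu)) ** (2 + alpha)

def beta_cos_theta_from_ratio(ratio, alpha=0.6):
    """Invert R for the product beta*cos(theta), the quantity that the
    observed sidedness actually constrains."""
    x = ratio ** (1.0 / (2 + alpha))
    return (x - 1.0) / (x + 1.0)

# A mildly relativistic jet (beta ~ 0.6) at 60 degrees to the line of
# sight is already several times brighter than its counter-jet.
r = jet_counterjet_ratio(0.6, 60.0)
```

Note that the observations constrain only $\beta\cos\theta$, which is why intrinsic asymmetries and independent constraints on $\theta$ matter for the velocity estimates.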
\subsection{Transition cases}
In addition to the jets in FRI and FRII sources, also referred to as weak and strong-flavour jets, there are
a number of transition cases, which often occur in sources classified as FRI-II. These jets may flare, but do not appear to decelerate significantly, and have detected counter-jets \citep{Laing2015}. One of the well-studied examples in this category is NGC6251, a giant radio galaxy with an overall size of $\sim$1.56 Mpc and a large side-to-side ratio for the oppositely-directed jets \citep{Perley1984,Laing2015}.
\subsection{Giant radio sources}
Although giant radio sources (GRSs) have traditionally been defined to be $>$1 Mpc \citep[e.g.][]{Schoenmakers2001}, a limit of 0.7 Mpc has been widely used recently with the current cosmological parameters (e.g. \citealt{Kuzmicz2018,Dabhade2020a,Dabhade2020b}). Most GRSs belong to the FRII class, with some in the intermediate FRI/II category and only a small fraction in the FRI class. The FRIs include tailed radio sources in clusters of galaxies, such as 3CR129 \citep[e.g.][]{Lane2002,Lal2004} and 3CR130 \citep[e.g.][]{Hardcastle1998b}. In the early compilation by \citet{Ishwara-Chandra1999}, only 4 of the 53 GRSs were classified as FRIs. The percentages of FRIs in the LoTSS \citep{Dabhade2020a} and SAGAN samples \citep{Dabhade2020b} are similar. In the compilation by \citet{Kuzmicz2018}, only 20 of the 349 GRSs are FRIs, again a similar percentage to that of \citet{Ishwara-Chandra1999}. In the \citet{Kuzmicz2018} sample, the median projected size of the FRI GRSs was found to be lower than that of the FRIIs. The FRIs also appear confined to the nearby Universe with a maximum redshift of $\sim$0.24, while the median redshift of the entire sample is 0.24, with the highest value being 3.22 and 28 objects having a redshift $>$1.
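The 0.7 Mpc threshold depends on the assumed cosmology, since the projected linear size follows from the angular size via the angular-diameter distance. A hedged sketch of the conversion in a flat $\Lambda$CDM cosmology (the $H_0$ and $\Omega_m$ values are illustrative; the comoving distance is integrated numerically with the trapezoidal rule, standard library only):

```python
import math

C_KM_S = 299792.458  # speed of light, km/s

def angular_diameter_distance_mpc(z, h0=70.0, omega_m=0.3, n=10000):
    """Angular-diameter distance D_A = D_C/(1+z) in flat LambdaCDM,
    with the comoving distance D_C = (c/H0) * integral dz'/E(z')
    evaluated by the trapezoidal rule."""
    omega_l = 1.0 - omega_m
    dz = z / n
    integral = 0.0
    for i in range(n + 1):
        zi = i * dz
        e = math.sqrt(omega_m * (1 + zi) ** 3 + omega_l)
        w = 0.5 if i in (0, n) else 1.0
        integral += w / e
    d_c = (C_KM_S / h0) * integral * dz  # comoving distance, Mpc
    return d_c / (1.0 + z)

def projected_size_kpc(theta_arcsec, z, **kw):
    """Projected linear size of a source of angular size theta."""
    d_a = angular_diameter_distance_mpc(z, **kw)
    return d_a * 1000.0 * theta_arcsec * math.pi / (180.0 * 3600.0)
```

With these parameters a source of one arcsecond at $z=1$ corresponds to roughly 8 kpc, so GRS candidates in surveys span arcminutes on the sky.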
As the jets in FRI sources are highly dissipative as they traverse outwards, a significantly smaller fraction may be able to reach sizes $>$0.7 Mpc compared with FRIIs. However, the small fraction may also be partly due to the difficulty of detecting weak diffuse emission, especially at high redshifts where inverse-Compton losses against the cosmic microwave background may dominate over synchrotron losses. Deep radio observations sensitive to diffuse large-scale structure with the required resolution at low radio frequencies should help clarify whether FRI GRSs may be more common than presently observed. In the nearby FRI radio galaxy 3CR31, deep low-frequency observations have revealed that the plumes of radio emission extend much farther than seen earlier, making the projected linear size $\sim$1.1 Mpc \citep{Heesen2018}. Modelling of the jets in FRIs from high-resolution observations has been discussed in Section 7.1, including the jets in the giant radio galaxy NGC315 \citep{Laing2014,Laing2015}. The jets in the giant radio galaxy 3CR31 have been modelled with an inclination angle of $\sim$52$^\circ$ to the line of
sight, an on-axis jet speed of $\beta\sim$0.9 at 1 kpc from the nucleus, decelerating to $\sim$0.22 at 12 kpc, with slower speeds at the edges \citep{Laing2002a,Laing2002b}. Deriving an external pressure profile from x-ray observations, \citet{Croston2014} extended the modelling of entrainment to $\sim$120 kpc, and \citet{Heesen2018} have extended the analysis to larger distances.
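The competition between synchrotron and inverse-Compton losses mentioned above is often expressed through the equivalent magnetic field of the CMB, $B_{\rm CMB} \approx 3.25(1+z)^2\,\mu$G: when the lobe field drops below this value, inverse-Compton losses off the CMB dominate. A one-line sketch:

```python
def b_cmb_microgauss(z):
    """Equivalent magnetic field of the CMB in microgauss: synchrotron
    losses in a field B equal inverse-Compton losses off the CMB when
    B = B_CMB ~ 3.25*(1+z)**2 uG (CMB energy density scales as (1+z)^4)."""
    return 3.25 * (1.0 + z) ** 2

def ic_dominates(b_microgauss, z):
    """True when inverse-Compton losses off the CMB exceed synchrotron
    losses for a lobe field of the given strength."""
    return b_microgauss < b_cmb_microgauss(z)
```

A lobe with a few-$\mu$G field that is synchrotron-dominated at $z\sim0$ is already strongly inverse-Compton dominated by $z\sim1$, which is why diffuse lobes fade from the radio sky at high redshift.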
The fraction of FRII GRSs with well-defined jets is small \citep{Dabhade2020b,Kuzmicz2021}. In a sample of 174 giant radio quasars (GRQs), less than $\sim$3 per cent have been found to exhibit radio jets \citep{Kuzmicz2021}. As GRSs may on average be expected to be inclined at larger angles to the line of sight than smaller sources associated with similar hosts, jets may appear weaker due to Doppler effects. However, quasars are expected to be inclined within $\sim$45$^\circ$ of the line of sight in the unified scheme for FRII radio galaxies and quasars \citep[e.g.][]{Barthel1989}, and the relative core strengths of GRQs were found to be broadly consistent with the unified scheme (cf. \citealt{Ishwara-Chandra1999}). The extremely low fraction of quasars with radio jets is therefore somewhat surprising, and deeper observations with adequate resolution should help clarify this aspect. Our current understanding of GRSs and the possibility of future studies with the SKA are discussed by \citet{Dabhade2022}.
\subsection{Jets in tailed radio sources}
Extragalactic radio sources with a head-tail shape, where the parent optical galaxy is at the head of the radio source, were first identified by \citet{Ryle1968}. More examples of such sources led \citet{Miley1972} to suggest that their structure is due to bending of the jets by the ram pressure of the intracluster medium. Further observations of radio sources in clusters of galaxies showed that the opening angle of the tails has a large range: those with small opening angles were termed narrow-angle tailed (NAT) sources, while those with larger opening angles were termed wide-angle tailed (WAT) sources. The WATs appear to be of higher radio luminosity and tend to be associated with the dominant galaxies in clusters, although not exclusively \citep[e.g.][]{Owen1976,GoldenMarx2019}.
Clusters of galaxies are dynamical systems in which there could be infall of individual galaxies or small groups, and mergers of clusters of similar mass or of a smaller cluster with a bigger one. These interactions lead to the development of turbulence, shocks and sloshing motions in the intracluster medium (ICM). In addition there is feedback from AGN in clusters of galaxies, with the jets often showing evidence of recurrent activity (Section 9). Extended radio sources in clusters evolve in such a turbulent medium. While the radio jets in NATs may be bent by ram pressure due to the motion of the parent galaxy through the ICM, such an explanation is unlikely for WATs, as their host galaxies are often the dominant galaxies in clusters and are not expected to have high velocities relative to the ICM. The wide range of shapes of the jets and tails is likely due to a combination of the motion of the galaxy and the dynamics of the ICM. Their appearance will also be strongly influenced by projection effects. It is also interesting to note that recent observations of tailed radio sources in clusters have revealed new features which pose interesting challenges to our understanding (e.g. \citealt{Gendron2021} and references therein).
In this article we highlight a few aspects of jets in tailed radio sources in clusters. The jets in NATs are similar to those of FRI sources, except for being bent into a C- or V-shape. Among the archetypal NATs is 3CR83.1B (NGC1265), which was highlighted by \citet{Ryle1968} and \citet{Miley1972}, studied in detail by \citet{ODea1986,ODea1987}, and more recently by a number of other authors (e.g. \citealt{Gendron2020,Gendron2021}, and references therein). With the jets being swept backwards, the inner jet knots appear brightest on the leading or ``front'' edge, with higher fractional polarization and a magnetic field along the jet axis; the field lines have possibly been sheared tangentially. In the latter one-third of the jet the magnetic field lines are more complex, with a significant perpendicular component. The jets exhibit regions of faster and slower expansion with distance from the core, and wiggles as they traverse outwards, possibly due to the development of Kelvin-Helmholtz instabilities \citep{ODea1986,ODea1987}.
\begin{figure*}
\centering
\includegraphics[width=14.0cm]{BH_3C273_cloud_schematics7_Fig11.jpeg}
\caption{Schematic illustration to explain the gas kinematics in the central region of the quasar 3C273 where a rotating gas disk is affected by the emerging jet with its associated expanding hot gas cocoon \citep{Husemann2019}. \copyright AAS. Reproduced with permission.}
\label{f:3C273_Husemann}
\end{figure*}
One may enquire whether jets and tails which appear unresolved and occur on only one side of a galaxy represent truly one-sided jets. High-resolution observations of a sample of such sources have shown that almost all have two-sided jets when observed with sufficiently high angular resolution, demonstrating that these are similar to other NATs. The jet to counter-jet brightness ratio suggests that the large-scale jets are at best mildly relativistic, with velocities of $\sim$0.2c, similar to those of FRI radio galaxies \citep{TernideGregory2017}. The velocities could be larger for the nuclear jets. IC310, associated with an S0 galaxy, appeared to have a one-sided parsec-scale jet in the observations of \citet{TernideGregory2017}, but was later shown to have two-sided jets on a larger scale \citep{Gendron2020}. It has the most prominent core among the sources observed by \citet{TernideGregory2017} and exhibits blazar-like characteristics \citep[cf.][]{Glawion2017}; it is likely that in such cases relativistic beaming effects play a role.
The WATs on the other hand have luminosities near or above the classical Fanaroff-Riley break, and when observed with high angular resolution exhibit one or two jets which are well-collimated for tens of kpc and appear similar to those of FRII sources. They then broaden and flare dramatically to form extended plumes or lobes of emission \citep[e.g.][]{ODonoghue1993,Hardcastle1998b}. The large-scale jet velocities have again been found to be mildly relativistic with velocities of $\sim$0.2c from jet to counter-jet brightness ratios. The plumes and lobes of emission were generally seen to bend in the same direction forming a C-shaped structure, but deeper observations may reveal more complex structures reflecting the complexity of the cluster and its ICM.
For example, in the tailed galaxy NGC1272 the collimated jets ``initially bend to the west, and then transition eastward into faint, 60 kpc long extensions with eddy-like structures and filaments'' \citep{Gendron2020}. They suggest that gas motions in the ICM and the motion of the galaxy in the cluster, including its passage through a sloshing cold front, all play a role. More sensitive observations, especially with the SKA, are likely to reveal more such structures and provide deeper insights into the ICM and the interaction of the jets with it.
\section{Jet interaction and feedback}
Feedback processes in AGN include the effects of jets, winds, cosmic rays and radiation on the host galaxy, its interstellar medium and the environment. Among these, the effects of radio jets are perhaps the best understood, although it is often difficult to disentangle the different contributions. The energy input from radio jets could regulate star formation, suppressing it in massive galaxies and determining the high-mass end of the galaxy luminosity function, and could prevent cooling flows in clusters of galaxies, helping us understand the balance of heating and cooling processes in the intracluster medium (cf. \citealt{Benson2003,Croton2006,Fabian2012,McNamara2007,McNamara2012}). Feedback has been invoked to understand the strong colour bi-modality of galaxies, with star formation being suppressed as galaxies move to the red sequence (e.g. \citealt{Baldry2004}, and references therein),
galaxy black-hole and bulge mass correlation \citep{Silk1998}, properties of the circumgalactic medium and evolution of gas in dark matter halos. Radio galaxies and feedback from AGN jets have been reviewed extensively relatively recently by \citet{Hardcastle2020}, and effects on the cold components of the interstellar medium, neutral atomic hydrogen and molecular gas have been reviewed by \citet{Morganti2018}, \citet{Morganti2021} and \citet{Veilleux2020}.
\begin{figure*}
\centering
\includegraphics[width=16.0cm]{SM_CO_outflow_Fig12.pdf}
\caption{Left panel: The position-velocity (PV) diagram of the large-scale gas disk (colour) and the circumnuclear gas (contours) in the radio source B2~0258+35, associated with the galaxy NGC~1167, extracted along the major axis of the large-scale gas disk, highlighting the different kinematics of the regularly rotating gas and the disturbed gas. Right panel: PV diagram of the circumnuclear gas extracted along the radio axis \citep{Murthy2022}.}
\label{f:Murthy_CO_gas}
\end{figure*}
\subsection{The alignment effect and jet-triggered star formation}
One of the early suggestions of jet-cloud interactions triggering star formation was the alignment effect discovered by \citet{McCarthy1987} and \citet{Chambers1987}, in which the optical images of high-redshift radio galaxies align well with the radio axes of the double-lobed radio sources. This was also clearly demonstrated in a sample of 3CR radio galaxies in the redshift interval $0.6<z<1.3$ observed with the HST, VLA and UKIRT \citep{Best1996}. The elliptical galaxy with its old stellar population was seen in the infrared images, distinctly different from the aligned structures seen at optical wavelengths. The alignment effect evolved with size in this redshift range, being less prominent for larger sources, suggesting that it is a relatively short-lived phenomenon. It was natural to assume that this may be due to the formation of young stars triggered by the jet, which have evolved on time scales of $\approx 10^7$ yr, similar to those of the evolution of the double-lobed radio galaxies. \citet{Rees1989} developed a model in which cold clouds at $\approx$10$^4$ K are compressed, triggering star formation, while \citet{DeYoung1989} performed numerical simulations suggesting enhanced star formation behind the shock wave as the gas cools. Although attractive, the explanation may be more complex \citep[e.g.][]{Longair1995,Best2000}. Detection of polarized emission suggested that some of the aligned light could be scattered light from a hidden nucleus. \citet{Best2000} showed that in the young sources the bow shock affects the morphology, kinematics and ionization properties of the emission-line gas, while these are more settled for the larger sources.
Although exploring the alignment effect in GPS sources is a challenge because of their small sizes, it has been done for a couple of GPS sources and a number of CSS objects (see Section 5.5.2 in \citealt{ODea2021} for a summary). The alignment effect in CSS sources is seen at all redshifts although for the larger sources it is confined to $z>0.6$ \citep[e.g.][]{Privon2008}. Recently, \citet{Duggal2021} have
reported extended UV emission co-spatial with the radio source, and have suggested that this may be due to star-formation triggered by the radio jet. Although this remains a possibility which needs further exploration, the alignment effect is perhaps due to a combination of scattered AGN light, nebular continuum emission and star formation. In this context it is also relevant to note that \citet{Collet2015} reported the detection of extended warm ionized gas in two high-redshift galaxies which does not appear to be related to the radio jets, unlike most high-redshift radio galaxies.
They suggest that the extended line emission in these two cases may arise from extended gas disks or filaments in the vicinity of the radio galaxy.
There are a number of well-studied examples of jet-induced star formation in luminous radio galaxies as the jets propagate through the ISM \citep[e.g.][]{Fragile2017}. These include Minkowski's Object (\citealt{Zovaro2020}, and references therein), Centaurus A (\citealt{Salome2017}, and references therein), 3C285
\citep{Salome2015}, 4C 41.17 \citep{Nesvadba2020}, and 3C441 \citep{Lacy1998}. In the `radio-quiet' quasar J1316+1753, the close alignment of the jet and the position angle of the stellar bulge is also suggestive of star formation triggered by the jet, which contributes to the stellar bulge (\citealt{Girdhar2022}; see Section 8.2).
\subsection{Suppression of star formation}
Since the early evidence of radio jets affecting the observed properties of the narrow-line regions in samples of Seyfert galaxies \citep[e.g.][]{Whittle1992}, as well as in detailed studies of individual sources such as NGC~1068 \citep{Axon1998} and Mkn~3 \citep{Capetti1999}, examples of such interaction have also been found in other low-luminosity AGN \citep[e.g.][]{May2018}. In the nearby radio galaxy Coma A the radio-emitting plasma appears closely related to the ionized gas \citep{Tadhunter2000}. The effects of jet-ISM interactions at different wavelengths for the luminous compact steep-spectrum and peaked-spectrum radio sources have been summarized by \citet{ODea2021}. A number of authors \citep[e.g.][]{Morganti2018,Morganti2021,Ruffa2022,Girdhar2022,Murthy2022}, and references therein, have discussed several examples of jet-ISM interaction inferred from H{\sc i} and CO observations.
In this short review we highlight a few illustrative examples of jet-ISM interactions rather than provide an exhaustive list. Blue-shifted H{\sc i} absorption profiles suggest velocities ranging from a few hundred to $\sim1300$ km s$^{-1}$, mass of a few times 10$^6$
to 10$^7$ M$_\odot$ and outflow rates of about 20 - 50 M$_\odot$ yr$^{-1}$ \citep{Morganti2021}. Noted examples where the absorbing H{\sc i} clouds have been localised include
the restarted radio galaxies 3C293 \citep{Mahony2016} and 3C236 \citep{Schulz2018}, and the CSS source 4C12.50 \citep{Morganti2013}.
The CSS source 4C31.04 exhibits shocked molecular and ionized gas due to jet-driven feedback
\citep{Zovaro2019}. They suggest that dense clumps of gas inhibit the advancement of the brightest radio synchrotron emitting plasma, while the less dense material percolates through the porous ISM of the host galaxy. \citet{Nesvadba2010} find most of the molecular gas in the giant radio galaxy 3C326N to be warm, and suggest that a fraction of the mechanical energy of the jet is deposited in the ISM, providing energy for the outflow besides heating the ISM. Optical observations suggest a mass outflow rate of 30-40 M$_\odot$ yr$^{-1}$
with a terminal velocity of $\sim -1800$ km s$^{-1}$.
Atacama Large Millimeter Array (ALMA) CO(1-0) observations of the giant radio galaxy J2345$-$0449, associated with a spiral host, indicate highly disturbed gas motions possibly due to the jet kinetic energy \citep{Nesvadba2021}. \citet{Husemann2019} have observed the hyperluminous quasar 3C273 with VLT-MUSE optical 3D spectroscopy and ALMA, and find that both the ionized gas in the narrow-line region and the molecular gas are kinematically disturbed. They propose a scenario in which a hot gas cocoon associated with the emerging jet affects the gaseous components in a rotating disk (Fig.~\ref{f:3C273_Husemann}).
\begin{figure*}
\centering
\vbox{
\hbox{
\includegraphics[width=5.6cm]{DM_rhofinal0252_Fig13.pdf}
\includegraphics[width=5.6cm]{DM_rhofinal0391_Fig13.pdf}
\includegraphics[width=5.6cm]{DM_rhofinal0551_Fig13.pdf}
}
\hbox{
\includegraphics[width=5.6cm]{DM_tempfinal0252_Fig13.pdf}
\includegraphics[width=5.6cm]{DM_tempfinal0391_Fig13.pdf}
\includegraphics[width=5.6cm]{DM_tempfinal0551_Fig13.pdf}
}
\hbox{
\includegraphics[width=5.6cm]{DM_vrfinal0252_Fig13.pdf}
\includegraphics[width=5.6cm]{DM_vrfinal0391_Fig13.pdf}
\includegraphics[width=5.6cm]{DM_vrfinal0551_Fig13.pdf}
}
}
\caption{Evolution of density (in units of cm$^{-3}$), temperature (in K) and velocity in units of 100 km s$^{-1}$ in the simulation of propagation of a low-power jet with P$_{\rm jet} = 10^{44}$ ergs s$^{-1}$. The low-power jets remain confined within the ISM for a longer period of time compared with high-power jets, affecting a larger volume of the ISM \citep{Mukherjee2016,Mukherjee2017}.}
\label{f:mukherjee_sim}
\end{figure*}
High angular resolution observations at both optical and millimeter wavelengths have provided a wealth of information on jet-ISM interactions, and valuable inputs for comparison with the results of numerical simulations. The role of low-power jets in nearby AGN as an important source of feedback on sub-kpc scales has been highlighted in a number of recent studies. Seeing-limited optical integral field spectroscopic observations from the Multi Unit Spectroscopic Explorer (MUSE) at the Very Large Telescope have been used to study nearby Seyfert galaxies in a survey called Measuring Active Galactic Nuclei Under MUSE Microscope, or MAGNUM (\citealt{Venturi2021}, and references therein). The jets in the sample of galaxies studied by \citet{Venturi2021} are $<$1 kpc in size, have low power ($< 10^{44}$ ergs~s$^{-1}$) and are inclined at 45$^\circ$ to the galaxy disc. They find evidence of enhanced line widths (800 - 1000 km s$^{-1}$) extending ($>$1 kpc) in directions perpendicular to the jets and the AGN ionisation cones. They interpret this as being due to jet-ISM interactions, showing that these low-power jets are also capable of affecting the host galaxies. A similar result has been
seen in the `radio-quiet' quasar J1316+1753, which has low-power radio jets inclined to the galaxy disk plane \citep{Girdhar2022}. Combining MUSE and ALMA observations, \citet{Girdhar2022} report evidence of turbulent gas driven perpendicular to the jet axis and extending to $\sim$7.5 kpc on opposite sides. They also find evidence of increased stellar velocity dispersion along the jet axis and co-spatial with it, and evidence of both positive and negative feedback. While highly turbulent material appears to escape the galaxy, inhibiting star formation, the jets also appear to compress gas in the disk, forming new stars which contribute to the stellar bulge, which is closely aligned with the radio jet axis \citep{Girdhar2022}. One of the most striking examples of a massive molecular outflow is in the nearby low-luminosity compact radio galaxy B2 0258+35, where about 75 per cent of the central molecular gas is driven outwards by a jet in a radiatively inefficient AGN (Fig.~\ref{f:Murthy_CO_gas}; \citealt{Murthy2022}).
Outflow of molecular gas and turbulence injected into the ISM will affect star formation in the host galaxy.
For example, high-resolution observations of the nearby lenticular galaxy NGC1266 harbouring an AGN show that molecular gas is being driven out of the nuclear region at a rate of $\sim$110 M$_\odot$ yr$^{-1}$ \citep{Alatalo2015}. Although only a small fraction may escape the galaxy, the molecular gas that remains is very inefficient in forming stars, with star formation being suppressed by a factor of $\approx$50 compared with normal star-forming galaxies if all the gas is forming stars \citep{Alatalo2015}.
The star formation efficiency in the giant radio galaxy 3C326N appears to be 10 to 50 times lower than in normal star-forming galaxies \citep{Nesvadba2010}. Similarly, the star-formation rate surface densities for J2345$-$0449 appear 30 to 70 times lower than expected from the Kennicutt-Schmidt law for star-forming galaxies \citep{Nesvadba2021}.
\citet{Lanz2016} observed a sample of 22 radio galaxies which were selected due to the presence of warm molecular gas. They modelled the spectral energy distributions from the ultraviolet to the far infrared and found the star formation rate to be suppressed by a factor of about 3 to 6. In about 25 per cent of the sample the suppression was by a factor of more than 10.
They suggest that this is due to radio jets injecting turbulence into the interstellar medium via shocks. The observational results and trends discussed in this subsection are consistent with the results of numerical simulations of jets interacting with a clumpy ISM \citep[e.g.][]{Sutherland2007,Mukherjee2016,Mukherjee2017,Mukherjee2018a,Mukherjee2018b,Mandal2021}.
The low-power jets ($<10^{44}$ ergs s$^{-1}$) remain trapped in the ISM of the host galaxy for a much longer period of time compared with jets of higher power, thereby affecting a larger volume of the ISM (Fig.~\ref{f:mukherjee_sim}; \citealt{Mukherjee2016,Mukherjee2017}).
\citet{Kalfountzou2017} present far-infrared
observations with Herschel of 74 radio-loud quasars, 72 radio-quiet quasars and 27 radio
galaxies (RGs) over the redshift range $0.9<z<1.1$
and investigate the dependence of star formation rate
on AGN luminosity, radio loudness and orientation. They suggest that there is a jet power threshold where feedback switches from compressing gas and enhancing star formation to heating and ejecting gas and thereby suppressing star formation. Both observational and theoretical work on jet-cloud interactions and perhaps a deeper understanding of star formation itself will enhance our understanding of these aspects.
\subsection{Jet feedback and x-ray cavities}
\citet{Pedlar1990} observed the FRI radio galaxy NGC1275 in the Perseus cluster over a wide range of frequencies below about a GHz, and were among the first to highlight the importance of jet feedback
while considering models of cooling flows in clusters of galaxies \citep[e.g.][]{Fabian1981}. Over the years high-resolution x-ray observations have revealed giant x-ray cavities and shock fronts which are closely related to the radio emission in many clusters of galaxies, underlining the importance of radio jet feedback \citep[e.g.][]{McNamara2007,McNamara2012}. Multiple generations of these cavities are signs of episodic AGN activity \citep[e.g.][]{Vantyghem2014}. These cavities also provide a direct and reasonably reliable means of estimating energy injected into the atmospheres by jets in AGN \citep[e.g.][]{Hardcastle2020}.
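The cavity-based energy estimates referred to above conventionally take the enthalpy $E_{\rm cav} = 4pV$ for a bubble filled with relativistic plasma ($pV$ of work done on the surroundings plus $3pV$ of internal energy), with $p$ the surrounding ICM pressure and $V$ the cavity volume. A sketch with illustrative cluster-core numbers:

```python
import math

KPC_CM = 3.0857e21  # kiloparsec in centimetres

def cavity_enthalpy_erg(pressure_erg_cm3, radius_kpc):
    """Enthalpy 4pV of a spherical x-ray cavity filled with relativistic
    plasma (adiabatic index 4/3), the standard measure of the energy the
    jet has deposited in the cluster atmosphere."""
    volume = (4.0 / 3.0) * math.pi * (radius_kpc * KPC_CM) ** 3
    return 4.0 * pressure_erg_cm3 * volume

# Illustrative values: p ~ 1e-10 erg cm^-3, cavity radius ~ 10 kpc
e_cav = cavity_enthalpy_erg(1e-10, 10.0)  # of order 1e58-1e59 erg
```

Dividing such an enthalpy by the buoyant rise time of the cavity gives the time-averaged jet power, which is how the heating budget is compared with the cooling luminosity of the ICM.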
\section{Recurrent jet activity}
Radio galaxies have been found to show evidence of episodic or recurrent nuclear activity since the 1980s. For example, a radio jet south of the nucleus in the radio galaxy 3C338 has been suggested to be due to an earlier cycle of jet activity \citep{Burns1983}. Sharp discontinuities in the spectral index distributions of the lobes, where emission from the earlier cycle of activity has a significantly steeper spectral index, are signs of recurrent jet activity. Examples of such sources include 3C388 \citep{Roettiger1994,Brienza2020} and Her A \citep{Gizani2005}. Old electrons from an earlier cycle of activity could scatter low-energy ambient photons to high energies in the x-ray region of the spectrum via inverse-Compton scattering. \citet{Steenbrugge2008} have suggested an earlier cycle of activity in the archetypal FRII radio galaxy Cygnus A from x-ray observations. In the extreme case, an old radio galaxy may
be visible only at x-ray wavelengths due to inverse-Compton scattering of the ambient photons. One such example is the inverse-Compton ghost of the giant radio source HDF130 in the Hubble Deep Field, where low-frequency observations with the GMRT did not reveal any radio emission \citep{Mocz2011b}. LOFAR observations of radio galaxies at low frequencies
show a variety of signatures of recurrent jet activity
\citep{Jurlin2020,Shabala2020}. The most striking examples of episodic jet activity are the double-double radio galaxies, or DDRGs (Fig.~\ref{f:J1453_ddrg_Konar}), which have two pairs of radio lobes on opposite sides of the parent optical object \citep{Schoenmakers1999,Saikia2009,Kuzmicz2017}. In a couple of
cases three pairs of radio lobes indicating three cycles of jet activity have been seen \citep{Brocksopp2007,Hota2011}.
\begin{figure}[ht!]
\centering
\includegraphics[width=8.0cm]{J1453.png}
\caption{GMRT image of the double-double radio galaxy J1453+3308 at 330 MHz \citep{Konar2006}.}
\label{f:J1453_ddrg_Konar}
\end{figure}
\citet{Kuzmicz2017} compiled a sample of 74 extragalactic radio sources with evidence of recurrent jet activity, of which 67 are galaxies, 2 are quasars, and 5
are unidentified sources. They found the black hole masses of rejuvenated radio sources and a control sample of FRII sources to be similar. However, they found a difference in optical morphology, which they interpret to be due to merger events in the history of the host galaxies of restarted radio sources. From a rather small sample of sources, \citet{Saikia2009} and \citet{Chandola2010} suggested a higher incidence of H{\sc i} absorption towards the nuclear regions of rejuvenated radio sources. Any differences between rejuvenated radio sources and control samples with evidence of a single cycle of activity need further investigation using larger samples. The number of rejuvenated radio galaxies and quasars will increase with more sensitive observations, especially at low frequencies, as has been demonstrated by LOFAR observations of the HETDEX field and the Lockman Hole region \citep{Mahatma2019,Jurlin2020}.
The time scale of recurrent activity is likely to have a wide range. \citet{ODea2021} list seventeen CSS/PS sources with evidence of diffuse lobes of emission from an earlier cycle of activity. Several of these appear to have diffuse emission on only one side of the active nucleus, the emission on the opposite side possibly being below the detection threshold. \citet{Stanghellini2015} estimated these relics to be from about $10^7$ - $10^8$ yr ago. Spectral and dynamical age estimates as well as statistical studies suggest similar time scales for the jet activity \citep{Konar2006,Shulevski2012,Shabala2008,Best2005}, although there are suggestions of shorter time scales as well, as in the cases of 3C293 \citep{Joshi2011} and CTA21 \citep{Salter2010}. \citet{Reynolds1997} suggested that jets in CSS/PS sources may be intermittent on time scales of $\sim$10$^4$ - $10^5$ yr. The physical processes responsible for recurrent jet activity remain to be understood, and may also provide insights into the triggering of the powerful radio jets in radio-loud AGN.
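The spectral ages quoted above come from the synchrotron ageing relation commonly used in spectral-ageing studies (e.g. in the Murgia parametrization), $t_{\rm syn} \approx 1590\,\sqrt{B}/\left[(B^2 + B_{\rm IC}^2)\sqrt{\nu_{\rm br}(1+z)}\right]$ Myr, with $B$ and the CMB-equivalent field $B_{\rm IC} = 3.25(1+z)^2$ in $\mu$G and the break frequency $\nu_{\rm br}$ in GHz. A sketch with illustrative lobe parameters:

```python
import math

def synchrotron_age_myr(b_ug, nu_break_ghz, z):
    """Radiative age of a lobe from the standard spectral-ageing
    relation: t ~ 1590*sqrt(B) / ((B**2 + B_IC**2)*sqrt(nu_br*(1+z)))
    Myr, with B, B_IC in microgauss and nu_br in GHz; B_IC is the
    CMB-equivalent field 3.25*(1+z)**2."""
    b_ic = 3.25 * (1.0 + z) ** 2
    return 1590.0 * math.sqrt(b_ug) / ((b_ug ** 2 + b_ic ** 2)
                                       * math.sqrt(nu_break_ghz * (1.0 + z)))

# Illustrative: a 3 uG lobe with a break at 1 GHz at z ~ 0 gives an age
# of order 1e8 yr, comparable to the relic ages quoted in the text.
age = synchrotron_age_myr(3.0, 1.0, 0.0)
```

A higher break frequency implies a younger source, which is how multi-frequency lobe spectra translate into the age estimates above.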
\section{Concluding remarks}
Sensitive, high-resolution observations at different wavelengths across the electromagnetic spectrum and monitoring programs, along with theoretical modelling and numerical simulations over the last decade or so, have significantly enhanced our understanding of jets in AGN. However, many of the fundamental questions related to jet physics, such as jet launching and collimation,
jet composition, magnetic fields, particle acceleration
and constituents on different scales, remain largely unanswered. At radio frequencies, the advent of the Square Kilometre Array (SKA), with unprecedented sensitivity and resolution in both total intensity and polarization, along with SKA1-VLBI, is likely to have a huge impact on our understanding of jets in AGN (cf. \citealt{Laing2015,Agudo2015}).
Here we summarise a few aspects where we are likely to see significant advances over the next decade or so.
Although jets are seen over a wide range of luminosities and in different host galaxies, what determines the launching of jets of different powers? This question is also related to our understanding of the radio-loud/radio-quiet dichotomy, although the distribution of the radio-loudness parameter may not be as strongly bimodal as once thought, and there may be a smooth transition from the radio-quiet to the radio-loud regime (\citealt{Macfarlane2021} and references therein). An early study of the Palomar-Green quasar sample suggested that almost all quasars with SMBH mass M${_\bullet} > 10^9$ M$_\odot$ are radio loud, while those with M${_\bullet} < 3\times10^8$ M$_\odot$ tend to be radio quiet \citep{Laor2000}. But there are radio-quiet as well as radio-loud objects in the intermediate range $3\times10^8$ M$_\odot <$ M${_\bullet} < 10^9$ M$_\odot$, suggesting that mass alone may not be adequate to explain the radio-loud/radio-quiet dichotomy or the launching of luminous radio jets. The other parameter to consider is black hole spin \citep[e.g.][]{Chiaberge2011}. However, spin alone may also not be a sufficient condition for launching luminous radio jets, as rapidly spinning SMBHs have been seen in both radio-loud and radio-quiet objects, although it may be a necessary condition \citep[cf.][]{Reynolds2019}. Comparing the sources from the LOFAR Two-Metre Sky Survey (LoTSS) DR1 with the Sloan Digital Sky Survey (SDSS) DR7, \citet{Sabater2019} find AGN activity to show a strong dependence on both stellar and black hole masses, with massive galaxies above $10^{11}$ M$_\odot$ almost invariably exhibiting radio AGN activity. This is perhaps not surprising considering the Magorrian relationship between black hole and galactic bulge masses \citep{Magorrian1998}.
The fundamental plane for black holes in the x-ray band \citep{Merloni2003}, and also in the optical band \citep{SaikiaP2015}, illustrates the close relationship between radio luminosity, black hole mass and x-ray/optical line luminosity. This has recently been extended to incorporate the spin of the black hole for a sample of flat-spectrum radio quasars and BL Lac objects \citep{Chen2021}. Perhaps stellar and black hole mass and spin, and the accretion process coupled with the availability of fuel, all influence the launching of jets of different powers. On the theoretical front, magnetic fields appear to play an important role \citep{Blandford1977,Blandford1982}, and this has been explored using fully 3D GRMHD simulations \citep[e.g.][]{McKinney2012}.
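As a simple numerical illustration of the x-ray fundamental plane of \citet{Merloni2003}, the sketch below uses the coefficients as they are commonly quoted in the literature; treat the exact numerical values as indicative only, and the function name as ours:

```python
def fundamental_plane_logLR(log_LX, log_M):
    """Black-hole fundamental plane (Merloni et al. 2003), commonly quoted as
    log L_R = 0.60 log L_X + 0.78 log M_BH + 7.33 (cgs units, solar masses).
    Coefficients are indicative, not authoritative."""
    return 0.60 * log_LX + 0.78 * log_M + 7.33

# A 10^8 M_sun black hole with L_X = 10^44 erg/s lands near L_R ~ 10^40 erg/s.
print(fundamental_plane_logLR(44.0, 8.0))
```

The point of the relation is that radio luminosity is predictable, to within the scatter of the plane, from the accretion (x-ray) luminosity and the black hole mass alone.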
What is the composition of AGN jets, and how does it change with distance from the central engine? In the initial acceleration phase, jets are believed to be Poynting-flux dominated, later converting to particle-dominated plasma. The point of conversion is unclear and requires an understanding of the particle acceleration processes (see \citealt{Agudo2015} for a discussion). There is increasing evidence that jets consist of an inner spine and an outer layer. Is the constitution of the inner spine and the outer layer the same? Could the inner spine be made of an electron-positron plasma and the outer layer of an electron-proton plasma, as the jets propagate in the vicinity of the accretion disk and black hole? How does the constitution change with distance as jets propagate outwards in FRI and FRII sources? Polarization observations, especially of circular polarization, will provide important inputs towards understanding the constitution of jets.
Other important parameters related to jets are the velocity fields, the magnetic field structure, the jet power and the environment.
For the low-luminosity FRI sources, Laing \& Bridle (see Section 7.1) have constructed detailed models and shown that the velocity decreases with distance from the
central engine. \citet{Laing2014} find the magnetic field component to be predominantly longitudinal close to the
AGN and toroidal after recollimation.
Increased resolution and sensitivity will enable such studies to be extended to FRII sources \citep{Laing2015}. Detailed polarization studies will enable estimates of the Faraday depth and its variation along the jets, estimates of the thermal particle content, and tests of whether jets, if confined, are confined by magnetic fields or by thermal pressure. VLBI-scale observations suggest toroidal fields, but increased sensitivity and resolution will enable this to be explored over a range of length scales for large samples.
What is the source of high-energy emission from AGN jets?
Observations with the Chandra telescope demonstrated the ubiquity of AGN jets at x-ray wavelengths, with individual knots of emission being detected.
For FRI sources the x-ray emission from the jets appears to be due to synchrotron radiation, while for FRII jets
inverse-Compton radiation from jets moving at highly relativistic velocities, with Lorentz factors of 10 - 20, has been the standard explanation. An alternative explanation is that the high-energy emission arises from a second electron population, as has been suggested, for example, from studies of the quasar 3C273 and other sources \citep[e.g.][]{Meyer2014,Meyer2015}. Such multiwavelength studies, which identify the sources of high-energy emission and also the locations within a source responsible for the emission, are important for understanding the physical processes in the jets. In the context of high-energy emission from radio sources, it is relevant to note that there is a growing population of radio galaxies which are $\gamma$-ray sources. \citet{Bruni2022} have reported a new $\gamma$-ray emitting FRII radio galaxy, which they model as emission arising from the radio lobes due to inverse-Compton scattering of photons off the radiating electrons; they list another 8 such radio galaxies.
Is all AGN jet activity episodic? Drawing an analogy from microquasars, \citet{Nipoti2005} suggested that radio loudness may only be a function of epoch, and that there may be no essential difference between radio-loud and radio-quiet objects. More recently, \citet{Moravec2022} have explored whether different kinds of AGN reflect x-ray binary spectral states, and find that ``radio-loud AGN occupy distinct areas of the hardness-intensity diagram depending on the morphology and excitation class, showing strong similarities to x-ray binaries''. Although these are interesting approaches, a deeper understanding of how a powerful radio jet is launched may help clarify whether all AGN are episodic. On the observational side, LOFAR observations have revealed many more sources with signs of recurrent activity. More sensitive observations with the SKA are also likely to reveal how common the evidence for earlier cycles of activity is. A deep search with the GMRT showed that such sources are relatively rare \citep{Sirothia2009a}, but deeper observations are required. Besides radio observations, deep x-ray surveys may also reveal many double-lobed sources which are too weak to be seen at radio wavelengths but may be visible at x-ray wavelengths due to inverse-Compton scattering of ambient photons by the low-energy electrons (see \citealt{Mocz2011b} for an example). An understanding of the frequency of recurrent AGN jet activity and of the duty cycles is vital for understanding a number of aspects related to AGN feedback, including the evolution of galaxies.
In the last few years, new telescopes or upgraded versions of earlier ones have yielded many interesting results, some of which have been highlighted in this short review. JWST, which has been launched, and upcoming telescopes such as the SKA, TMT and ATHENA, to name a few, should yield a wealth of information to help answer many of the outstanding questions related to AGN jets.
\section*{Acknowledgments}
It is a pleasure to thank Alan Bridle and Dipanjan Mukherjee, who was the reviewer, Pratik Dabhade and Mousumi Mahato for their detailed and helpful comments on the manuscript. I am extremely grateful to Bia Boccardi, Alan Bridle, Jim Condon, Bill Cotton, Bernd Husemann, Robert Laing, Beatriz Mingo, Dipanjan Mukherjee, Suma Murthy, Hiroki Okino, Alice Pasetto and Peter Thomasson, for kindly providing the figures and also for their permission to reproduce the figures. Thanks also to the Editor-in-chief, Astronomy and Astrophysics, and the American Astronomical Society for permission to reproduce figures. I am also very grateful to Pratik Dabhade for his generous help in getting the figures and the references organized for the JoAA format. I also wish to express my gratitude to the organizers of the ARIES conference on jets titled Astrophysical jets and observational facilities: National perspective, and the editors of the proceedings, Shashi Pandey, Alok Gupta and Sachindra Naik, for asking me to write an extended version based on my talk and, above all, for waiting patiently for it in spite of a long delay on my part.
\begin{theunbibliography}{}
\vspace{-1.5em}
\input{jaasample.bbl}
\end{theunbibliography}
\end{document}
|
{
"timestamp": "2022-06-14T02:19:29",
"yymm": "2206",
"arxiv_id": "2206.05803",
"language": "en",
"url": "https://arxiv.org/abs/2206.05803"
}
|
\section{Introduction}
The initial value problem for ordinary differential equations (ODEs) is a basic and primary problem in any dynamic or control system (see for instance \cite{Chicone2006}). Among the different possible trajectories that an ODE can exhibit, asymptotic stability is always a matter of research interest in the control community (see for instance \cite{Bacciotti2005}).
To date, even with many available techniques, establishing stability or asymptotic stability, or designing controllers, for non-linear systems is not an easy task (see for instance \cite{Sastry1999}). The case of asymptotic stability poses a particularly challenging problem: even the well-studied Lyapunov method provides only a sufficient condition, leaving the construction of a suitable Lyapunov function to the user.
In the case of control systems (systems modelled with ODEs but with the inputs as free parameters \cite{Sastry1999}), the problem becomes more involved, since existence theorems are needed to determine beforehand whether asymptotic stabilization is possible (see for instance \cite{Brockett1983} and \cite{Garcia2018}).
In \cite{Garcia2020}, a smooth controller rendering the origin of a unicycle robot asymptotically stable was derived. This would seem to contradict Brockett's condition, which for this kinematic system proves the impossibility of such a controller.
However, a closer inspection of the designed controller reveals that modularity in one state-space variable, $\theta(t)$, was invoked to map the set of initial conditions. In this case, out of the complete set $\Re^{3}$, only the initial conditions with $\theta(0)=0$ are remapped.
In this paper, motivated by the results in \cite{Garcia2020}, a generalized initial value problem is defined in order to present a new perspective, but also to pose an open problem: to develop theorems and tools for determining the possibility of asymptotically stabilizing controllers.
This paper is organized as follows: in Section \ref{GIVP} the mathematical set-up and the definition of the generalized initial value problem are presented along with an open problem, whereas Section \ref{Conclusions} presents some conclusions and future work.
\section{The initial conditions mapping: different trajectories}\label{GIVP}
The problem studied in this paper considers a generalization of the well-known initial value problem (see for instance \cite{Chicone2006}):
\begin{definition}[Generalized initial value problem]\label{GIVP: Definition}
Given an ODE $\dot{x}=f(x),\quad x\in\Omega\subset\Re^{n}$, and an initial condition $x(0)=\phi(x_{0}),\quad x_{0}\in\Omega$, with $\phi:\Omega\rightarrow \Omega^{*}\subset\Omega$.
Finding trajectories (see \cite{Chicone2006} for details on the definition of trajectories) satisfying this definition is called the \textit{generalized initial value problem}.
\end{definition}
where $\dot{x}$ denotes the time derivative. A motivation for such a generalization comes from a well-known kinematic model for mobile robots (see for instance \cite{Garcia2012} and Figure \ref{Robot-Model}):
\begin{equation}\label{Kinematic model}
\begin{bmatrix}
\dot{x}\\
\dot{y}\\
\dot{\theta}
\end{bmatrix}=f(x,u)=
%
\begin{bmatrix}
\cos(\theta) & 0\\
\sin(\theta) & 0\\
0 & 1
\end{bmatrix} \cdot
%
\begin{bmatrix}
u_{1}\\
u_{2}
\end{bmatrix}
\end{equation}
where $\{u_{1},u_{2}\}$ are the control inputs or control variables. It is well known that asymptotic stability of the origin cannot be achieved by means of smooth controllers (see for instance \cite{Brockett1983} and \cite{Garcia2018}).
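As a quick numerical sketch (not part of the original development; the explicit-Euler integrator, step size and open-loop inputs are illustrative assumptions), the kinematic model above can be stepped as follows:

```python
import math

def unicycle_step(state, u, dt):
    """One explicit-Euler step of the unicycle kinematics above."""
    x, y, theta = state
    u1, u2 = u
    return (x + dt * math.cos(theta) * u1,
            y + dt * math.sin(theta) * u1,
            theta + dt * u2)

# Illustrative open-loop inputs: drive straight for 1 s at unit speed.
state = (0.0, 0.0, 0.0)
dt = 0.01
for _ in range(100):
    state = unicycle_step(state, (1.0, 0.0), dt)
print(state)  # x close to 1, y = 0, theta = 0
```

With $\theta$ held at zero the robot can only translate along the $x$-axis, which is the geometric content of the constant-direction condition discussed next.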
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Unicycle_robot}
\caption{Unicycle-like robot kinematic model} \label{Robot-Model}
\end{figure}
For the sake of completeness, the conditions to drive this model to the origin using smooth controllers can be readily written as follows (see \cite{Garcia2018}):
\begin{equation*}
f(V_{1},u(V_{1}))=\alpha \cdot f(V_{2},u(V_{2})), \quad \alpha \neq 0, \quad \left\{V_{1},V_{2}\right\} \in
\delta_{0}
\end{equation*}
where $\delta_{0}$ is a neighbourhood of the origin. This condition is equivalent to looking for regions of constant direction:
\begin{equation*}
\frac{f(x,u(x))}{\Vert f(x,u(x))\Vert}=\text{constant},\quad \forall x\in
\delta_{0}
\end{equation*}
Checking this condition in equation (\ref{Kinematic model}):
\begin{equation*}
\begin{cases}
\frac{\cos(\theta)}{\sin(\theta)}=\rho_{1}\\
\frac{\cos(\theta)\cdot u_{1}}{u_{2}}=\rho_{2}\\
\frac{\sin(\theta)\cdot u_{1}}{u_{2}}=\rho_{3}
\end{cases}
\end{equation*}
where $\{\rho_{1},\rho_{2},\rho_{3}\}$ are constants. Clearly, the condition $\frac{\cos(\theta)}{\sin(\theta)}=\rho_{1}$ implies motion along a straight line, excluding all other trajectories from being asymptotically stable.
However, the controller defined in \cite{Garcia2020} renders the model (\ref{Kinematic model}) globally asymptotically stable:
\begin{equation}\label{Controller: Bessel}
\begin{bmatrix}
u_{1}\\
u_{2}
\end{bmatrix}=
%
\begin{bmatrix}
\sum_{i=1}^{N} (2 \cdot i+1) \cdot a \cdot C_{i} \cdot J_{i}(\theta)
\cdot \theta^{i+1}\\
a \cdot \theta, \quad a<0
\end{bmatrix}
\end{equation}
where $N \in \mathbf{N}$ is an arbitrary number and $\{C_{i}\}$, with $a<0$, are constants. Moreover, an important initial conditions mapping is needed:
\begin{equation}\label{Initial conditions}
\theta(0)=
\begin{cases}
\theta_{0}, & \theta_{0}\neq 0,\quad \theta_{0}\in \Re\\
2 \cdot \pi, & \theta_{0}=0
\end{cases}
\end{equation}
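In code, this initial-condition mapping is a simple piecewise function (a sketch; the function name is ours):

```python
import math

def map_theta0(theta0):
    """Initial-condition mapping: theta0 -> theta(0).

    Only theta0 = 0 is remapped (to 2*pi); every other initial
    heading is left unchanged.
    """
    return 2.0 * math.pi if theta0 == 0.0 else theta0

assert map_theta0(0.0) == 2.0 * math.pi
assert map_theta0(1.3) == 1.3
```

Note that the map acts only on the initial condition, never on the running trajectory.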
Notice that without this initial conditions mapping, the system (\ref{Kinematic model}) along with controller (\ref{Controller: Bessel}) is not asymptotically stable:
\begin{equation*}
\begin{bmatrix}
\dot{x}\\
\dot{y}\\
\dot{\theta}
\end{bmatrix}=
%
\begin{bmatrix}
\cos(\theta) & 0\\
\sin(\theta) & 0\\
0 & 1
\end{bmatrix} \cdot
%
\begin{bmatrix}
\sum_{i=1}^{N} (2 \cdot i+1) \cdot a \cdot C_{i} \cdot J_{i}(\theta)
\cdot \theta^{i+1}\\
a \cdot \theta
\end{bmatrix}
\end{equation*}
Considering the initial condition $\theta(0)=0$:
\begin{equation*}
\dot{\theta}=a \cdot \theta,\quad \theta(0)=0 \Rightarrow \theta(t)=0 \quad \forall t
\in\Re^{+}
\end{equation*}
Clearly, the trajectory $\{x(t),y(t)\}$ then does not converge asymptotically to the origin:
\begin{eqnarray*}
\begin{bmatrix}
\dot{x}\\
\dot{y}
\end{bmatrix}=
%
\begin{bmatrix}
\cos(\theta)\\
\sin(\theta)
\end{bmatrix} \cdot
%
\sum_{i=1}^{N} (2 \cdot i+1) \cdot a \cdot C_{i} \cdot J_{i}(\theta)
\cdot \theta^{i+1}\\
%
\theta(t)=0
\end{eqnarray*}
That is:
\begin{equation*}
\begin{bmatrix}
x-x(0)\\
y-y(0)
\end{bmatrix}=
%
\begin{bmatrix}
1\\
0
\end{bmatrix} \cdot
%
\sum_{i=1}^{N} (2 \cdot i+1) \cdot a \cdot C_{i} \cdot J_{i}(0)
\cdot 0^{i+1} \cdot t
\end{equation*}
Finally:
\begin{equation*}
\begin{bmatrix}
x-x(0)\\
y-y(0)
\end{bmatrix}=
%
\begin{bmatrix}
0\\
0
\end{bmatrix}
\end{equation*}
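This dead state can also be checked numerically: every term of $u_{1}$ in the controller carries a factor $J_{i}(\theta)\cdot\theta^{i+1}$, which vanishes at $\theta=0$, whereas the remapped value $\theta(0)=2\pi$ produces a non-zero input. In the sketch below, the constants $C_{i}$, the value of $a$, the truncation $N=3$ and the series evaluation of the Bessel functions are illustrative assumptions:

```python
import math

def bessel_j(n, x, terms=30):
    """Bessel function of the first kind J_n(x), via its power series."""
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + n))
               * (x / 2.0) ** (2 * k + n) for k in range(terms))

def u1(theta, a=-1.0, C=(1.0, 0.5, 0.25)):
    """First component of the controller, truncated at N = len(C)."""
    return sum((2 * i + 1) * a * C[i - 1] * bessel_j(i, theta) * theta ** (i + 1)
               for i in range(1, len(C) + 1))

assert u1(0.0) == 0.0            # theta = 0: no translational input at all
assert abs(u1(2 * math.pi)) > 0  # the remapped initial condition is active
```

The check makes explicit why the remapping of $\theta(0)=0$ to $2\pi$ is essential for the controller to act on the translational states.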
These conclusions show that mapping the set of initial conditions (not the entire trajectory domain) completely changes the behaviour of the orbits.
It is worth noticing that the general mapping of Definition (\ref{GIVP: Definition}) can, for this case, be written as follows:
\begin{equation*}
\theta(0)=
\begin{cases}
\theta_{0}, & \theta_{0}\neq 0,\quad \theta_{0}\in \Re\\
2 \cdot \pi, & \theta_{0}=0
\end{cases} \Leftrightarrow \theta(0)=\theta_{0}+2\cdot \pi \cdot
\beta(\theta_{0})
\end{equation*}
where the function $\beta(\theta_{0})$ is depicted in Figure \ref{beta}; it equals one at $\theta_{0}=0$ and satisfies $\beta \longrightarrow 0$ elsewhere.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Beta.pdf}
\caption{Function $\beta(\theta_{0})$} \label{beta}
\end{figure}
It turns out that the classical results, theorems and methods for the initial value problem, where $\phi$ is the identity function, cannot be applied to the generalized Definition \ref{GIVP: Definition}:
\begin{open problem}
Given a control system $\dot{x}=f(x,u)$, $x\in\Re^{n}$, with control inputs $u\in\Re^{m}$, $m \leq n$, derive conditions for asymptotic stability in the case of the generalized initial value problem.
\end{open problem}
\textbf{Note:} For these generalized problems, it is necessary to identify the subset of initial conditions that can be mapped onto equivalent values in another subset of the ODE's domain of definition. In other words, the possibility of existence and the physical meaning of the mapping function $\phi$ in (\ref{GIVP: Definition}) must be established beforehand.
\section{Conclusions}\label{Conclusions}
In this paper, to the authors' knowledge, a new formulation of the classical initial value problem for ODEs is presented as a generalized initial value problem.
The motivation for this generalization comes from the smooth controller found in \cite{Garcia2020} for a unicycle robot model, which would seem to violate the well-known Brockett condition.
The key to this result lies in the initial-value mapping, which completely changes the behaviour of the trajectories and allows asymptotic stability even with smooth controllers.
This new formulation opens the road to reconsidering smooth controllers in cases where Brockett's condition forbids such designs but, at the same time, creates the need for new theorems and methods to establish the existence of smooth controllers for asymptotic stability in the case of the generalized initial value problem.
As future work, more complex ODE models will be analysed using these ideas in order to provide more examples where initial-value mapping is useful in providing asymptotic stability.
\section{Acknowledgments}
The authors would like to acknowledge Universidad Tecnol\'{o}gica Nacional, Facultad Regional Bah\'{i}a Blanca (UTN-FRBB), and Comisi\'{o}n de Investigaciones Cient\'{i}ficas (CIC).
|
{
"timestamp": "2022-06-14T02:18:34",
"yymm": "2206",
"arxiv_id": "2206.05769",
"language": "en",
"url": "https://arxiv.org/abs/2206.05769"
}
|
\section{S1. Two-temperature model}
The dynamics of the electron temperature $T_{\rm{el}}$ and the phonon temperature $T_{\rm{ph}}$ can be described via the two-temperature model (TTM) \cite{Kaganov1957,Chen2006},
\begin{align}
C_{\rm{el}} \frac{\partial T_{\rm{el}}}{\partial t} &= -g_{\rm{ep}}\left( T_{\rm{el}} - T_{\rm{ph}} \right) + P_{l}(t) \\
C_{\rm{ph}} \frac{\partial T_{\rm{ph}}}{\partial t} &= +g_{\rm{ep}}\left( T_{\rm{el}} - T_{\rm{ph}} \right),
\label{eq:2TM}
\end{align}
where $g_{\rm{ep}} = 6 \times 10^{17}$ J/sKm$^3$ is the electron-phonon coupling constant, and $C_{\rm{ph}}= 3 \times 10^6$ J/Km$^3$ and $C_{\rm{el}} = \gamma_e T_{\rm{el}}$ ($\gamma_e = 700$ J/K$^2$m$^3$) represent the respective specific heats of the electron and phonon systems. Although we use standard values for metals, these values are material-dependent. $P_l(t)$ is Gaussian-shaped and represents the energy absorbed by the electron system from the laser.
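A minimal explicit-Euler integration of the TTM, using the parameter values quoted above; the Gaussian pulse amplitude, width and arrival time, as well as the step size, are illustrative assumptions:

```python
import math

# Parameters from the text (SI units)
g_ep    = 6e17   # J / (s K m^3), electron-phonon coupling
C_ph    = 3e6    # J / (K m^3), phonon specific heat
gamma_e = 700.0  # J / (K^2 m^3), so C_el = gamma_e * T_el

def laser(t, t0=1e-12, sigma=50e-15, P0=1e21):
    """Gaussian absorbed-power density (illustrative pulse parameters)."""
    return P0 * math.exp(-0.5 * ((t - t0) / sigma) ** 2)

def ttm(T0=300.0, dt=1e-16, n_steps=40000):
    """Integrate the coupled TTM equations for n_steps * dt seconds."""
    T_el = T_ph = T0
    for i in range(n_steps):
        t = i * dt
        dT_el = (-g_ep * (T_el - T_ph) + laser(t)) / (gamma_e * T_el)
        dT_ph = (+g_ep * (T_el - T_ph)) / C_ph
        T_el += dt * dT_el
        T_ph += dt * dT_ph
    return T_el, T_ph

T_el, T_ph = ttm()
print(T_el, T_ph)  # electrons heat first, then equilibrate with the lattice
```

Because $C_{\rm{el}} \ll C_{\rm{ph}}$, the electrons reach a high transient temperature during the pulse and then relax towards the lattice on a sub-picosecond scale.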
\section{S2. Rescaling of the exchange constant for quantitative comparison between MFA and ASD simulations}
In the main text, our analytical model for the magnetic order dynamics is based on the mean-field approximation (MFA). The equilibrium magnetization as a function of temperature calculated using the MFA differs slightly from the ASD simulations. Fig.~\ref{fig:EquilibriumMag} shows the MFA results as a blue dashed line and the ASD simulations as red points for a sc lattice using $J= 3.450 \times 10^{-21}$ J.
For the MFA case, we have rescaled the exchange constant, $J_{\rm{mfa}}=0.73 J_{\rm{asd}}$, to obtain $T_N^{\rm{MFA}}=T_N^{\rm{ASD}}$. We have estimated the ASD critical temperature as the temperature at which the magnetic specific heat diverges.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\columnwidth]{FigS1.pdf}
\caption{Equilibrium magnetization of a sc-lattice as a function of temperature from ASD simulations (red dots), mean field approximation (blue dashed line) and from the MFA model including a temperature dependent rescaled Heisenberg exchange constant $J(T)$ (Eq. \eqref{eq:S2}), (red line).}
\label{fig:EquilibriumMag}
\end{figure}
The equilibrium magnetization as a function of temperature using the MFA starts to deviate from the ASD simulations in the intermediate-to-high temperature region, $T_N/2<T<T_N$.
In order to quantitatively compare our model to ASD simulations, we resolve this discrepancy by introducing a temperature-dependent Heisenberg exchange modulation $J(T)=J_0 +J^\prime(T)$, where $J_0$ denotes the original MFA Heisenberg exchange constant, $J_{\rm{mfa}}=0.73 J_{\rm{asd}}$, and $J^\prime(T)>0$ is a temperature-dependent modulation that needs to be determined. We determine it by enforcing equality between the equilibrium magnetization calculated through ASD, $m_e=(1-T/T_N)^{1/3}$ (exponent $1/3$ for a sc lattice), and that of the MFA, $m_e= L (\beta J(T) m_e)$. Thus, the temperature-dependent Heisenberg exchange $J(T)$ can be calculated from
\begin{equation}
\left(1-T/T_{\rm{c}} \right)^{1/3}=L\left(\frac{(J_0 +J^\prime(T))m}{k_\text{B}T}\right)
\label{eq:S1}
\end{equation}
which can be solved as
\begin{equation}
J^\prime(T)= \frac{1}{\beta m} L^{-1}(\left(1-T/T_{\rm{c}} \right)^{1/3})-J_0.
\label{eq:S2}
\end{equation}
$L^{-1}$ denotes the inverse Langevin function, for which no closed-form expression is known. However, there have been numerous attempts at finding simple and accurate approximations~\cite{Jedynak2015,Nguessong2014}. In this work we have used the equation proposed by Nguessong et al.~\cite{Nguessong2014} to approximate the inverse Langevin function numerically.\\
We note that by using Eq.~\eqref{eq:S2}, $J(T)$ becomes independent of the numerical value of $J_0$ and is instead calculated directly from the magnetization curve $m(T)$ via the inverse Langevin function. For a sc lattice, $m_e(T)=(1-T/T_c)^{1/3}$ agrees well with the atomistic results. However, for other lattices (fcc, bcc or 2D), a different analytical expression for $m_e(T)$ is needed.
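Equation (S2) can be sketched numerically; here the inverse Langevin function is obtained by simple bisection rather than by the Nguessong et al. approximation used in the paper, and the function names are ours:

```python
import math

def langevin(x):
    """L(x) = coth(x) - 1/x, with the small-x limit handled explicitly."""
    if abs(x) < 1e-6:
        return x / 3.0  # leading Taylor term of L(x)
    return 1.0 / math.tanh(x) - 1.0 / x

def inv_langevin(y):
    """Invert L on (0, 1) by bisection (instead of a closed-form fit)."""
    lo, hi = 1e-12, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if langevin(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def J_of_T(T, T_c, k_B=1.380649e-23):
    """Rescaled exchange J(T) solving m_e = L(J(T) m_e / (k_B T)),
    with m_e = (1 - T/T_c)^(1/3) for the sc lattice."""
    m = (1.0 - T / T_c) ** (1.0 / 3.0)
    return k_B * T * inv_langevin(m) / m

# Round trip: L(L^{-1}(y)) recovers y.
assert abs(langevin(inv_langevin(0.5)) - 0.5) < 1e-9
```

Consistent with the remark above, $J(T)$ in this form depends only on the magnetization curve $m_e(T)$ and not on $J_0$.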
\clearpage
\section{S3. Breakdown of the MFA model for high fluence laser excitation}
As discussed in the main text, our model is based on the MFA. This means that better agreement between ASD and MFA is expected when the microscopic spin configurations remain close to the MFA assumption, namely that each atomic spin sees the same interactions from its neighbouring spins.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{FigS3.pdf}
\caption{
ASD simulations of AFM magnetic order dynamics for different laser powers ($\lambda=0.01$) (dots) in comparison to our analytical model (Eq.~\eqref{eq:neel-dynamics}) (lines). Higher laser powers yield larger demagnetization, and the underlying MFA assumptions of Eq.~\eqref{eq:neel-dynamics} stop being a valid approximation. On the right, different states of the spin system are shown shortly after the excitation with a laser pulse.}
\label{fig:Domain}
\end{figure}
When magnetic domains are nucleated, our MFA macrospin model no longer describes the spin state correctly. Figure~\ref{fig:Domain} shows the magnetic order dynamics for a range of laser fluences, where symbols correspond to ASD simulations and lines to the macrospin model. For higher fluences the agreement between the two diminishes.
The right side shows snapshots of the microscopic spin configuration at different time delays, corresponding to the time range where maximum demagnetization is achieved. When the laser fluence is only 73\% of the maximum fluence simulated, the microscopic spin configuration is homogeneous; in that case, the agreement between theory and simulations is very good. As the laser fluence increases, magnetic domains start to nucleate, and theory and simulations start to deviate. For the maximum laser fluence that we simulate (100\%), large magnetic domains are nucleated and the MFA breaks down; the theory is not able to describe this situation. For such cases, a micromagnetic model should be developed.
\end{document}
|
{
"timestamp": "2022-06-14T02:18:55",
"yymm": "2206",
"arxiv_id": "2206.05783",
"language": "en",
"url": "https://arxiv.org/abs/2206.05783"
}
|
"\\section{Introduction}\n\n A vivid interest has recently arisen over the issue of serverless compu(...TRUNCATED)
| {"timestamp":"2022-06-14T02:18:56","yymm":"2206","arxiv_id":"2206.05786","language":"en","url":"http(...TRUNCATED)
|
"\\section{Introduction}\n\\label{introduction-section}\nThe connection between macroscopic observab(...TRUNCATED)
| {"timestamp":"2022-06-14T02:19:08","yymm":"2206","arxiv_id":"2206.05793","language":"en","url":"http(...TRUNCATED)
|
"\\section{Introduction}\n\nIn this paper, all graphs are finite and simple, which means no parallel(...TRUNCATED)
| {"timestamp":"2022-06-14T02:18:50","yymm":"2206","arxiv_id":"2206.05780","language":"en","url":"http(...TRUNCATED)
|
"\\section{Introduction}\n\n\n\nResearch on fairness in machine learning has been very active in rec(...TRUNCATED)
| {"timestamp":"2022-06-14T02:20:14","yymm":"2206","arxiv_id":"2206.05828","language":"en","url":"http(...TRUNCATED)
|
"\\section{Introduction}\r\nA complex Hilbert space $H$ is called a Hilbert module over a complex u(...TRUNCATED)
| {"timestamp":"2022-06-14T02:17:39","yymm":"2206","arxiv_id":"2206.05739","language":"en","url":"http(...TRUNCATED)
|
End of preview.