\section{Introduction}
Let $R$ be a ring with an identity. An element $a\in R$ has a Drazin inverse if there exists $x\in R$ such that $x=xax, ax=xa, a^n=a^{n+1}x$ for
some $n\in {\Bbb N}.$ Such an $x$, if it exists, is unique, and we denote it by $a^D$. The smallest such $n$ is called the Drazin index of $a$.
If $a$ has Drazin index $1$, then $a$ is said to have a group inverse, which we denote by $a^{\#}$. As is well known, a square complex matrix $A$ has group inverse if and only if $rank(A)=rank(A^2)$. Group invertibility in a ring is an attractive topic. It has interesting applications, e.g., relating resistance distances to the bipartiteness of graphs (see~\cite{SW}). Many authors have studied group invertibility from many different points of view, e.g.,~\cite{B2,BZ,C1,CE,Ca,MD2,M,P,ZM,Z2}. It was also extensively investigated under the notion of ``strong regularity'' in ring theory (see~\cite{CH}).
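The rank criterion for complex matrices is easy to check mechanically. The following pure-Python sketch (our own illustration, not part of the paper; the helper names `matmul` and `rank` are ours) verifies the three group inverse axioms for a diagonal example and shows a nilpotent matrix failing the rank test:

```python
from fractions import Fraction as F

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def rank(M):
    """Rank over the rationals by Gaussian elimination."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [u - f * v for u, v in zip(M[i], M[r])]
        r += 1
    return r

# A = diag(2, 0) satisfies rank(A) == rank(A^2), and X = diag(1/2, 0)
# fulfils the group inverse axioms: XAX = X, AX = XA, A^2 X = A.
A = [[F(2), F(0)], [F(0), F(0)]]
X = [[F(1, 2), F(0)], [F(0), F(0)]]
assert matmul(matmul(X, A), X) == X
assert matmul(A, X) == matmul(X, A)
assert matmul(matmul(A, A), X) == A

# The nilpotent matrix N = [[0,1],[0,0]] has rank 1 but rank(N^2) = 0,
# so it admits no group inverse.
N = [[F(0), F(1)], [F(0), F(0)]]
assert rank(N) == 1 and rank(matmul(N, N)) == 0
```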
In~\cite[Theorem 2.1]{B}, Benitez et al. studied the group inverse of the sum of two group invertible elements $a$ and $b$ in an algebra
under the condition $ab=0$. The group inverse of the sum $P+Q$ of two group invertible complex matrices $P$ and $Q$ was investigated in~\cite[Theorem 2.3]{L} under the condition $PQQ^{\#}=QPP^{\#}$. Zhou et al. extended the preceding result to a Dedekind-finite ring in which $2$ is invertible (see~\cite[Theorem 3.1]{Z2}).
These results motivate us to explore the group inverse of the sum in a general setting.
Let $a\in R^D$. The element $a^{\pi}=1-aa^{D}$ is called the spectral idempotent of $a$. In Section 2, we present new additive results for the group invertibility
by means of spectral idempotents in a ring. Let $a,b,ba^{\pi},ab^{\pi}\in R^{\#}$, $aba^{\pi}=0$ and $bab^{\pi}=0$. We prove that $a+b\in R^{\#}$ if and only if $aa^{\#}b+bb^{\#}a\in R^{\#}, a^{\pi}b^{\pi}a=0$ and $b^{\pi}a^{\pi}b=0$. The previously known additive properties of group invertibility are thereby extended to a wider setting.
In Section 3, we further investigate the additive properties of the group invertibility under certain commutative-like conditions. Let $a,b\in R^{\#}$. If $a^2b=aba, b^2a=bab$, we prove that $a+b\in R^{\#}$ if and only if $1+a^{\#}b\in R^{\#}$ and $b^{\pi}a^{\pi}b=0$.
Let $X$ and $Y$ be Banach spaces, let $\mathcal{L}(X,Y)$ denote the set of all bounded linear operators from $X$ to $Y$ and $\mathcal{L}(X)$ denote the set of all bounded linear operators from $X$ to itself. The aim of the final section is to explore the group invertibility of a block operator matrix $$M=\left(
\begin{array}{cc}
A&B\\
C&D
\end{array}
\right)~~~~~~~~~~(*)$$ where $A\in \mathcal{L}(X),B\in \mathcal{L}(X,Y),C\in \mathcal{L}(Y,X)$ and $D\in \mathcal{L}(Y)$. Here, $M$ is a bounded linear operator on $X\oplus Y$. This problem is quite complicated and was studied by many authors (see~\cite{Ca,M,P,ZM}). In~\cite{B}, Benitez studied the group inverse of the $2\times 2$ block operator matrix $M$ under certain conditions. As applications of our results, we provide many new conditions under which $M$ has group inverse. These extend~\cite[Theorems 3.4--3.7]{B} to the general setting.
Throughout the paper, all rings are associative with an identity. Let $p\in R$ be an idempotent, and let $x\in R$. Then we write $x=pxp+px(1-p)+(1-p)xp+(1-p)x(1-p),$
and induce a Peirce representation given by the matrix
$$x=\left(\begin{array}{cc}
pxp&px(1-p)\\
(1-p)xp&(1-p)x(1-p)
\end{array}
\right)_p.$$
\section{additive properties}
The purpose of this section is to establish new additive results for group inverses in a ring. We begin with
\begin{lem} Let $p\in R$ be an idempotent and $x=\left(
\begin{array}{cc}
a&0\\
c&d
\end{array}
\right)_p.$ If $a,d\in R^{\#}$, then $x\in R^{\#}$ if and only if $d^{\pi}ca^{\pi}=0.$ In this case, $x^{\#}=\left(
\begin{array}{cc}
a^{\#}&0\\
u&d^{\#}
\end{array}
\right)_p,$ where $$u=d^{\pi}c(a^{\#})^2+(d^{\#})^2ca^{\pi}-d^{\#}ca^{\#}.$$\end{lem}
\begin{proof} We obtain the result as in the proof of~\cite[Theorem 2.1]{MD}.\end{proof}
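Lemma 2.1 can be sanity-checked on a concrete $2\times 2$ instance (our own illustrative check, not part of the paper): take $p=\mathrm{diag}(1,0)$, so the blocks are the scalars $a=2$, $c=3$, $d=0$; then $d^{\#}=0$, $d^{\pi}=1$, $a^{\pi}=0$, the condition $d^{\pi}ca^{\pi}=0$ holds, and the lemma's formula gives $u=d^{\pi}c(a^{\#})^2=3/4$.

```python
from fractions import Fraction as F

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# With p = diag(1,0) the blocks are scalars: a = 2, c = 3, d = 0.
# Then a# = 1/2, a_pi = 0, d# = 0, d_pi = 1, so d_pi*c*a_pi = 0 and
# the formula u = d_pi*c*(a#)^2 + (d#)^2*c*a_pi - d#*c*a# yields 3/4.
x  = [[F(2), F(0)], [F(3), F(0)]]
xg = [[F(1, 2), F(0)], [F(3, 4), F(0)]]  # claimed group inverse

# Verify the three group inverse axioms for x and xg.
assert matmul(matmul(xg, x), xg) == xg    # x# x x# = x#
assert matmul(x, xg) == matmul(xg, x)     # x x#  = x# x
assert matmul(matmul(x, x), xg) == x      # x^2 x# = x
```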
\begin{lem} Let $b\in R^{\#}$ and $p\in R$ be an idempotent. If $pbp^{\pi}=0$ and $bp^{\pi}\in R^{\#}$, then $(bp^{\pi})^{\#}=b^{\#}p^{\pi}$ and $(pb)^{\#}=pb^{\#}$.\end{lem}
\begin{proof} Since $(1-p)bp=0$, we have $b=\left(
\begin{array}{cc}
pb&0\\
p^{\pi}bp&bp^{\pi}
\end{array}
\right)_p.$ Since $b,bp^{\pi}\in R^{\#}$, it follows by Lemma 2.1 that $pb\in R^{\#}$.
In this case, we have $$b^{\#}=\left(
\begin{array}{cc}
(pb)^{\#}&0\\
*&(bp^{\pi})^{\#}
\end{array}
\right)_p.$$ Therefore
$(pb)^{\#}=pb^{\#}$ and $(bp^{\pi})^{\#}=b^{\#}p^{\pi}$, as asserted.\end{proof}
\begin{thm} Let $a,b,ba^{\pi}\in R^{\#}$ and $aba^{\pi}=0$. Then $a+b\in R^{\#}$ if and only if $a(1+a^{\#}b)\in R^{\#}$ and $b^{\pi}a^{\pi}b=0$.\end{thm}
\begin{proof} Let $p=aa^{\#}$. Since $aba^{\pi}=0$, we see that $aa^{\#}ba^{\pi}=0$. Then $$a=\left(
\begin{array}{cc}
a_1&0\\
0&0
\end{array}
\right)_p, b=\left(
\begin{array}{cc}
b_1&0\\
b_3&b_4
\end{array}
\right)_p.$$ Here $a_1=aa^{\#}aaa^{\#}=a, b_3=a^{\pi}baa^{\#}, b_4=a^{\pi}ba^{\pi}=ba^{\pi}$.
Then $$a+b=\left(
\begin{array}{cc}
a_1+b_1&0\\
b_3&b_4
\end{array}
\right)_p.$$
$\Longrightarrow$ By hypothesis, $b_4=ba^{\pi}\in R^{\#}$. Since $aba^{\pi}=0$, it follows by
Lemma 2.2 that $b_4^{\#}=b^{\#}a^{\pi}$. Clearly, $aa^{\#}(a+b)a^{\pi}=0$ and $(a+b)a^{\pi}=ba^{\pi}\in R^{\#}$. Since $a+b\in R^{\#}$, it follows by Lemma 2.2 that $aa^{\#}(a+b)\in R^{\#}$ and $[aa^{\#}(a+b)]^{\#}=aa^{\#}(a+b)^{\#}$. Therefore $aa^{\#}(a+b)=a_1+b_1$ has group inverse. In view of Lemma 2.1, we have
$$\begin{array}{lll}
(a+b)^{\#}&=&\left(
\begin{array}{cc}
w^{\#}&0\\
u&b^{\#}a^{\pi}
\end{array}
\right)_p,
\end{array}$$ where $w=aa^{\#}(a+b)$ and $$\begin{array}{lll}
u&=&(b_4^{\#})^2b_3w^{\pi}+b_4^{\pi}b_3(w^{\#})^2-b_4^{\#}b_3w^{\#}\\
&=&b^{\#}a^{\pi}b^{\#}a^{\pi}baa^{\#}w^{\pi}+b^{\pi}a^{\pi}b(w^{\#})^2-b^{\#}a^{\pi}bw^{\#}.
\end{array}$$
We check that
$$\begin{array}{ll}
&(a+b)^{\pi}(a+b)\\
=&\left(
\begin{array}{cc}
w^{\pi}&0\\
-bw^{\#}-ba^{\pi}u&b^{\pi}a^{\pi}
\end{array}
\right)_p\left(
\begin{array}{cc}
w&0\\
a^{\pi}baa^{\#}&ba^{\pi}
\end{array}
\right)_p\\
=&\left(
\begin{array}{cc}
w^{\pi}w&0\\
-(bw^{\#}+ba^{\pi}u)w+b^{\pi}a^{\pi}baa^{\#}&b^{\pi}a^{\pi}ba^{\pi}\\
\end{array}
\right)_p\\
=&0.
\end{array}$$ Hence, $-(bw^{\#}+ba^{\pi}u)w+b^{\pi}a^{\pi}baa^{\#}=0$. Thus $$b^{\pi}a^{\pi}baa^{\#}=b^{\pi}[-(bw^{\#}+ba^{\pi}u)w+b^{\pi}a^{\pi}baa^{\#}]=0.$$
On the other hand, $b^{\pi}a^{\pi}ba^{\pi}=b^{\pi}ba^{\pi}-b^{\pi}a^{\#}aba^{\pi}=0$. Therefore $b^{\pi}a^{\pi}b=b^{\pi}a^{\pi}b[aa^{\#}+a^{\pi}]=0$, as required.
$\Longleftarrow$ We easily check that
$$\begin{array}{lll}
b_4^{\pi}b_3(a_1+b_1)^{\pi}&=&(1-b^{\#}a^{\pi}ba^{\pi})b_3(a_1+b_1)^{\pi}\\
&=&(a^{\pi}-b^{\#}ba^{\pi})b_3(a_1+b_1)^{\pi}\\
&=&[b^{\pi}a^{\pi}b]aa^{\#}(a_1+b_1)^{\pi}\\
&=&0.
\end{array}$$ According to Lemma 2.1, $a+b\in R^{\#}$. Moreover, we have
$$\begin{array}{lll}
(a+b)^{\#}&=&(a+b)^D\\
&=&w^{\#}+u+b^{\#}a^{\pi}\\
&=&w^{\#}+b^{\#}a^{\pi}b^{\#}a^{\pi}baa^{\#}w^{\pi}-b^{\#}a^{\pi}bw^{\#}+b^{\#}a^{\pi}\\
&=&[a(1+a^{\#}b)]^{\#}+b^{\#}a^{\pi}b^{\#}a^{\pi}baa^{\#}[a(1+a^{\#}b)]^{\pi}\\
&-&b^{\#}a^{\pi}b[a(1+a^{\#}b)]^{\#}+b^{\#}a^{\pi},
\end{array}$$ as asserted.\end{proof}
\begin{cor} Let $a,b,a^{\pi}b\in R^{\#}$ and $a^{\pi}ba=0$. Then $a+b\in R^{\#}$ if and only if $(1+ba^{\#})a\in R^{\#}$ and $ba^{\pi}b^{\pi}=0$. In this case, $$\begin{array}{lll}
(a+b)^{\#}&=&[(1+ba^{\#})a]^{\#}+[(1+ba^{\#})a]^{\pi}aa^{\#}ba^{\pi}b^{\#}a^{\pi}b^{\#}\\
&-&[(1+ba^{\#})a]^{\#}ba^{\pi}b^{\#}+a^{\pi}b^{\#}.\\
\end{array}$$\end{cor}
\begin{proof} Since $(R,\cdot)$ is a ring, $(R,*)$ is a ring with the multiplication $a*b=b\cdot a$, i.e., it is the opposite ring. Then we complete the proof by applying Theorem 2.3 to the ring $(R,*)$.\end{proof}
Now our first major result is demonstrated as follows.
\begin{thm} Let $a,b,ba^{\pi},ab^{\pi}\in R^{\#}$, $aba^{\pi}=0$ and $bab^{\pi}=0$. Then $a+b\in R^{\#}$ if and only if $aa^{\#}b+bb^{\#}a\in R^{\#},
a^{\pi}b^{\pi}a=0$ and $b^{\pi}a^{\pi}b=0$.\end{thm}
\begin{proof} In view of Theorem 2.3, $a+b\in R^{\#}$ if and only if $a+aa^{\#}b=a(1+a^{\#}b)\in R^{\#}$ and $b^{\pi}a^{\pi}b=0$.
Since $aa^{\#}ba^{\pi}=0$ and $ba^{\pi}\in R^D$, it follows by Lemma 2.2 that
$(aa^{\#}b)^{\#}=aa^{\#}b^{\#}$. Let $q=[aa^{\#}b][aa^{\#}b]^{\#}$. Then $q=aa^{\#}bb^{\#}$.
Obviously, we have $qa(1-q)=aa^{\#}bb^{\#}a(1-aa^{\#}bb^{\#})=aa^{\#}bb^{\#}ab^{\pi}=0$. Then $$a=\left(
\begin{array}{cc}
a_1&0\\
a_3&a_4
\end{array}
\right)_q, aa^{\#}b=\left(
\begin{array}{cc}
b_1&0\\
0&0
\end{array}
\right)_q.$$ Here $a_1=qaq, a_3=q^{\pi}aq, a_4=q^{\pi}aq^{\pi}$.
Then $$a+aa^{\#}b=\left(
\begin{array}{cc}
a_1+b_1&0\\
a_3&a_4
\end{array}
\right)_q.$$ Here, we have
$$\begin{array}{rll}
a_4&=&(1-aa^{\#}bb^{\#})a(1-aa^{\#}bb^{\#})\\
&=&(1-aa^{\#}bb^{\#})ab^{\pi}\\
&=&ab^{\pi}\\
&\in&R^{\#}.
\end{array}$$
$\Longrightarrow$ Since $aba^{\pi}=0$, it follows by Theorem 2.3 that $a^{\pi}b^{\pi}a=0$. Hence,
$$\begin{array}{lll}
a_1+b_1&=&qaq+qbq\\
&=&aa^{\#}bb^{\#}abb^{\#}+aa^{\#}b^2b^{\#}aa^{\#}bb^{\#}\\
&=&aa^{\#}bb^{\#}a+aa^{\#}b\\
&=&bb^{\#}a-a^{\pi}bb^{\#}a+aa^{\#}b\\
&=&aa^{\#}b+bb^{\#}a.
\end{array}$$
Since $bab^{\pi}=0$, it follows by
Lemma 2.2 that $a_4^{\#}=a^{\#}b^{\pi}$ and $a_4^{\pi}=q^{\pi}-aa^{\#}b^{\pi}=a^{\pi}$. Moreover, we have
$a_3=q^{\pi}aq=(1-aa^{\#}bb^{\#})a^2a^{\#}bb^{\#}=abb^{\#}-aa^{\#}bb^{\#}a$. Hence $a^{\pi}a_3=0$.
By hypothesis, $ba^{\pi}\in R^{\#}$. Since $a+b\in R^{\#}$, it follows by Theorem 2.3 that
$aa^{\#}(a+b)=a+aa^{\#}b=a(1+a^{\#}b)$ has group inverse. Set $z=a_1+b_1$. Then we have
$$\begin{array}{lll}
(a+aa^{\#}b)^{\#}&=&\left(
\begin{array}{cc}
(a_1+b_1)^{\#}&0\\
v&a_4^{\#}
\end{array}
\right)_q\\
&=&\left(
\begin{array}{cc}
z^{\#}&0\\
v&a^{\#}b^{\pi}
\end{array}
\right)_q,
\end{array}$$ where $$\begin{array}{lll}
v&=&(a_4^{\#})^2a_3z^{\pi}+a_4^{\pi}a_3(z^{\#})^2-a_4^{\#}a_3z^{\#}\\
&=&[a^{\#}b^{\pi}]^2a_3z^{\pi}+a^{\pi}a_3(z^{\#})^2-a^{\#}b^{\pi}a_3z^{\#}\\
&=&[a^{\#}b^{\pi}]^2[abb^{\#}-aa^{\#}bb^{\#}a]z^{\pi}-a^{\#}b^{\pi}[abb^{\#}-aa^{\#}bb^{\#}a]z^{\#}.
\end{array}$$ We easily verify that $a_1+b_1\in R^{\#}$, and so $aa^{\#}b+bb^{\#}a\in R^{\#}.$
$\Longleftarrow$ Clearly, $a_4\in R^{\#}$ and $$\begin{array}{lll}
a_1+b_1&=&q(a+aa^{\#}b)q\\
&=&aa^{\#}bb^{\#}(a+aa^{\#}b)\\
&=&bb^{\#}a-a^{\pi}bb^{\#}a+aa^{\#}bb^{\#}aa^{\#}b\\
&=&bb^{\#}a+aa^{\#}b-aa^{\#}bb^{\#}a^{\pi}b\\
&=&bb^{\#}a+aa^{\#}b-aa^{\#}(1-b^{\pi})a^{\pi}b\\
&=&aa^{\#}b+bb^{\#}a\\
&\in&R^{\#}.
\end{array}$$
Moreover, we check that
$$\begin{array}{lll}
a_4^{\pi}a_3(a_1+b_1)^{\pi}&=&a^{\pi}b^{\pi}abb^{\#}(a_1+b_1)^{\pi}\\
&=&0.
\end{array}$$ According to Lemma 2.1, $$\begin{array}{lll}
a(1+a^{\#}b)&=&a+aa^{\#}b\\
&\in&R^{\#}.
\end{array}$$ Moreover,
$$\begin{array}{rll}
[a(1+a^{\#}b)]^{\#}&=&[aa^{\#}b+bb^{\#}a]^{\#}+a^{\#}b^{\pi}\\
&+&[a^{\#}b^{\pi}]^2[abb^{\#}-aa^{\#}bb^{\#}a][aa^{\#}b+bb^{\#}a]^{\pi}\\
&-&a^{\#}b^{\pi}[abb^{\#}-aa^{\#}bb^{\#}a][aa^{\#}b+bb^{\#}a]^{\#}.
\end{array}$$ This completes the proof.\end{proof}
In~\cite[Theorem 3.1]{Z2}, Zhou et al. investigated the group inverse of $a+b$ under the condition $aa^{\#}b=bb^{\#}a$ in a Dedekind-finite ring in which $2$ is invertible. We now extend this result to a general setting.
\begin{cor} Let $a,b,aa^{\#}b\in R^{\#}$ and $aa^{\#}b=bb^{\#}a$. Then $a+b\in R^{\#}$ if and only if $2aa^{\#}b\in R^{\#}$.\end{cor}
\begin{proof} Since $aa^{\#}b=bb^{\#}a$, we check that $aba^{\pi}=a(aa^{\#}b)a^{\pi}=a(bb^{\#}a)a^{\pi}=0$.
Similarly, we have $bab^{\pi}=b^{\#}(b^2a)b^{\pi}=b^{\#}ba(bb^{\pi})=0$. Moreover, we have
$$\begin{array}{c}
a^{\pi}b^{\pi}a=a^{\pi}(1-bb^{\#})a=a^{\pi}bb^{\#}a=a^{\pi}aa^{\#}b=0,\\
b^{\pi}a^{\pi}b=b^{\pi}(1-aa^{\#})b=b^{\pi}aa^{\#}b=b^{\pi}bb^{\#}a=0.
\end{array}$$ Since $aa^{\#}b\in R^{\#}$ and $aa^{\#}ba^{\pi}=0$, similarly to Lemma 2.2, $ba^{\pi}\in R^{\#}$ and $[ba^{\pi}]^{\#}=b^{\#}a^{\pi}$. Likewise, $ab^{\pi}\in R^{\#}$. Therefore we complete the proof by Theorem 2.5.\end{proof}
Applying Corollary 2.6 to the opposite ring $(R,*)$, we dually derive
\begin{cor} Let $a,b,abb^{\#}\in R^{\#}$ and $abb^{\#}=baa^{\#}$. Then $a+b\in R^{\#}$ if and only if $2abb^{\#}\in R^{\#}$.\end{cor}
\section{commutative-like conditions}
The aim of this section is to investigate the group inverse in a ring under some commutative-like conditions. We now prove:
\begin{lem} Let $a,b\in R^{\#}$. If $a^2b=aba$ and $b^2a=bab$, then $ab\in R^{\#}$ and $$(ab)^{\#}=a^{\#}b^{\#}.$$
\end{lem}\begin{proof} Since $a,b\in R^{\#}$, we have $a,b\in R^D$. In view of~\cite[Theorem 3.1]{Z}, $ab\in R^D$ and $(ab)^D=a^Db^D=a^{\#}b^{\#}$.
One easily checks that$$(ab)^2(ab)^D=ababa^{\#}b^{\#}=(a^2a^{\#})(b^2b^{\#})=ab.$$ Therefore $(ab)^{\#}=a^{\#}b^{\#},$ as desired.\end{proof}
\begin{thm} Let $a,b\in R^{\#}$. If $a^2b=aba, b^2a=bab$, then $a+b\in R^{\#}$ if and only if $aa^{\#}b+bb^{\#}a\in R^{\#}, a^{\pi}b^{\pi}a=0, b^{\pi}a^{\pi}b=0$.\end{thm}
\begin{proof} Since $a^2b=aba$, we have $aba^{\pi}=a^{\#}(a^2b)a^{\pi}=a^{\#}(aba)a^{\pi}=0$. It follows from $b^2a=bab$ that $bab^{\pi}=0$.
Since $a(ab)=a^2b=aba=(ab)a$, we have $a^{\#}(ab)=a^D(ab)=(ab)a^D=(ab)a^{\#}$ by~\cite[Theorem 2.2]{D}; hence,
$(aa^{\#})^2b=a^{\#}(ab)=(ab)a^{\#}=aa^{\#}baa^{\#}$. Also we have $b^2aa^{\#}=(bab)a^{\#}=baa^{\#}b$.
In view of Lemma 3.1, $(aa^{\#}b)^{\#}=aa^{\#}b^{\#}.$ Since $aa^{\#}ba^{\pi}=0$, analogously to Lemma 2.2, $ba^{\pi}\in R^{\#}$ and $(ba^{\pi})^{\#}=b^{\#}a^{\pi}$.
Similarly, we show that $ab^{\pi}\in R^{\#}$. Therefore we complete the proof by Theorem 2.5.\end{proof}
\begin{cor} Let $a,b\in R^{\#}$. If $ab=ba$, then $a+b\in R^{\#}$ if and only if $aa^{\#}b+bb^{\#}a\in R^{\#}$.\end{cor}
\begin{proof} Since $ab=ba$, we have $a^2b=aba$ and $b^2a=bab$. The result follows by Theorem 3.2.\end{proof}
We now illustrate Theorem 3.2 with the following example:
\begin{exam} Let $a=
\left(
\begin{array}{cccc}
1&0&0&0\\
0&1&0&0\\
0&0&0&1\\
0&0&0&0
\end{array}
\right), b=\left(
\begin{array}{cccc}
\frac{1}{3}&0&0&0\\
0&0&0&0\\
0&0&0&0\\
0&0&0&\frac{1}{3}
\end{array}
\right)\in {\Bbb C}^{4\times 4}$. Then $a^2b=aba, b^2a=bab$. But $ab\neq ba$. We compute that
$$a^{\#}=\left(
\begin{array}{cccc}
1&0&0&0\\
0&1&0&0\\
0&0&0&0\\
0&0&0&0
\end{array}
\right), b^{\#}=\left(
\begin{array}{cccc}
3&0&0&0\\
0&0&0&0\\
0&0&0&0\\
0&0&0&3
\end{array}
\right).$$ In this case,
$$(aa^{\#}b+bb^{\#}a)^{\#}=\left(
\begin{array}{cccc}
\frac{3}{4}&0&0&0\\
0&0&0&0\\
0&0&0&0\\
0&0&0&0
\end{array}
\right),(a+b)^{\#}=\left(
\begin{array}{cccc}
\frac{3}{4}&0&0&0\\
0&1&0&0\\
0&0&0&0\\
0&0&0&3
\end{array}
\right).$$\end{exam}
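The commutation-like identities in Example 3.4 are easy to confirm mechanically. The following check (our own, using exact rationals; the helper `matmul` is ours) verifies $a^2b=aba$, $b^2a=bab$, and $ab\neq ba$ for the matrices above:

```python
from fractions import Fraction as F

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# The matrices a and b from Example 3.4.
a = [[F(1), F(0), F(0), F(0)],
     [F(0), F(1), F(0), F(0)],
     [F(0), F(0), F(0), F(1)],
     [F(0), F(0), F(0), F(0)]]
b = [[F(1, 3) if i == j and i in (0, 3) else F(0) for j in range(4)]
     for i in range(4)]

aab = matmul(matmul(a, a), b)
aba = matmul(matmul(a, b), a)
bba = matmul(matmul(b, b), a)
bab = matmul(matmul(b, a), b)

assert aab == aba                     # a^2 b = aba
assert bba == bab                     # b^2 a = bab
assert matmul(a, b) != matmul(b, a)   # yet ab != ba
```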
We are now ready to prove:
\begin{thm} Let $a,b\in R^{\#}$. If $a^2b=aba, b^2a=bab$, then $a+b\in R^{\#}$ if and only if $1+a^{\#}b\in R^{\#}$ and $b^{\pi}a^{\pi}b=0$.\end{thm}
\begin{proof} $\Longrightarrow$ Since $a^2b=aba$, we see that $aba^{\pi}=a^{\#}(a^2b)a^{\pi}=a^{\#}abaa^{\pi}=0$. As in the proof of Theorem 3.2, we show that $ba^{\pi}$ has group inverse. By virtue of Theorem 2.3, $b^{\pi}a^{\pi}b=0$. Write $1+a^{\#}b=a^{\pi}+a^{\#}(a+b)$. One easily checks that
$$\begin{array}{rll}
(a^{\#})^2(a+b)&=&a^{\#}+(a^{\#})^2b\\
&=&a^{\#}+(a^{\#})^3(ab)\\
&=&a(a^{\#})^2+(a^{\#})^2(ab)a^{\#}\\
&=&a^{\#}aa^{\#}+a^{\#}ba^{\#}\\
&=&a^{\#}(a+b)a^{\#}.
\end{array}$$ Likewise, we have $(a+b)^2a^{\#}=(a+b)a^{\#}(a+b).$ In light of Lemma 3.1, $[a^{\#}(a+b)]^{\#}=a(a+b)^{\#}$.
Clearly, $a^{\pi}[a^{\#}(a+b)]=0$. Also we have $[a^{\#}(a+b)]a^{\pi}= a^{\#}ba^{\pi}=(a^{\#})^3(a^2b)a^{\pi}=(a^{\#})^3(aba)a^{\pi}=0$.
According to Corollary 2.6, $a^{\pi}+a^{\#}(a+b)\in R^{\#}$. Then $1+a^{\#}b\in R^{\#}$, as desired.
$\Longleftarrow$ Clearly, $aa^{\#}(a+b)=a(1+a^{\#}b)$. By hypothesis, we have $$a(1+a^{\#}b)=a+aa^{\#}b=a+(a^{\#})^2a^2b=a+(a^{\#})^2aba=a+a^{\#}ba=(1+a^{\#}b)a.$$ By virtue of Lemma 3.1, $[aa^{\#}(a+b)]^{\#}=a^{\#}(1+a^{\#}b)^{\#}$. As in the proof of Theorem 3.2, $ba^{\pi}\in R^{\#}$. Since $b^{\pi}a^{\pi}b=0$, we complete the proof by Theorem 2.3.\end{proof}
\begin{cor} Let $a,b\in R^{\#}$. If $ab^2=bab, ba^2=aba$, then $a+b\in R^{\#}$ if and only if $1+ba^{\#}\in R^{\#}$ and $ba^{\pi}b^{\pi}=0$.\end{cor}
\begin{proof} Since $(R,\cdot)$ is a ring, $(R,*)$ is a ring with the multiplication $a*b=b\cdot a$.
Then we complete the proof by applying Theorem 3.5 to the opposite ring $(R,*)$.\end{proof}
\section{applications}
We will investigate the group invertibility of a block operator matrix $M$ as in $(*)$. The main idea is to split $M$ as the sum of two special block operator matrices. Then we apply our additive results and find new conditions on bounded linear operators $A,B,C$ and $D$ under which $M$ is group invertible.
\begin{thm} Let $A,D$ and $BC$ have group inverses. If $B(CB)^{\pi}=0, C(BC)^{\pi}=0, ABD^{\pi}=0$ and $DCA^{\pi}=0$, then $M$ has group inverse if and only if
\begin{enumerate}
\item [(1)] $A^{\pi}(BC)^{\pi}A=0, D^{\pi}(CB)^{\pi}D=0$;
\item [(2)] $\left(
\begin{array}{cc}
A&AA^{\#}B \\
DD^{\#}C &D
\end{array}
\right)$ has group inverse.\end{enumerate}\end{thm}
\begin{proof} Write $M=P+Q$, where $$P=\left(
\begin{array}{cc}
A & 0 \\
0 & D
\end{array}
\right), Q=\left(
\begin{array}{cc}
0 & B \\
C & 0
\end{array}
\right).$$ Then $$P^{\#}=\left(
\begin{array}{cc}
A^{\#} & 0 \\
0 & D^{\#}
\end{array}
\right), P^{\pi}=\left(
\begin{array}{cc}
A^{\pi} & 0 \\
0 & D^{\pi}
\end{array}
\right).$$ Since $BC$ has group inverse, we have $$(Q^2)^D=\left(
\begin{array}{cc}
BC & 0 \\
0 & CB
\end{array}
\right)^D=\left(
\begin{array}{cc}
(BC)^{\#} & 0 \\
0 & (CB)^{\#}
\end{array}
\right).$$ Since $\mathcal{L}(X\oplus Y)$ is a Banach algebra, it follows by ~\cite[Theorem 2.1]{JW} that $Q$ has Drazin inverse and $$Q^D=Q(Q^2)^D=\left(
\begin{array}{cc}
0& B(CB)^{\#} \\
C(BC)^{\#}&0
\end{array}
\right).$$ Moreover, we have $$Q^{\pi}=\left(
\begin{array}{cc}
(BC)^{\pi}&0\\
0&(CB)^{\pi}
\end{array}
\right).$$
Hence, $$QQ^D=Q^DQ, Q^D=Q^DQQ^D.$$ It is easy to verify that
$$\begin{array}{lll}
QQ^{\pi}&=&\left(
\begin{array}{cc}
0 & B \\
C & 0
\end{array}
\right)\left(
\begin{array}{cc}
(BC)^{\pi} & 0 \\
0 & (CB)^{\pi}
\end{array}
\right)\\
&=&\left(
\begin{array}{cc}
0&B(CB)^{\pi}\\
C(BC)^{\pi} & 0
\end{array}
\right)\\
&=&0;
\end{array}$$ whence, $Q=Q^2Q^D$. Then $Q$ has group inverse and $$Q^{\#}=\left(
\begin{array}{cc}
0& B(CB)^{\#} \\
C(BC)^{\#}&0
\end{array}
\right).$$
Therefore we compute that
$$\begin{array}{lll}
P^{\pi}Q^{\pi}P&=&\left(
\begin{array}{cc}
A^{\pi}(BC)^{\pi}&0\\
0&D^{\pi}(CB)^{\pi}
\end{array}
\right)\left(
\begin{array}{cc}
A & 0 \\
0 & D
\end{array}
\right)\\
&=&\left(
\begin{array}{cc}
A^{\pi}(BC)^{\pi}A&0\\
0&D^{\pi}(CB)^{\pi}D
\end{array}
\right)\\
&=&0.
\end{array}$$
We easily check that
$$\begin{array}{rll}
PQP^{\pi}&=&\left(
\begin{array}{cc}
0&ABD^{\pi}\\
DCA^{\pi}&0
\end{array}
\right),\\
PP^{\#}(P+Q)&=&\left(
\begin{array}{cc}
A&AA^{\#}B \\
DD^{\#}C &D
\end{array}
\right).
\end{array}$$ Therefore we complete the proof by Theorem 2.3.\end{proof}
\begin{cor} Let $A,D$ and $BC$ have group inverses. If $AB=0,B(CB)^{\pi}=0, C(BC)^{\pi}=0$ and $DCA^{\pi}=0$, then $M$ has group inverse if and only if $BCA=0$ and $D^{\pi}(CB)^{\pi}D=0$.\end{cor}
\begin{proof} Clearly, $D^{\pi}(DD^{\#}C)A^{\pi}=0$. As in the proof of~\cite[Theorem 3.2]{B},
$\left(
\begin{array}{cc}
A&AA^{\#}B \\
DD^{\#}C &D
\end{array}
\right)$ has group inverse. This completes the proof by Theorem 4.1.\end{proof}
\begin{cor} Let $A,D$ and $BC$ have group inverses. If $DC=0, B(CB)^{\pi}=0, C(BC)^{\pi}=0$ and $ABD^{\pi}=0$, then $M$ has group inverse if and only if $A^{\pi}(BC)^{\pi}A=0$ and $CBD=0$.\end{cor}
\begin{proof} Obviously, $A^{\pi}(AA^{\#}B)D^{\pi}=0$. Similarly to~\cite[Theorem 3.1]{B}, $\left(
\begin{array}{cc}
A&AA^{\#}B \\
DD^{\#}C &D
\end{array}
\right)$ has group inverse. Therefore we obtain the result by Theorem 4.1.\end{proof}
\begin{cor} Let $A\in {\Bbb C}^{m\times m}$ and $D\in {\Bbb C}^{n\times n}$ have group inverses. If $r(B)=r(C)=r(BC)=r(CB)$, then the following hold:
\begin{enumerate}
\item [(1)] If $AB=0$ and $DCA^{\pi}=0$, then $M$ has group inverse if and only if $BCA=0$ and $D^{\pi}(CB)^{\pi}D=0$.
\item [(2)] If $DC=0$ and $ABD^{\pi}=0$, then $M$ has group inverse if and only if $A^{\pi}(BC)^{\pi}A=0$ and $CBD=0$.
\end{enumerate}\end{cor}
\begin{proof} Since $r(B)=r(C)=r(BC)=r(CB)$, by virtue of~\cite[Lemma 2.3]{BZ}, $BC$ and $CB$ have group inverses.
Let $N=\left(
\begin{array}{cc}
0&B \\
C&0
\end{array}
\right)$. Then $N^2=\left(
\begin{array}{cc}
BC&0 \\
0&CB
\end{array}
\right).$ Hence, $rank(N^2)=rank(BC)+rank(CB)=rank(B)+rank(C)=rank(N)$.
Therefore $N$ has group inverse. This implies that $$NN^{\pi}=\left(
\begin{array}{cc}
0&B(CB)^{\pi} \\
C(BC)^{\pi}&0
\end{array}
\right)=0,$$ and so $B(CB)^{\pi}=0, C(BC)^{\pi}=0.$ Therefore we complete the proof by Corollary 4.2 and Corollary 4.3.\end{proof}
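The rank argument in the proof above can be illustrated on a small instance (our own choice of $B$ and $C$, not from the paper): with $B=(1,0)^T$ and $C=(1,0)$ we have $r(B)=r(C)=r(BC)=r(CB)=1$, and indeed $rank(N)=rank(N^2)$, so $N$ has group inverse.

```python
from fractions import Fraction as F

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def rank(M):
    """Rank over the rationals by Gaussian elimination."""
    M = [[F(v) for v in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [u - f * v for u, v in zip(M[i], M[r])]
        r += 1
    return r

# B is 2x1, C is 1x2; r(B) = r(C) = r(BC) = r(CB) = 1.
B = [[1], [0]]
C = [[1, 0]]
assert rank(matmul(B, C)) == rank(matmul(C, B)) == 1

# N = [[0, B], [C, 0]] written out as a 3x3 matrix on X (+) Y.
N = [[0, 0, 1],
     [0, 0, 0],
     [1, 0, 0]]
N2 = matmul(N, N)
assert rank(N) == rank(N2) == 2   # so N has group inverse
```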
We are now ready to prove:
\begin{thm} Let $A$ and $D$ have group inverses. If $A^{\pi}B=0, D^{\pi}C=0, BCA^{\pi}=0, ABC=0$ and $BD=BCA^{\#}B$, then $M$
has group inverse.\end{thm}
\begin{proof} Write $M=P+Q$, where $$P=\left(
\begin{array}{cc}
A&B\\
0&0
\end{array}
\right), Q=\left(
\begin{array}{cc}
0&0\\
C&D
\end{array}
\right).$$
Since $A^{\pi}B=0, D^{\pi}C=0$, we easily see that $P$ and $Q$ have group inverses. Moreover, we have $$\begin{array}{c}
P^{\#}=\left(
\begin{array}{cc}
A^{\#}&(A^{\#})^2B\\
0&0
\end{array}
\right), P^{\pi}=\left(
\begin{array}{cc}
A^{\pi}&-A^{\#}B\\
0&I_n
\end{array}
\right);\\
Q^{\#}=\left(
\begin{array}{cc}
0&0\\
(D^{\#})^2C&D^{\#}
\end{array}
\right),Q^{\pi}=\left(
\begin{array}{cc}
I_m&0\\
-D^{\#}C&D^{\pi}
\end{array}
\right).
\end{array}$$
We compute that
$$\begin{array}{lll}
PP^{\#}Q&=&\left(
\begin{array}{cc}
AA^{\#}&A^{\#}B\\
0&0
\end{array}
\right)\left(
\begin{array}{cc}
0&0\\
C&D
\end{array}
\right)\\
&=&\left(
\begin{array}{cc}
A^{\#}BC&A^{\#}BD\\
0&0
\end{array}
\right)\\
&=&0.
\end{array}$$
Then $$[QP^{\pi}]^D=Q[(P^{\pi}Q)^D]^2P^{\pi}=Q(Q^D)^2P^{\pi}=Q^DP^{\pi}.$$ We check that
$[QP^{\pi}]^2[QP^{\pi}]^D=Q^2Q^DP^{\pi}=QP^{\pi}.$ Therefore $QP^{\pi}$ has group inverse.
Moreover, we check that
$$\begin{array}{rll}
PQP^{\pi}&=&\left(
\begin{array}{cc}
A&B\\
0&0
\end{array}
\right)\left(
\begin{array}{cc}
0&0\\
C&D
\end{array}
\right)\left(
\begin{array}{cc}
A^{\pi}&-A^{\#}B\\
0&I_n
\end{array}
\right)\\
&=&\left(
\begin{array}{cc}
BCA^{\pi}&BD-BCA^{\#}B\\
0&0
\end{array}
\right)\\
&=&0,\\
Q^{\pi}P^{\pi}Q&=&\left(
\begin{array}{cc}
I&0\\
-D^{\#}C&D^{\pi}
\end{array}
\right)\left(
\begin{array}{cc}
A^{\pi}&-A^{\#}B\\
0&I
\end{array}
\right)\left(
\begin{array}{cc}
0&0\\
C&D
\end{array}
\right)\\
&=&\left(
\begin{array}{cc}
I&0\\
-D^{\#}C&D^{\pi}
\end{array}
\right)\left(
\begin{array}{cc}
-A^{\#}BC&-A^{\#}BD\\
C&D
\end{array}
\right)\\
&=&\left(
\begin{array}{cc}
-A^{\#}BC&-A^{\#}BD\\
D^{\#}CA^{\#}BC&D^{\#}CA^{\#}BD
\end{array}
\right)\\
&=&0.
\end{array}$$ Further, we have
$$\begin{array}{lll}
PP^{\#}(P+Q)&=&\left(
\begin{array}{cc}
AA^{\#}&A^{\#}B\\
0&0
\end{array}
\right)\left(
\begin{array}{cc}
A&B\\
C&D
\end{array}
\right)\\
&=&\left(
\begin{array}{cc}
A+A^{\#}BC&AA^{\#}B+A^{\#}BD\\
0&0
\end{array}
\right)\\
&=&\left(
\begin{array}{cc}
A&AA^{\#}B\\
0&0
\end{array}
\right).
\end{array}$$ Since $A^{\pi}(AA^{\#}B)=0$ and $A$ has group inverse, $PP^{\#}(P+Q)$ has group inverse. Therefore we complete the proof by Theorem 2.3.\end{proof}
\begin{cor} (~\cite[Theorem 3.4]{B}) Let $A$ and $D$ have group inverses. If $A^{\pi}B=0, D^{\pi}C=0, BC=0$ and $BD=0$, then $M$ has group inverse.\end{cor}
\begin{proof} This is obvious by Theorem 4.5.\end{proof}
\begin{thm} Let $A$ and $D$ have group inverses. If $CA^{\pi}=0, BD^{\pi}=0, A^{\pi}BC=0, BCA=0$ and $DC=CA^{\#}BC$, then $M$
has group inverse.\end{thm}
\begin{proof} Write $M=P+Q$, where $$P=\left(
\begin{array}{cc}
A&0\\
C&0
\end{array}
\right), Q=\left(
\begin{array}{cc}
0&B\\
0&D
\end{array}
\right).$$
Since $CA^{\pi}=0, BD^{\pi}=0$, $P$ and $Q$ have group inverses. Moreover, we have $$\begin{array}{c}
P^{\#}=\left(
\begin{array}{cc}
A^{\#}&0\\
C(A^{\#})^2&0
\end{array}
\right), P^{\pi}=\left(
\begin{array}{cc}
A^{\pi}&0\\
-CA^{\#}&I
\end{array}
\right);\\
Q^{\#}=\left(
\begin{array}{cc}
0&B(D^{\#})^2\\
0&D^{\#}
\end{array}
\right),Q^{\pi}=\left(
\begin{array}{cc}
I&-BD^{\#}\\
0&D^{\pi}
\end{array}
\right).
\end{array}$$ We check that
$$\begin{array}{lll}
QPP^{\#}&=&\left(
\begin{array}{cc}
0&B\\
0&D
\end{array}
\right)\left(
\begin{array}{cc}
AA^{\#}&0\\
CA^{\#}&0
\end{array}
\right)\\
&=&\left(
\begin{array}{cc}
BCA^{\#}&0\\
DCA^{\#}&0
\end{array}
\right)\\
&=&0.
\end{array}$$
By using Cline's formula, we have
$$\begin{array}{lll}
[P^{\pi}Q]^D&=&P^{\pi}[(QP^{\pi})^D]^2Q\\
&=&P^{\pi}[Q^D]^2Q\\
&=&P^{\pi}Q^D.
\end{array}$$
Since $[P^{\pi}Q]^2[P^{\pi}Q]^D=P^{\pi}Q$, it follows that $P^{\pi}Q$ has group inverse.
Moreover, we verify that
$$\begin{array}{lll}
P^{\pi}QP&=&\left(
\begin{array}{cc}
0&A^{\pi}B\\
0&D-CA^{\#}B
\end{array}
\right)\left(
\begin{array}{cc}
A&0\\
C&0
\end{array}
\right)\\
&=&\left(
\begin{array}{cc}
A^{\pi}BC&0\\
(D-CA^{\#}B)C&0
\end{array}
\right)\\
&=&0,
\end{array}$$ since $A^{\pi}BC=0$ and $(D-CA^{\#}B)C=DC-CA^{\#}BC=0$ by hypothesis.
Moreover, we compute that
$$\begin{array}{lll}
QP^{\pi}Q^{\pi}&=&\left(
\begin{array}{cc}
0&B\\
0&D
\end{array}
\right)\left(
\begin{array}{cc}
A^{\pi}&0\\
-CA^{\#}&I
\end{array}
\right)\left(
\begin{array}{cc}
I&-BD^{\#}\\
0&D^{\pi}
\end{array}
\right)\\
&=&\left(
\begin{array}{cc}
-BCA^{\#}&BCA^{\#}BD^{\#}\\
-DCA^{\#}&DCA^{\#}BD^{\#}
\end{array}
\right)\\
&=&0.
\end{array}$$ We also see that
$$\begin{array}{lll}
(I+QP^{\#})P&=&(P+Q)PP^{\#}\\
&=&\left(
\begin{array}{cc}
A&B\\
C&D
\end{array}
\right)\left(
\begin{array}{cc}
AA^{\#}&0\\
CA^{\#}&0
\end{array}
\right)\\
&=&\left(
\begin{array}{cc}
A&0\\
CAA^{\#}&0
\end{array}
\right).
\end{array}$$ Since $[CAA^{\#}]A^{\pi}=0$ and $A$ has group inverse, $(I+QP^{\#})P$ has group inverse. Therefore $M$ has group inverse by Corollary 2.4.\end{proof}
\begin{cor} (~\cite[Theorem 3.6]{B}) Let $A$ and $D$ have group inverses. If $CA^{\pi}=0, BD^{\pi}=0, BC=0$ and $DC=0$, then $M$ has group inverse.\end{cor}
\begin{proof} This is obvious by Theorem 4.7.\end{proof}
\vskip10mm
\section{Introduction}
In the solar atmosphere, radiation plays an important role in the energy balance. The contribution of radiation to the change in the internal energy, often referred to as radiative losses, is quantified as the divergence of the radiative flux:
\begin{equation}
\label{ }
Q_\mathrm{rad}=-\nabla\cdot\mathcal{F}.
\end{equation}
A positive value of $Q_\mathrm{rad}$ means that there is local heating from the extinction of photons, and a negative value indicates local cooling through the emission of photons.
In the transition region and corona, the atmosphere is optically thin and the radiative losses can be simplified in the form of
\begin{equation}
\label{ }
Q_\mathrm{rad}=-\Lambda(T,n_\mathrm{e})n_\mathrm{e}n_\mathrm{H},
\end{equation}
where $n_\mathrm{e}$ is the electron density, $n_\mathrm{H}$ is the hydrogen density, and $\Lambda$ can be calculated under the coronal approximation with the CHIANTI database \citep{1997dere,2021delzanna}.
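The optically thin form above is straightforward to evaluate once $\Lambda$ is known. The sketch below (our own illustration; `lam_toy` is a crude power-law placeholder, not CHIANTI data) shows the structure of the computation:

```python
def lam_toy(T):
    """Toy optically thin loss function Lambda(T) [erg cm^3 s^-1].
    A crude power-law placeholder; real values come from atomic
    databases such as CHIANTI, not from this illustration."""
    if T < 1e4:
        return 0.0
    if T < 1e5:
        return 1e-21 * (T / 1e5) ** 2     # rising branch
    return 1e-21 * (T / 1e5) ** -0.5      # falling branch

def q_rad_thin(T, n_e, n_H):
    """Optically thin losses: Q_rad = -Lambda(T) n_e n_H.
    Negative values mean local cooling by photon emission."""
    return -lam_toy(T) * n_e * n_H

# Coronal-like numbers: T = 1 MK, n_e = n_H = 1e9 cm^-3.
print(q_rad_thin(1e6, 1e9, 1e9))  # negative, i.e. cooling
```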
However, in the chromosphere, the strong lines are normally optically thick, which means that there is a probability for the energy (in the form of photons) to escape. \citet[hereafter \citetalias{2012carlsson}]{2012carlsson} wrote the radiative loss function from element $X$ at ionization stage $m$ as:
\begin{equation}
\label{cl12}
Q_{\mathrm{rad},X_m}=-L_{X_m}E_{X_m}\frac{n_{X_m}}{n_X}\frac{n_X}{n_\mathrm{H}}\frac{n_\mathrm{H}}{\rho}n_\mathrm{e}\rho,
\end{equation}
where $L_{X_m}$ is the thin radiative loss function, $E_{X_m}$ is the escape probability and $\frac{n_{X_m}}{n_X}$ is the ionization fraction of element $X$ at ionization stage $m$. These three parameters are determined empirically from radiative (magneto)hydrodynamic simulations of the quiet Sun. $\frac{n_X}{n_\mathrm{H}}$ is the abundance of element $X$ relative to hydrogen, $\frac{n_\mathrm{H}}{\rho}$ is the number of hydrogen particles per mass unit, a constant dependent on abundances, and $\rho$ is the mass density. This approximate recipe can reproduce radiative cooling of the quiet Sun very well, and has been included in many radiative (magneto)hydrodynamic codes \citep[e.g.][]{2011gudiksen,2013bradshaw,2020wang}.
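The factorized structure of Eq. (3) can be sketched as follows (our own illustration; all numerical values below are hypothetical placeholders, since the actual $L_{X_m}$, $E_{X_m}$ and ionization fractions are empirical tabulations from the simulations):

```python
def q_rad_recipe(L_Xm, E_Xm, ion_frac, abund, nH_per_rho, n_e, rho):
    """Structure of Eq. (3): Q = -L * E * (n_Xm/n_X) * (n_X/n_H)
    * (n_H/rho) * n_e * rho. All factors are supplied by the caller;
    in practice they are tabulated empirically per element and stage."""
    return -L_Xm * E_Xm * ion_frac * abund * nH_per_rho * n_e * rho

# Illustrative magnitudes only (hypothetical values, cgs units):
q = q_rad_recipe(L_Xm=1e-25, E_Xm=0.3, ion_frac=0.5,
                 abund=1.0, nH_per_rho=4.3e23, n_e=1e11, rho=3e-9)
assert q < 0  # negative divergence of radiative flux: cooling
```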
During solar flares, the chromosphere undergoes drastic changes, with a rapid rise in temperature, electron density and pressure. The bombardment of the non-thermal electrons can also increase the excitation and ionization rate of neutral hydrogen in the ground level \citep{1993fang}. With such different local conditions, it is noted that the chromospheric radiative losses in flares are much larger than those in the quiet Sun \citep{1980machado,1986avrett}. The work of \citet[hereafter \citetalias{1990gan}]{1990gan} provides a revised recipe of \citet{1980nagai} based on fitting the radiative loss curves of semi-empirical models. The recipe of \citetalias{1990gan} has a similar form:
\begin{equation}
Q_\mathrm{rad}=-f(T)\alpha(z)n_\mathrm{H}n_\mathrm{e},
\end{equation}
where $f(T)$ is the thin radiative loss function, and $\alpha(z)$ is the probability that the energy escapes from height $z$.
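To make the structure of these recipes concrete, the evaluation of Eq.~(\ref{cl12}) amounts to table lookups and a product of local quantities. The sketch below is illustrative only: the tabulated curves are placeholders, not the fitted curves of this work, and the $n_\mathrm{H}/\rho$ constant is a representative solar-abundance value.

```python
import numpy as np

# Illustrative placeholder tables, NOT the fitted curves of this work.
logT_tab = np.array([3.5, 4.0, 4.5, 5.0, 5.5])            # log10 T [K]
logL_tab = np.array([-30.0, -25.0, -24.0, -25.5, -27.0])  # log10 L_Xm [erg cm^3 s^-1]
logN_tab = np.array([14.0, 16.0, 18.0, 20.0, 22.0])       # log10 column density [cm^-2]
E_tab    = np.array([1.0, 0.8, 0.3, 0.05, 0.01])          # escape probability
frac_tab = np.array([1.0, 0.9, 0.3, 0.01, 1e-4])          # ionization fraction n_Xm/n_X

def q_rad(T, N_col, n_e, rho, abund=1.0, nH_per_gram=4.407e23):
    """Q_rad = -L * E * (n_Xm/n_X) * (n_X/n_H) * (n_H/rho) * n_e * rho,
    in erg cm^-3 s^-1 (negative = cooling). nH_per_gram is a representative
    n_H/rho for solar abundances (an assumption here)."""
    L = 10.0 ** np.interp(np.log10(T), logT_tab, logL_tab)
    E = np.interp(np.log10(N_col), logN_tab, E_tab)
    f = np.interp(np.log10(T), logT_tab, frac_tab)
    return -L * E * f * abund * nH_per_gram * n_e * rho

# Example: mid-chromospheric flare conditions
Q = q_rad(T=1.0e4, N_col=1.0e18, n_e=1.0e13, rho=1.0e-10)
```

The recipe is thus cheap to evaluate inside a hydrodynamic code: only local temperature, density, and a running column density are required.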
\begin{table*}
\caption{Line fluxes and total radiative fluxes of the chromosphere from lines and Lyman continuum in different flare models. }
\label{flux}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{c | rrccr | crrc | rrr | c}
\hline
flare & \multicolumn{5}{c|}{$\mathcal{F}_\mathrm{avg}$ ($10^7$ erg cm$^{-2}$ s$^{-1}$)} & \multicolumn{4}{c|}{$\mathcal{F}^\mathrm{tot}$ ($10^9$ erg cm$^{-2}$)} & \multicolumn{3}{c|}{$\Delta \mathcal{F}^\mathrm{tot}$ (\%)} &flare \\
model & Ly$\alpha$ & H$\alpha$ & \ion{Ca}{ii} K & \ion{Ca}{ii} 8542 \r{A} & \ion{Mg}{ii} k & detailed & \citetalias{1990gan} & \citetalias{2012carlsson} & this work & \citetalias{1990gan} & \citetalias{2012carlsson} & this work &class$^1$ \\ \hline
\multicolumn{14}{c}{Models for fitting}\\ \hline
f10E05d3 & 5.51 & 3.42 & 0.87 & 0.45 & 2.55 & 5.84 & 7.60 & 57.29 & 4.84 & 30.30 & 881.55 & -17.01 &C2.7 \\
f10E10d3 & 7.07 & 2.92 & 0.61 & 0.35 & 2.28 & 5.48 & 13.67 & 78.15 & 5.88 & 149.37 & 1325.90 & 7.34 &...\\
f10E15d3 & 6.39 & 2.85 & 0.65 & 0.38 & 1.71 & 5.48 & 9.34 & 86.55 & 6.80 & 70.53 & 1480.13 & 24.06 &...\\
f10E20d3 & 4.66 & 2.99 & 0.68 & 0.40 & 1.79 & 5.79 & 7.98 & 100.48 & 6.71 & 37.95 & 1636.76 & 15.93 &...\\
f10E25d3 & 3.44 & 3.08 & 0.71 & 0.42 & 1.86 & 5.19 & 7.50 & 89.74 & 5.13 & 44.72 & 1630.54 & -1.06 &...\\
f10E05d4 & 3.69 & 3.22 & 0.88 & 0.44 & 2.28 & 4.87 & 6.59 & 37.22 & 3.96 & 35.25 & 663.58 & -18.67 &C7.6\\
f10E10d4 & 6.96 & 3.02 & 0.73 & 0.39 & 2.56 & 6.15 & 12.22 & 76.78 & 6.56 & 98.93 & 1149.53 & 6.68 &A8.9\\
f10E15d4 & 7.67 & 2.49 & 0.54 & 0.34 & 1.71 & 5.19 & 9.70 & 81.54 & 7.64 & 86.92 & 1471.62 & 47.35 &...\\
f10E20d4 & 6.46 & 2.69 & 0.57 & 0.35 & 1.77 & 6.36 & 8.21 & 108.33 & 9.10 & 28.99 & 1602.69 & 43.00 &...\\
f10E25d4 & 4.71 & 2.83 & 0.58 & 0.36 & 1.78 & 6.26 & 7.36 & 106.08 & 6.38 & 17.66 & 1595.74 & 1.91 &...\\
f10E05d5 & 3.23 & 2.90 & 0.81 & 0.41 & 2.02 & 4.12 & 6.08 & 26.29 & 3.24 & 47.38 & 537.83 & -21.48 &C9.2\\
f10E10d5 & 6.45 & 3.05 & 0.77 & 0.40 & 2.56 & 6.14 & 11.55 & 71.84 & 6.46 & 88.16 & 1070.65 & 5.26 &B4.5\\
f10E15d5 & 8.10 & 2.25 & 0.49 & 0.32 & 1.74 & 4.70 & 9.77 & 73.06 & 7.49 & 107.63 & 1453.28 & 59.19 &...\\
f10E20d5 & 6.90 & 2.49 & 0.52 & 0.33 & 1.69 & 6.23 & 8.33 & 83.67 & 9.20 & 33.70 & 1242.40 & 47.67 &...\\
f10E25d5 & 5.49 & 2.64 & 0.52 & 0.33 & 1.70 & 6.70 & 7.16 & 102.43 & 4.20 & 6.92 & 1428.60 & -37.30 &...\\
f10E05d6 & 2.83 & 2.66 & 0.76 & 0.39 & 1.83 & 3.61 & 5.72 & 20.54 & 2.74 & 58.61 & 469.43 & -23.89 &C9.8\\
f10E10d6 & 6.08 & 3.05 & 0.77 & 0.40 & 2.56 & 5.96 & 11.15 & 66.55 & 6.22 & 87.02 & 1016.10 & 4.35 &C1.4\\
f10E15d6 & 8.47 & 2.08 & 0.44 & 0.29 & 1.73 & 4.38 & 10.60 & 62.16 & 7.24 & 141.69 & 1317.65 & 65.14 &...\\
f10E20d6 & 7.45 & 2.36 & 0.50 & 0.32 & 1.72 & 6.07 & 8.40 & 71.96 & 7.76 & 38.40 & 1086.18 & 27.93 &...\\
f10E25d6 & 6.06 & 2.53 & 0.50 & 0.31 & 1.62 & 6.87 & 7.05 & 96.32 & 2.94 & 2.62 & 1302.35 & -57.26 &...\\
f10E05d7 & 2.56 & 2.52 & 0.73 & 0.38 & 1.70 & 3.29 & 5.52 & 17.28 & 2.41 & 67.87 & 425.43 & -26.70 &C9.9\\
f10E10d7 & 5.74 & 3.04 & 0.77 & 0.39 & 2.51 & 5.73 & 10.89 & 62.02 & 5.97 & 90.26 & 983.12 & 4.33 &C2.1\\
f10E15d7 & 8.51 & 2.04 & 0.39 & 0.28 & 1.74 & 4.07 & 24.17 & 49.62 & 6.99 & 494.22 & 1119.93 & 71.87 &...\\
f10E20d7 & 7.97 & 2.28 & 0.48 & 0.31 & 1.70 & 5.85 & 8.23 & 63.71 & 5.05 & 40.67 & 989.04 & -13.69 &...\\
f10E25d7 & 6.54 & 2.47 & 0.49 & 0.31 & 1.52 & 6.89 & 6.99 & 89.27 & 2.35 & 1.48 & 1195.81 & -65.92 &...\\ \hline
\multicolumn{14}{c}{Models for test}\\ \hline
f11E15d3 & 18.18 & 11.30 & 1.79 & 0.76 & 5.82 & 26.65 & 88.13 & 613.66 & 58.06 & 230.69 & 2202.68 & 117.88 &M7.2\\
f11E20d4 & 18.00 & 13.35 & 2.13 & 0.91 & 4.84 & 25.43 & 99.28 & 633.49 & 53.25 & 290.35 & 2390.70 & 109.35 &M3.2\\
f11E25d5 & 15.64 & 14.11 & 2.56 & 1.01 & 5.49 & 28.02 & 58.06 & 688.50 & 48.53 & 107.21 & 2357.03 & 73.19 &C1.6\\ \hline
\end{tabular}}
\begin{tablenotes}
\footnotesize
\item $^1$ Flare classes below A1.0 are not labelled.
\end{tablenotes}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{loss-eps-converted-to.pdf}
\caption{Probability density function of the thin radiative loss function (\ion{H}{I}, \ion{Ca}{II}, and \ion{Mg}{II}) as a function of temperature. Colored lines show relations from the recipe of \citetalias{2012carlsson} (red), the adopted fit of the PDF (yellow), {and cases with negligible collisional deexcitation rates (blue, Eq.~(4) of \citetalias{2012carlsson}).}}
\label{loss}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{escape-eps-converted-to.pdf}
\caption{Probability density function of the escape probability (\ion{H}{I}, \ion{Ca}{II}, and \ion{Mg}{II}) as a function of {the column density of the specific ion}. Colored lines show relations from the recipe of \citetalias{2012carlsson} (red) and the adopted fit of the PDF (yellow). {Note that in the middle and right panels, the results are plotted as a function of column density of \ion{Ca}{II} and \ion{Mg}{II}, respectively, and thus we do not plot the results from the recipe of \citetalias{2012carlsson}.}}
\label{esc}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fraction-eps-converted-to.pdf}
\caption{Probability density function of the population fractions (\ion{H}{I}, \ion{Ca}{II}, and \ion{Mg}{II}) as a function of temperature. The panels in the upper row are zoomed-in views of the panels in the lower row for the temperature range $T<30$ kK. Colored lines show the results from the recipe of \citetalias{2012carlsson} (red), the coronal approximation with a two-level atom (blue), the Saha equation with a constant $n_\mathrm{e}=10^{13}$ cm$^{-3}$ (green), and the adopted fit of the PDF (yellow).}
\label{frac}
\end{figure*}
The recipe of \citetalias{1990gan} is constructed from only two semi-empirical models, and it is not clear whether these models cover the range of variation of solar flares. Moreover, the recipe takes the height $z$ as an input, while in actual simulations the height and thickness of the chromosphere are not necessarily the same as in the semi-empirical models. The recipe of \citetalias{2012carlsson}, on the other hand, is based on the quiet-Sun atmosphere, and it remains to be evaluated how well it works during flares, where the atmospheric conditions are quite different. In this paper, we follow the steps of \citetalias{2012carlsson} and redo the fits of the three parameters from a grid of flare simulations. We briefly introduce our method in Section~\ref{sect2}. The fitting results and comparisons with other recipes are shown in Section~\ref{sect3}. We give our conclusions in Section~\ref{sect4}.
\section{Method}
\label{sect2}
We employ a grid of 25 flare models generated with the radiative hydrodynamics code \verb"RADYN" \citep{1992carlsson,1995carlsson,1997carlsson,2002carlsson}.
{The flare loop is assumed to be symmetric; thus, half of the loop is modeled as a quarter circle of 10 Mm length. }
In each flare model the initial quiet-Sun atmosphere is heated by a beam of non-thermal electrons injected from the loop top \citep{2015allred}.
{The initial temperature at the loop top is 1 MK.}
The beam electrons are assumed to have a power-law energy distribution, described by three parameters: the electron flux $F$, the spectral index $\delta$, and the cutoff energy $E_c$. The electron flux $F$ is a triangular function of time, with a 10 s rise and a 10 s decay, and the peak flux is $10^{10}$ erg cm$^{-2}$ s$^{-1}$ for all models. The spectral index $\delta$ varies from 3 to 7, and the cutoff energy $E_c$ varies from 5 to 25 keV.
{These 25 models are used to fit the three parameters in Eq.~(\ref{cl12}), and we use another three models with a larger peak electron flux ($10^{11}$ erg cm$^{-2}$ s$^{-1}$) to test the recipe (see Sec.~\ref{sect32}).}
{These models are labeled as f$n_1$E$n_2$d$n_3$ in Table~\ref{flux}, where the numbers $n_1$, $n_2$ and $n_3$ correspond to the values of $\log F$ at peak time, the cutoff energy $E_c$ and the spectral index $\delta$, respectively.}
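The beam parameterization described above can be sketched as follows. The triangular time profile and the power-law lower cutoff are as stated in the text; the normalization convention of the spectrum is our assumption for illustration.

```python
import numpy as np

def beam_flux(t, f_peak=1.0e10, t_rise=10.0, t_total=20.0):
    """Triangular time profile: 10 s linear rise, 10 s linear decay
    (erg cm^-2 s^-1)."""
    if t < 0.0 or t > t_total:
        return 0.0
    return f_peak * (t / t_rise if t <= t_rise
                     else (t_total - t) / (t_total - t_rise))

def beam_spectrum(E, E_c=15.0, delta=5.0):
    """Power-law energy distribution above the cutoff E_c (keV), normalized
    (by assumption) so that its integral over E >= E_c equals 1."""
    E = np.asarray(E, dtype=float)
    return np.where(E >= E_c, (delta - 1.0) / E_c * (E / E_c) ** (-delta), 0.0)

# Sanity check: the distribution integrates to ~1 above E_c (Riemann sum)
E_grid = np.linspace(15.0, 500.0, 200001)
norm = np.sum(beam_spectrum(E_grid)) * (E_grid[1] - E_grid[0])
```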
Each simulation is run for 20 s, and a snapshot is saved to the output every 0.1 s. The \verb"RADYN" outputs are then fed into \verb"RH" \citep{2001uitenbroek,2015pereira} to calculate the \ion{Ca}{II} and \ion{Mg}{II} lines, taking into account the effect of partial frequency redistribution. {In RH we assume statistical equilibrium.}
The H {model} atom used in our \verb"RADYN" simulations is the same as in \citetalias{2012carlsson}, but for the Lyman series a Gaussian profile is used instead of a Voigt profile to mimic the effect of partial frequency redistribution \citep{2012leenaarts}. The \ion{Ca}{II} {model} atom is the same as in \citetalias{2012carlsson} and the \ion{Mg}{II} {model} atom is the same as in \citet{2013leenaarts}.
{The radiative losses from H are calculated from the RADYN outputs, and the losses from Ca and Mg from the RH outputs.} We consider contributions from all H lines between the lowest five energy levels, the Lyman continuum, the \ion{Ca}{II} H/K and triplet lines, as well as the \ion{Mg}{II} h/k lines. The Balmer and higher continua of H are not considered here because they do not fit the form of the recipe, but we discuss in Section~\ref{sect3.4} that their contribution is not negligible and should be modeled properly.
{The time-averaged radiation fluxes of various spectral lines from these flare models are summarized in Table~\ref{flux}. In these models, the energy in the chromosphere is mostly dissipated through Ly$\alpha$, H$\alpha$, and \ion{Mg}{II} photons \citep{1980machado}. The flare class is estimated from the synthetic GOES 1--8 \AA\ flux, calculated following \citet{2020kerr}, with the cross section of the flare loop assumed to be $4\times10^{15}$ cm$^2$ (a diameter of 1\arcsec). Note that our flare models only consider heating from non-thermal electrons; thermal electrons with energies below the cutoff $E_c$ are all neglected. The coronal emission is thus lower than expected, since these thermal electrons could heat the corona efficiently \citep{2018polito}. Therefore, the calculated soft X-ray flux, which mainly originates from coronal emission, may be considerably underestimated if $E_c$ is large. Special care must be taken when comparing the flare class of the models with real observations.}
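The flare classes in Table~\ref{flux} follow the standard GOES convention, in which the class letter encodes the decade of the 1--8 \AA\ peak flux (A $=10^{-8}$, B $=10^{-7}$, C $=10^{-6}$, M $=10^{-5}$, X $=10^{-4}$ W m$^{-2}$). A hypothetical helper (not part of the paper's code) illustrating the mapping:

```python
def goes_class(flux_wm2):
    """Convert a GOES 1-8 A peak flux (W m^-2) to a flare-class string,
    e.g. 2.7e-6 -> 'C2.7'."""
    thresholds = [("X", 1e-4), ("M", 1e-5), ("C", 1e-6), ("B", 1e-7), ("A", 1e-8)]
    for letter, base in thresholds:
        if flux_wm2 >= base:
            return f"{letter}{flux_wm2 / base:.1f}"
    return "<A1.0"  # below A1.0, consistent with the unlabelled table entries

print(goes_class(2.7e-6))  # C2.7, the class of model f10E05d3
```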
\begin{figure*}
\centering
\includegraphics[width=\textwidth,trim=0 0 0 40]{radloss-eps-converted-to.pdf}
\caption{Comparison of chromospheric radiative losses calculated from detailed solutions and with different approximate recipes for four simulation cases. Blue colors denote radiative cooling, and red colors denote radiative heating. {The line plots give comparisons of the evolutions of the radiative cooling rates integrated from 0.5 Mm to 2.0 Mm.}}
\label{radloss}
\end{figure*}
\section{Results}
\label{sect3}
\subsection{Empirical fitting results of the parameters}
As shown in Eq.~(\ref{cl12}), there are three parameters that need to be fitted empirically, the optically thin loss function $L_{X_m}$, the escape probability $E_{X_m}$, and the ionization fraction $\frac{n_{X_m}}{n_X}$.
The probability density functions (PDFs) of the optically thin loss function are shown in Fig.~\ref{loss}. The radiative losses should equal the total collisional excitation from the ground state if the collisional deexcitation rates are small enough, which is the case in the quiet Sun for temperatures above 10 kK \citep{2012carlsson}. In flare cases, the fitted curves have shapes similar to the results of \citetalias{2012carlsson}, despite a larger spread at low temperatures. In addition, {the curves for cases with negligible collisional deexcitation rates (calculated from Eq.~(4) of \citetalias{2012carlsson})} and the fitted curves overlap only above a temperature much higher than 10 kK.
The PDFs of the escape probability are shown in Fig.~\ref{esc}. In \citetalias{2012carlsson}, the escape probability of \ion{H}{I} is tabulated as a function of the approximate optical depth of the Ly$\alpha$ line center ($\tau=4\times10^{-14}n_{c,\ion{H}{I}}$, with $n_{c,\ion{H}{I}}$ the column density of \ion{H}{I}), while the escape probability of \ion{Ca}{II} and \ion{Mg}{II} is tabulated as a function of column mass $m_c$. In flare conditions, the \ion{Ca}{II} and \ion{Mg}{II} atoms in the chromosphere are more likely to be ionized, and it is therefore not appropriate to use the column mass to approximate the optical depths of these lines. In our recipe we instead use the column densities of \ion{H}{I}, \ion{Ca}{II}, and \ion{Mg}{II} to tabulate their escape probabilities. There is a larger spread in the PDFs, and the curve of the \ion{H}{I} escape probability is steeper than that of \citetalias{2012carlsson}. The fitted curves show small dips at low column densities, a result of chromospheric condensation regions where the local density is high enough to partly block photons.
The PDFs of the ionization fraction are shown in Fig.~\ref{frac}. It is clear that the empirical relations of \citetalias{2012carlsson} no longer hold for typical chromospheric temperatures ($T =10^4$--$10^5$ K). In the flaring chromosphere, the increased electron density (on the order of $10^{13}$ cm$^{-3}$) greatly enhances the collisional rates, and the local atmosphere is driven towards local thermodynamic equilibrium. The fitted curves of \ion{H}{I} and \ion{Ca}{II} at low temperatures are very close to Saha equilibrium. At high temperatures ($T>10^5$ K), the fitted curve of \ion{H}{I} follows that of \citetalias{2012carlsson}, while the fitted curves of \ion{Ca}{II} and \ion{Mg}{II} follow the coronal approximation with a two-level atom. The humps in the curves of \ion{Ca}{II} and \ion{Mg}{II} near $T =10^{5.2}$ K result from dielectronic recombination.
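The statement that hydrogen is strongly ionized at the $10^4$ K plateau can be checked with the Saha equation at a fixed electron density. The sketch below uses standard CGS constants and is illustrative only; it is not the detailed non-LTE solution used in this work.

```python
import numpy as np

def saha_neutral_fraction(T, n_e=1.0e13):
    """Neutral-hydrogen fraction n_HI/n_H at temperature T (K) and fixed
    electron density n_e (cm^-3), from the Saha equation. The
    statistical-weight factor 2 U_II/U_I equals 1 for hydrogen;
    chi/k = 1.578e5 K; 2.415e15 = (2 pi m_e k / h^2)^(3/2) in CGS."""
    ratio = 2.415e15 / n_e * T**1.5 * np.exp(-1.578e5 / T)  # n_HII / n_HI
    return 1.0 / (1.0 + ratio)

# At 1e4 K with n_e = 1e13 cm^-3 hydrogen is ~97% ionized,
# while at 6000 K it is almost entirely neutral.
```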
\begin{figure*}
\centering
\includegraphics[width=0.65\textwidth, trim=0 0 0 40]{cont-eps-converted-to.pdf}
\caption{Same as Fig.~\ref{radloss}, but for chromospheric radiative losses contributed by the \ion{H}{I} Balmer to Pfund continua.
}
\label{cont}
\end{figure*}
\subsection{Comparison with other recipes}
\label{sect32}
{We choose four flare models to test the performance of the different recipes. The first one (f10E15d5) belongs to our fitting dataset. The other three models have a peak electron flux one order of magnitude larger (f11, $10^{11}$ erg cm$^{-2}$ s$^{-1}$), with varying values of $E_c$ and $\delta$ (Table~\ref{flux}).}
In Fig.~\ref{radloss} we compare the detailed radiative losses calculated in the four flare models with the approximate results from the different recipes. { We also integrate the total radiative losses spatially (from 0.5 Mm to 2.0 Mm) and show their time evolution.} A striking feature of Fig.~\ref{radloss} is that, besides radiative cooling, there is also strong radiative heating in the atmosphere, which is discussed further in Sect.~\ref{sect3.3}.
Generally speaking, the recipe of \citetalias{1990gan} underestimates the radiative cooling in the mid-chromosphere (1.0--1.5 Mm), while in the upper chromosphere the radiative cooling is overestimated. {The time evolution of the spatially integrated radiative cooling shows multiple peaks, owing to an inaccurate estimation of the radiative cooling near the transition region.} As for the recipe of \citetalias{2012carlsson}, the radiative cooling is overestimated by up to 1--2 orders of magnitude due to an overestimation of the \ion{H}{I} population fraction in the chromosphere. The results from our recipe are more reasonable, although the estimated cooling can be larger than the actual cooling by a factor of 3--5 in the f11 models (peak electron flux of $10^{11}$ erg cm$^{-2}$ s$^{-1}$).
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{hion-eps-converted-to.pdf}
\caption{Probability density functions of the thin radiative loss function and the escape probability for the H continua (Balmer to Pfund), as well as the population fraction of \ion{H}{II}, as functions of temperature and column mass. Orange lines show the adopted fit, {and the blue dashed curve in the top panel shows the thin radiative loss function under the coronal approximation.}
}
\label{hion}
\end{figure}
{The temporally and spatially integrated total radiative losses from the different recipes, as well as the corresponding errors in each flare model, are also listed in Table~\ref{flux}. For most flare models, the results from our recipe are the closest to the detailed solutions. The recipe of \citetalias{1990gan} can produce an overall good approximation of the integrated total radiative losses, but the height distribution of the radiative losses is quite different from the actual one (Fig.~\ref{radloss}). In flare models with a large cutoff energy ($E_c=25$ keV) and a large spectral index ($\delta\ge5$), our recipe underestimates the radiative cooling, but still by less than an order of magnitude. The reason is that in these models, a region with low temperature, high density, and a high ionization degree becomes trapped between the transition region and the heated chromosphere, referred to as a ``chromospheric bubble'' \citep{2020reid}. In our recipe, a low temperature implies a low ionization degree, so the column densities of \ion{H}{I}, \ion{Ca}{II}, and \ion{Mg}{II} are overestimated there and the radiative losses are underestimated. }
\begin{table}
\caption{Total radiative fluxes of the chromosphere from the H continua (Balmer to Pfund) in different flare models.}
\label{contflux}
\centering
\begin{tabular}{c | rr | r}
\hline
flare & \multicolumn{2}{c|}{$\mathcal{F}^\mathrm{tot}_\mathrm{cont}$ ($10^9$ erg cm$^{-2}$)} & $\Delta \mathcal{F}^\mathrm{tot}_\mathrm{cont}$ (\%) \\
model & detailed & this work & this work \\ \hline
\multicolumn{4}{c}{Models for fitting}\\ \hline
f10E05d3 & 9.00 & 8.36 & -7.09 \\
f10E10d3 & 11.05 & 8.55 & -22.62 \\
f10E15d3 & 14.59 & 10.82 & -25.88 \\
f10E20d3 & 17.81 & 12.91 & -27.55 \\
f10E25d3 & 21.48 & 15.37 & -28.44 \\
f10E05d4 & 5.40 & 6.33 & 17.27 \\
f10E10d4 & 7.42 & 7.03 & -5.22 \\
f10E15d4 & 8.97 & 7.76 & -13.45 \\
f10E20d4 & 12.24 & 10.08 & -17.66 \\
f10E25d4 & 16.17 & 12.92 & -20.07 \\
f10E05d5 & 3.76 & 4.81 & 27.80 \\
f10E10d5 & 6.04 & 6.44 & 6.48 \\
f10E15d5 & 6.08 & 5.84 & -3.88 \\
f10E20d5 & 8.98 & 7.93 & -11.70 \\
f10E25d5 & 12.56 & 10.62 & -15.57 \\
f10E05d6 & 2.93 & 3.93 & 34.19 \\
f10E10d6 & 5.40 & 6.09 & 12.83 \\
f10E15d6 & 4.53 & 4.60 & 1.62 \\
f10E20d6 & 7.10 & 6.62 & -6.87 \\
f10E25d6 & 10.44 & 9.11 & -12.82 \\
f10E05d7 & 2.48 & 3.45 & 39.01 \\
f10E10d7 & 5.01 & 5.83 & 16.38 \\
f10E15d7 & 3.78 & 3.85 & 2.11 \\
f10E20d7 & 5.98 & 5.77 & -3.48 \\
f10E25d7 & 9.10 & 8.11 & -10.84 \\ \hline
\multicolumn{4}{c}{Models for test}\\ \hline
f11E15d3 & 147.04 & 122.94 & -16.39\\
f11E20d4 & 109.77 & 110.19 & 0.38\\
f11E25d5 & 87.93 & 76.61 & -12.87\\ \hline
\end{tabular}
\end{table}
\subsection{Radiative heating}
\label{sect3.3}
As can be seen in Fig.~\ref{radloss}, there are regions where the radiative flux contributes to the internal energy as radiative heating, through absorption of photons emitted from nearby regions. These regions are always adjacent to regions with large radiative cooling \citep{1980machado,2012carlsson}, and the magnitude of the heating scales with the magnitude of the cooling in the adjacent regions. In flares, radiative heating is mainly contributed by Ly$\alpha$ and the Lyman continuum, as well as the resonance lines of \ion{Ca}{II} and \ion{Mg}{II}. The contribution of Ly$\alpha$ dominates at the beginning of flare heating, while at later times the contribution of the Lyman continuum dominates. Heating from the \ion{Ca}{II} H/K lines is partly compensated by cooling from the \ion{Ca}{II} triplet lines.
Our recipe, like the previous ones, does not account for radiative heating. Including both radiative cooling and radiative heating in one unified empirical formula is a great challenge, because it is very difficult to locate the regions of radiative heating: there seems to be no quick way to determine whether the atmosphere at a given height is radiatively heated or cooled. However, we do find that for flare models with the same cutoff energy $E_c$, the location of the radiative heating region is roughly the same. Therefore, for a specific $E_c$, one might obtain an empirical relation by fitting both positive and negative escape probabilities together, but such a relation is not valid for other values of $E_c$, and it is not applicable to actual self-consistent MHD simulations.
{Incident radiation of the optically thin radiative losses from the corona can also heat the chromosphere \citep{1994wahlstrom,2015allred}. In flares, the magnitude of incident radiation energy could be comparable to the amount of radiative losses from spectral lines \citep{1994hawley,2015allred}. An approximation method for this heating is described in \citetalias{2012carlsson}.}
\subsection{Cooling from Balmer and higher continua}
\label{sect3.4}
Neither the recipe above nor that of \citetalias{2012carlsson} includes cooling from the Balmer and higher continua, since these continua are optically much thinner than the \ion{H}{I} lines and the Lyman continuum, and a single unified escape-probability curve is therefore not a reasonable treatment. Their contributions to the radiative losses are shown in Fig.~\ref{cont}. In flare cases, the region with large cooling from the Balmer and higher continua lies below the region with large cooling from the \ion{H}{I} lines and Lyman continuum, as shown in previous calculations \citep{1986avrett,1994hawley,2019prochazka}. In these cases radiative heating from Ly$\alpha$ and the Lyman continuum is compensated to some extent by radiative cooling from the Balmer and higher continua.
{We list the spatially and temporally integrated radiative losses from the Balmer and higher continua for the different flare models in Table~\ref{contflux}.} We find that the total cooling from the Balmer and higher continua can exceed the total cooling from the \ion{H}{I} lines and Lyman continuum, especially after the chromosphere has been heated for a certain time \citep{1986avrett,1994hawley,2019prochazka}. Although the recipe of \citetalias{1990gan} is constructed from semi-empirical models, it does not appear to estimate the cooling from the \ion{H}{I} continua accurately. It is therefore important to correctly include cooling from the Balmer and higher continua.
{Radiative cooling from these continua can be approximated with Eq.~(\ref{cl12}) in a similar way. The PDFs of the thin radiative loss function and the escape probability are shown in Fig.~\ref{hion}, together with the fitted curves. The ionization fraction of \ion{H}{II} is fitted accordingly. The escape probability for continuum photons is close to 1 in the chromosphere, while deep in the photosphere the probability decreases as $\mathrm{e}^{-\tau}$, where $\tau$ is an approximate optical depth of these continua. We set $\tau$ proportional to the column mass $m_c$, $\tau=\alpha m_c$, and a least-squares fit to the PDF gives $\alpha=3.05\times10^2$. The radiative cooling from the Balmer and higher continua calculated in this way is also shown in Fig.~\ref{cont} and Table~\ref{contflux}, and it turns out to be a good approximation.}
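The continuum escape probability adopted above, $E=\mathrm{e}^{-\tau}$ with $\tau=\alpha m_c$ and the fitted $\alpha=3.05\times10^2$, is trivial to evaluate; a minimal sketch:

```python
import numpy as np

def continuum_escape(m_c, alpha=3.05e2):
    """Escape probability for Balmer and higher continuum photons as a
    function of column mass m_c (g cm^-2): E = exp(-alpha * m_c)."""
    return np.exp(-alpha * np.asarray(m_c, dtype=float))

# Close to 1 at small chromospheric column masses, and dropping steeply
# towards the photosphere.
```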
\subsection{Cooling from other sources}
{In our recipe, we only consider H, Ca, and Mg as cooling sources in the chromospheric energy balance. \citet{1989anderson} showed that in the quiet Sun, the abundant iron lines can also contribute significantly to the total radiative losses. Their contributions under flare conditions might also be important, since the NUV \ion{Fe}{II} lines are enhanced during flares \citep{2017kowalski,2020graham}. Other candidates include the strong \ion{He}{I} 10830 \AA\ line in flares, formed in the mid-to-upper chromosphere and modulated by the incident coronal radiation \citep{2005ding,2014golding,2021kerr}. Further investigations are required to quantify the contributions from these lines.}
\section{Conclusion}
\label{sect4}
In this paper, we provide a new recipe to calculate the chromospheric radiative losses, based on the recipe of \citetalias{2012carlsson} but adapted to a flaring atmosphere. We redo the fits from a grid of flare models and tabulate the three parameters (optically thin radiative loss, escape probability, and ionization fraction) as functions of temperature or column density. The largest difference between our recipe and that of \citetalias{2012carlsson} lies in the empirical curve of the ionization fraction. In the $10^4$ K temperature plateau of the flaring chromosphere, hydrogen is mostly ionized as a result of the increased collisional ionization rates.
The radiative cooling calculated from our recipe is a good approximation of the actual cooling in flares, especially in regions with large cooling values, while \citetalias{1990gan} tends to underestimate and \citetalias{2012carlsson} tends to overestimate the cooling rate. {It is noted that the \ion{H}{I} Balmer and higher continua can also contribute significantly to the radiative cooling in flares, and they can be approximated in a similar way. Our recipe is valid for flares with non-thermal electron peak fluxes in the range of 10$^{10}$--10$^{11}$ erg cm$^{-2}$ s$^{-1}$.} Nevertheless, our recipe is not intended for all kinds of solar activity, and it might not work well under atmospheric conditions far from those of flares.
Our recipe currently cannot account for radiative heating from Ly$\alpha$ and the Lyman continuum, and further study is required to determine the influence of neglecting radiative heating in the flaring chromosphere.
\begin{acknowledgements}
This work was supported by National Key R\&D Program of China under grant 2021YFA1600504 and by NSFC under grants 11903020, 11733003, and 12127901, and by the Research Council of Norway through its Centres of Excellence scheme, project number 262622, and through grants of computing time from the Programme for Supercomputing.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Persuasive dialogue systems are designed for chatbots to communicate with and influence users towards specific goals. Such systems are often designed to benefit individual users (e.g., promoting healthy behaviors) or society at large (e.g., persuading people to make donations). \citet{wang2019persuasion} introduced this idea with the \textsc{PersuasionForGood } dataset, which contains 1,017 human-human conversations in which one participant persuades the other to donate to the charitable organization \textit{Save the Children}\footnote{https://www.savethechildren.net/}; 300 of these conversations have every sentence annotated with a dialogue act. The social and communicative dynamics behind persuasive conversation contexts are complex. A persuasive conversation by definition involves one party, the persuader, intending to change the attitude or behavior of the other party, the persuadee. Changing a persuadee's attitude can involve multiple strategies, such as establishing mutual trust and credibility, strategically presenting persuasive appeals, and eliciting emotional reactions from the persuadee~\cite{o2015persuasion,wilson2003perceived}.
\begin{figure}
\centering
\vspace{10pt}
\includegraphics[width=0.9\linewidth]{figs/Figs-Page-6.drawio-cropped.pdf}
\caption{Chatbot running on the baseline BART model and chatbot running on RAP responding to the same user utterance. The baseline model does not appropriately acknowledge the user's statement, whereas RAP is able to show acknowledgement and respond appropriately.}
\vspace{-10pt}
\label{fig:comparison}
\end{figure}
Thus, an effective and successful persuasive conversation does not just mechanically relay task-related information to the persuadee. There has to be a significant exchange of social and emotional content that addresses the persuadee's concerns, answers specific questions, and develops positive relationships throughout the conversation. For this reason, persuasive conversations are not strictly task-oriented, but are built around tasks with additional social conversational strategies. In essence, persuasive conversations have two goals: one that is task-oriented to elicit behavioral changes, and another that is social-oriented to build trust and empathy and develop positive relationships in order to better navigate the persuasive context. Moreover, whereas task-oriented systems for dialogue contexts such as MultiWOZ~\cite{budzianowski2018multiwoz,qian2021annotation,zang2020multiwoz} and the Schema-Guided Dataset~\cite{rastogi2020towards,chen2020schema} typically contain one intent per turn, more complex contexts like \textsc{PersuasionForGood } typically have several dialogue acts in a single turn to account for the reflective and persuasive nature of the conversation. In this work, we propose the Response-Agenda Pushing Framework (\textsc{RAP}) for persuasive dialogue systems, which can explicitly handle these two goals. \textsc{RAP } hierarchically splits the task into ensuring a user is heard (via a module for answering factual questions and a module for providing social acknowledgement) and pushing a persuasive agenda (via an ``agenda pushing'' module). Compared to state-of-the-art end-to-end conditional generation models, \textsc{RAP } is more semantically coherent and persuasive, while being generalizable to any dataset annotated with dialogue acts. In addition, we tackle the challenge of performing multiple-sentence conditional generation in a single turn given one pragmatic argumentative strategy (e.g., ``emotional appeal'').
\section{Related Work}
Much earlier work in persuasion-like social conversations has been towards building dialogue systems for negotiation tasks, e.g., using the Craigslist Bargaining~\cite{he2018decoupling} and Deal or No Deal datasets~\cite{lewis-etal-2017-deal}. However, in negotiation tasks, the goal is to come to a consensus, whereas in persuasion tasks, the target result is a one-sided change or a ``win'' for the persuader, as in a debate. Recently, there has been increasing interest in persuasive dialogues because of the rise in online-mediated persuasion scenarios (e.g., online sales, health promotion, political debates); much work focuses on understanding the social dynamics behind online persuasive conversations on social media platforms like Reddit (e.g.,~\citet{atkinson-etal-2019-gets, musi2018did, srinivasan2019content, tan2016winning}). In addition, a burgeoning line of work has been invested in developing chatbots to deliver healthcare remotely and to persuade people to adopt healthier lifestyles~\cite{oh2021systematic,zhang2020artificial}. Such efforts have inspired a growing body of work towards building persuasive dialogue systems that are \textit{conditional, strategic and factual} to benefit individuals and society at large.
\input{tables/persuasion_examples_2}
Many early iterations of persuasive dialogue systems have used template-based \cite{zhao2018sogo} or retrieval-based \cite{hiraoka2015evaluation} utterance generation methods. \citet{wang2019persuasion} introduced \textsc{PersuasionForGood } and proposed designing a personalized persuasive dialogue system. \citet{wu-etal-2021-alternating} leveraged alternating pre-trained language models that separately model both speakers in a conversation, finding success in creating human-like utterances without supervision (from human annotations). \citet{li2020end} proposed an end-to-end model for tasks like persuasion, which have one-sided, non-collaborative goals. However, in such an approach, there is less semantic control over generated utterances, as they are not guaranteed to follow a particular persuasive strategy or dialogue act. In tasks beyond persuasion, conditional text generation has emerged as a popular method of controllable generation for more coherent and ``harmonious'' human-dialogue system interactions \cite{guo2021conditional}. Much earlier work in sentence-level conditional text generation has facilitated control by conditioning on entire topic statements or keyphrases \cite{hua2019sentence}. \textit{In this work, we propose using conditional generation conditioned on individual pragmatic dialogue acts to specifically control the strategic flow of a persuasive conversation.}
Other existing work in persuasion tasks has focused on strategy/policy planning \cite{georgila2011reinforcement, sakai2020hierarchical}, while \citet{CHEN202147} attempted to use RoBERTa with conditional random fields to classify persuasion strategies. Other work discussed challenges in building dialogue systems that are social in nature, stating that unlike task-oriented dialogue systems, open-domain social dialogue systems should form a consistent personality to develop users' trust, satisfy the human need for affection and social belonging, and generate interpersonal responses \cite{huang2020challenges,zhou2020design} suitable for any type of input~\cite{higashinaka2014towards}. Consistent with this need for affection and acknowledgement, \citet{zhang-danescu-niculescu-mizil-2020-balancing} find that in the setting of crisis counseling, it is necessary to balance the goals of both ``empathetically addressing the crisis situation'' and ``advancing the conversation towards a resolution.'' Additionally, \citet{sun-etal-2021-adding} were able to create more engaging task-oriented dialogues by adding ``chit-chat.'' This suggests that balancing the need for human acknowledgement with advancing towards the goal of the conversation may improve persuasion outcomes. \textit{We propose interweaving social content prior to pushing a conversational agenda in order to improve coherence, friendliness, and persuasiveness.}
Retrieval-based dialogue systems have long been considered one of the core classes of conversational systems~\cite{banchs2012iris}, often used specifically for question answering~\cite{gao2019neural} due to their ability to return ``fluent and informative responses'' \cite{yang2019hybrid}. However, recent work has directly improved open-domain dialogue systems by ensembling retrieval methods (e.g., database queries) with neural generation methods. Given a query, \citet{DBLP:journals/corr/SongYLZZ16} used database retrieval before feeding candidate responses into a bi-sequence to sequence model to improve relevance. \citet{yang2019hybrid} queried a conversation context-response database built from their training data and additionally passed each conversation context through a generation unit before finally separately ranking their candidate responses. \textit{Thus, we propose retrieving factual information to improve a persuasive dialogue system's ability to consistently and coherently address user questions, which may lead to improved perceptions of intelligence, coherence, and trustworthiness.}
\section{Dataset}
We conduct this study on the 300 annotated conversations in the \textsc{PersuasionForGood } dataset. In each conversation, one person, the ``persuader,'' tries to convince their conversational partner, the ``persuadee,'' to donate to Save the Children. The conversations last for at least 10 turns, and a user's utterance during a turn contains at least one sentence. Each sentence is annotated with one of several dialogue acts, including inquiries (e.g. ``Have you donated to a charity before?'') and various persuasive appeals (e.g. ``I'll match your donation, and together we can double the amount!''). In this work, we build a system that acts as a persuader. The full list of persuader dialogue acts used is provided along with examples in Table~\ref{dialogue_acts}. Each dialogue act is explained in detail in \citet{wang2019persuasion}.
\begin{figure*}
\centering
\includegraphics[trim={0 0 0 0},clip,width=16cm]{figs/Figs-Architecture.png}
\caption{Overview of the \textsc{RAP } framework. The user's utterance is classified by the Dispatcher (orange module), which decides whether it should be sent to the Factual Answer Module, Social Response Module, or neither (blue modules). The output from this first layer is propagated into the inputs to the Persuasive Agenda Pushing Module (purple module). The outputs from the blue and purple modules are concatenated as the final system utterance.}
\label{fig:architecture}
\end{figure*}
\section{The RAP Framework}
The dynamics of a persuasive conversation fall between that of social dialogue and that of task-oriented dialogue. One could use social chatbots like Blenderbot \cite{komeili2021internet, xu2021beyond} to handle situations such as engaging users, and one could use conditional generation models like BART \cite{lewis-etal-2020-bart} to control the flow of dialogue. However, it is difficult for just one of these models to manage both goals.
We break down the problem of generating a persuasive response into two parts: 1) generating an utterance that \textit{responds} to users' comments, questions and concerns, and 2) generating an utterance that \textit{pushes the agenda} of a conversation. In this work we propose interweaving responses with agenda-pushing within the same turn, inspired by the goal of continuously balancing both goals as in \citet{zhang-danescu-niculescu-mizil-2020-balancing}.
As outlined in Figure~\ref{fig:architecture}, our framework comprises four core components: a dispatcher to decide which response modules to invoke, a factual answer module and a social response module to tackle the problem of responding to users, and an agenda-pushing model to control the flow of a persuasive conversation.
\subsection{The Dispatcher}
Upon receiving an utterance from a user, \textsc{RAP } first invokes the Dispatcher to decide which response module(s) to invoke. It
classifies the dialogue act of the user utterance using a dialogue act classifier from \citet{shi2020refine}. As shown in Figure~\ref{fig:architecture}, if the utterance includes a factual question or task-related inquiry, the Dispatcher will invoke the corresponding Factual Answer Module. If the dialogue act instead indicates that it is a statement that shows engagement with the chatbot, the Dispatcher will invoke the Social Response Module. The output of the Factual Answer and Social Response modules is propagated to the Agenda Pusher. If the utterance is neither a factual question nor an engaged statement, the Dispatcher will directly invoke the Agenda Pusher.
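The Dispatcher's routing behavior can be sketched as a simple lookup over the classified dialogue act. In this sketch, `classify_dialogue_act` is a hypothetical stand-in for the GPT-2-based classifier of \citet{shi2020refine}, and the act labels and set membership are illustrative, not the actual label inventory:

```python
# Illustrative act groupings -- the real label set comes from the
# PersuasionForGood annotation scheme, not these placeholder names.
FACTUAL_ACTS = {"ask-org-info", "ask-donation-procedure"}   # task-related inquiries
SOCIAL_ACTS = {"acknowledgement", "positive-reaction"}      # engaged statements

def dispatch(user_utterance, classify_dialogue_act):
    """Decide which response module to invoke before agenda-pushing.

    classify_dialogue_act: callable mapping an utterance to an act label
    (a stand-in for the dialogue act classifier of Shi et al., 2020).
    """
    act = classify_dialogue_act(user_utterance)
    if act in FACTUAL_ACTS:
        return "factual_answer"
    if act in SOCIAL_ACTS:
        return "social_response"
    return "agenda_only"   # neither: go straight to the Agenda Pusher
```

Whatever the route, the Agenda Pusher always runs afterwards; the Factual Answer and Social Response outputs merely feed into it.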
\subsection{The Factual Answer Module}
In order to maintain consistency in answers, we compute the cosine distance between Sentence-BERT \cite{reimers-gurevych-2019-sentence} embeddings of the user's question and a database of question-answer mappings from the training data. The database of question-answer mappings is also built using Sentence-BERT by aggregating the answers of all of the most similar questions. We retrieve the answer whose associated question has the lowest cosine distance to the question asked by the user.
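The retrieval step above reduces to a nearest-neighbor lookup under cosine distance. A minimal sketch follows; in the actual module the embeddings come from Sentence-BERT, whereas here they are placeholder vectors:

```python
import math

def cosine_distance(a, b):
    """1 minus the cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def retrieve_answer(question_embedding, qa_database):
    """Return the stored answer whose question embedding has the lowest
    cosine distance to the user's question embedding.

    qa_database: list of (question_embedding, answer) pairs built from
    the training data.
    """
    return min(qa_database, key=lambda qa: cosine_distance(question_embedding, qa[0]))[1]
```

Because the database is fixed at training time, the same question always maps to the same answer, which is what gives the module its consistency.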
\subsection{The Social Response Module}
The Social Response Module comprises a pretrained Blender Bot 2.0 instance with 3B parameters, an updated version of the open-domain BlenderBot social chatbot~\cite{roller-etal-2021-recipes}, that builds long-term memory and queries the internet\footnote{We use a publicly available implementation of Blender Bot 2.0 that makes use of Bing search available here: https://github.com/qywu/BlenderBot2}. We feed the model a context string consisting of the conversation history and generate a response in a zero-shot setting. We do not keep outputs that Blender Bot 2.0 labels as ``potentially unsafe.'' Finally, we still want to push the agenda of the conversation, regardless of whether or not the Social Response or Factual Answer modules were invoked to generate a directed response towards the user.
\subsection{The Persuasive Agenda-Pushing Module}
We ensure that the conversation follows a persuasive agenda using conditional generation with BART~\cite{lewis-etal-2020-bart}, a pre-trained Transformer-based language model.
If the Factual Answer or Social Response modules are invoked, the response is appended to the conversation history, which is given as input to the agenda-pushing model to ensure consistency and reduce repetition by blocking n-gram repeats.
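The n-gram blocking mentioned above prevents the model from re-emitting phrases already present in the (augmented) conversation history. A minimal sketch of the underlying rule, independent of any particular decoding library, is: a candidate token is banned whenever appending it would recreate an n-gram seen earlier in the sequence.

```python
def banned_next_tokens(history_tokens, n=3):
    """Return the tokens that may not be generated next under n-gram
    blocking: any token that would complete an n-gram already present
    in history_tokens. This is the rule behind options like HuggingFace's
    no_repeat_ngram_size, sketched here over a plain token list."""
    if len(history_tokens) < n - 1:
        return set()
    prefix = tuple(history_tokens[-(n - 1):])  # last n-1 tokens generated
    banned = set()
    for i in range(len(history_tokens) - n + 1):
        ngram = tuple(history_tokens[i:i + n])
        if ngram[:-1] == prefix:
            banned.add(ngram[-1])
    return banned
```

At each decoding step the banned set is recomputed from the sequence so far and those tokens' scores are masked out.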
\subsubsection{Conditional Generation Background}
For our agenda-pushing model, we fine-tuned BART on the \textsc{PersuasionForGood } dataset using HuggingFace's Transformers package~\cite{wolf-etal-2020-transformers}. However, it is not enough to just perform language modeling: \textit{an automated persuasive dialogue system should incorporate pragmatic persuasive strategies to control the flow of a conversation}. Thus, we draw inspiration from CTRL \cite{keskar2019ctrl}, a state-of-the-art Transformer model for conditional generation.
Traditionally, language modeling is framed as a problem of learning next-word prediction and the objective is to minimize the negative log likelihood, $L(D)$, over a dataset $D=\{x_1, x_2, ..., x_{|D|}\}$.
However, CTRL conditions on a control code $c$, reformulating next-word prediction as $P(x|c)$ (equation~\ref{Pxc}),
\begin{equation}
P(x|c) = \displaystyle \prod_{i=1}^n P(x_i|x_{<i}, c)
\label{Pxc}
\end{equation}
and reformulating the negative log likelihood conditionally (equation~\ref{Lc}).
\begin{equation}
L_c(D) = - \displaystyle \sum_{k=1}^{|D|} \sum_{i=1}^{n} \log(p_\theta(x_i^k|x_{<i}^k, c))
\label{Lc}
\end{equation}
\subsubsection{Conditional Generation with Pragmatic Persuasive Strategies}
\label{training}
In CTRL, the control codes were used to control aspects of language such as style and content. In our study, we create a system that conditions on pragmatic dialogue acts (e.g., persuasive strategies). The agenda of dialogue acts is listed in order in Table~\ref{dialogue_acts} along with an example of each.
This ordering was determined in \citet{wang2019persuasion} as the most probable dialogue act at each turn.
To this end, we fine-tune BART on the \textsc{PersuasionForGood } dataset, randomly selecting $80\%$ of the conversations as a training set and $10\%$ as a validation set. A design decision of note is the construction of each training instance. Since the \textsc{PersuasionForGood } dataset contains multiple sentences (and consequently, multiple dialogue acts) per turn, one must choose between having each training instance represent one sentence as the target utterance, or a concatenation of several sentences as the target utterance. We ultimately chose to follow the latter in order for the model to learn more coherent generation. However, multiple-sentence conditional text generation also results in a more complicated task than classic single-sentence generation tasks.
Drawing inspiration from~\citet{li2020end}, each training instance $i$ is ultimately represented as a concatenation of the \textit{history of the persuader and persuadee utterances}, the \textit{previous dialogue act}, and the \textit{planned dialogue act} on turn $i$ (i.e., the ground-truth annotated dialogue act associated with the target utterance).
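The construction of a training instance can be sketched as below. The separator tokens (`</s>`, `<prev_act>`, `<act>`) are illustrative placeholders; the actual special tokens depend on the tokenizer configuration used when fine-tuning BART:

```python
def build_training_instance(history, prev_act, planned_act, target_sentences):
    """Sketch of one training instance under the concatenation scheme
    described above: the source is the utterance history plus the previous
    and planned dialogue acts; the target is the multi-sentence target
    utterance for the turn. Separator tokens here are placeholders."""
    source = " </s> ".join(history)                        # persuader/persuadee history
    source += f" <prev_act> {prev_act} <act> {planned_act}"
    target = " ".join(target_sentences)                    # concatenated turn sentences
    return source, target
```

At inference time the same format is used, except the planned dialogue act comes from the agenda in Table~\ref{dialogue_acts} rather than from ground-truth annotations.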
While one can train a conditional generation model according to $L_c(D)$ through methods such as concatenating control codes to the end of the input sequence, we find that on the \textsc{PersuasionForGood } dataset, such models cannot learn to consistently generate utterances according to the correct dialogue act. We thus add a penalty during loss computation, resulting in $L_p(D)$
(equation~\ref{Lp}):
\begin{equation}
L_p(D) = \displaystyle L_c(D) + \alpha \cdot [f_{dc}(y) \neq c]
\label{Lp}
\end{equation}
where $f_{dc}(y)$ is the output of a dialogue act classifier as described in \citet{shi2020refine} (a GPT-2 based model achieving the state-of-the-art on the \textsc{PersuasionForGood } task: $0.66$ F1), $y$ is the generated utterance of a model given $x_{<i}^k, c$, and $\alpha$ is a tunable penalty for generating an utterance that does not match dialogue act $c$ (i.e., when $f_{dc}(y) \neq c$). $\alpha$ is tuned throughout the training process, in addition to other hyperparameters such as the learning rate.\footnote{For each hyperparameter setting, we used a fixed decoding method --- beam sampling with n-gram blocking.}
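The penalty term reduces to adding a constant to the loss whenever the classifier disagrees with the conditioning act. A minimal sketch, with `act_classifier` standing in for the GPT-2-based classifier of \citet{shi2020refine} (note the indicator term itself is non-differentiable, so this sketch only illustrates the value of $L_p$, not the gradient flow):

```python
def penalized_loss(nll, generated_utterance, control_act, act_classifier, alpha=1.0):
    """L_p = L_c + alpha * [f_dc(y) != c]: the conditional negative log
    likelihood plus a tunable penalty alpha whenever the classified
    dialogue act of the generated utterance y does not match the
    conditioning act c."""
    penalty = alpha if act_classifier(generated_utterance) != control_act else 0.0
    return nll + penalty
```

Tuning $\alpha$ trades off fluency (driven by the likelihood term) against adherence to the planned dialogue act (driven by the penalty).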
\section{Evaluation}
We evaluate \textsc{RAP } against a baseline chatbot using BART fine-tuned as described in Section~\ref{training} in an end-to-end manner. This allows us to directly evaluate the impact of integrating factual information and social content and persuasive strategies in contrast to a conversation only driven by persuasive strategies.
We evaluate the performance of the conditional generation model by calculating the dialogue act accuracy on a withheld test set consisting of $10\%$ of all conversations. As language generation is non-deterministic, we average the dialogue act accuracy across ten passes. We chose BART over Blenderbot in the Persuasive Agenda-Pushing Module because Blenderbot did not achieve as strong a dialogue act accuracy. This is likely because Blenderbot is better-suited for social dialogue, whereas the dialogue act utterances are largely task-oriented in nature. Additionally, we specifically do not use metrics such as perplexity to compare the BART baseline and \textsc{RAP } because \textsc{RAP } comprises several different components, not all of which we train or fine-tune. Moreover, because of the penalty added in $L_p$, training perplexity is no longer interpretable. It also cannot be compared to models in other work that has used the \textsc{PersuasionForGood } dataset such as \citet{li2020end}, as the model sizes differ. Most importantly, the primary objective is to build a more persuasive dialogue system, making it imperative to emphasize users' perception and conversation experience. Thus, to compare between the two frameworks, we primarily rely on feedback from human evaluation. We additionally compare utterance-based proxies for user engagement in Table~\ref{Lengths}.
\begin{table}[]
\centering
\begin{tabular}{l|l|l}
\textbf{Utterance Statistic} & \textbf{Baseline} & \textbf{RAP} \\ \hline
\# Chatbot Words & 11.14 & 16.41 \\
\# User Words & 3.70 & 5.75** \\
\# Chatbot Sentences & 1.02 & 1.48 \\
\# User Sentences & 1.09 & 1.17**
\end{tabular}
\caption{Average number of words and sentences per turn for both the chatbot and the user in conversations with both the baseline (BART) and RAP. ** statistically significant differences in user reply length ($\alpha=0.05$).}
\label{Lengths}
\end{table}
\section{Experimental Setup}
We deployed our chatbot using the LegoEval platform~\cite{li-etal-2021-legoeval}. The chatbot is given a gender-neutral name, Jaime. The task consists of a pre-task survey, a conversation where each participant responds to the chatbot with a minimum of seven and maximum of ten conversational lines, and a post-task survey. The pre-task survey consists of questions about demographic information (e.g., age, gender, income) and a test of the Big Five personality traits~\cite{goldberg1992development}. The post-task survey asks participants about their conversation experiences. It includes an attention validation question (``What charity was the chatbot talking about?'') then asks about the users' intention to donate to Save the Children and their perception of the chatbot, including evaluations on various traits such as perceived competence and warmth. The full list of questions is outlined in Table~\ref{evaluation}. Each participant was asked to share their impression of the chatbot along each trait using a Likert scale. A score of 1 corresponds to ``strongly disagree'' and 5 corresponds to ``strongly agree.'' We recruited 111 students from a Natural Language Processing class at a mid-size private university in the United States. Three participants did not correctly answer the validation question, resulting in a final sample of 108 participants. We used a double-blinded, between-subjects design. Each participant was given a link that randomly assigned the participant to either the chatbot running on the baseline or \textsc{RAP }, and completed the task exactly once.
\input{tables/results_2.tex}
\section{Results}
In this section, we discuss the results of comparing \textsc{RAP } and baseline only using BART, the impact of individual components of \textsc{RAP }, and qualitatively examine participant case studies.
\subsection{Analyzing the Impact of \textsc{RAP }}
Across ten passes, the BART model achieves a dialogue act accuracy of 62.38\%, and was used as a part of \textsc{RAP } as the Agenda-Pushing Module. In Table~\ref{Lengths}, we see that \textsc{RAP } yielded better engagement from the participants. On average, participants responded to \textsc{RAP } with 5.75 words per utterance compared to 3.70 words per utterance when responding to the baseline ($p$-value $< 0.001$). Participants were also more likely to respond to \textsc{RAP } with more than one sentence (average: 1.17 sentences per utterance) than the baseline (average: 1.09 sentences per utterance; $p$-value $< 0.01$). Additionally, in Table~\ref{evaluation}, we find that \textsc{RAP } receives better evaluations than the baseline on \textit{every} perceived trait, including persuasiveness, intelligence, trustworthiness, naturalness, and increasing the user's intention to donate. Most notably, we see a statistically significant difference on the competence and confidence of \textsc{RAP }, indicating \textsc{RAP } is perceived to be more capable and confident in engaging in substantial topics and persuasive content.
\subsection{Analyzing Individual Module Contributions}
Due to constraints on our sample size, we could not run full ablation studies where we remove individual modules of the model. Instead, we analyze the perception of \textsc{RAP } in conversations that invoke each of the Social and Factual Answer modules. These findings are also reported in Table~\ref{evaluation}. We additionally find that each of the Social and Factual Answer modules outperform the baseline on conversations in which they were invoked. Notably, we saw that the chatbot was perceived as friendlier and significantly more competent after invoking the Social Response module. However, while there was a difference in the perceived persuasiveness of the chatbot, the difference was much smaller. This implies that perhaps social content is less closely coupled to the persuasiveness of individual arguments. After conversations invoking the Factual Response module, we indeed see the biggest increase in perception of intelligence across all conditions, although the difference is not statistically significant. We also see the largest increase in perceptions of competence. Most surprisingly, we find the biggest increase in friendliness after conversations that invoke the Factual Answer Module. This could imply that ensuring that users' questions are answered is very important in making their voices feel heard and acknowledged.
Surprisingly, some modules received statistically significant differences in ratings from the baseline even when not viewed in aggregate with \textsc{RAP } --- this is the case for both the Social and Factual Answer Modules on competence and confidence. The Factual Answer module also received a statistically significantly higher rating on friendliness, whereas the difference for \textsc{RAP } was not statistically significant. Moreover, in several cases, conversations which invoked the Factual Answer module received the best-performing scores on average. Both of these findings are likely due to the fact that in nearly all cases where the Factual Answer module was invoked, the Social Response module was also invoked, but the inverse is not true. This may also indicate that the results in the Invoked Factual column are the most holistic representation of the complete \textsc{RAP } framework.
\subsection{Qualitative Case Studies}
\label{qualitative}
We find that participants who actively engaged \textsc{RAP } were able to hold coherent, intelligent conversations. \autoref{fig:comparison} shows an example of a participant who had previously heard of Save the Children. The participant had commented on their view of the importance of Save the Children, and the chatbot running using RAP was able to acknowledge their opinion (``I agree''), while further elaborating on their discussion topic (``There is a lack of support for children ... in war zones''). This statement was used to condition the agenda-pushing emotional appeal (``It's so hard to imagine what it's like for a child to grow up facing the daily threat of violence''). The full conversation is provided in Table~\ref{example_1} in the Appendix. User anecdotes included mentioning that they were ``pleasantly surprised'' by the ability of \textsc{RAP } to acknowledge them with remarks like ``I agree.'' On the other hand, participants who only interacted with the baseline complained that their questions went unanswered (e.g. User: ``Do you know who is their founder?'' Chatbot: ``They are an international NGO ...''), and they questioned whether their input was considered.
Despite these improvements, \textsc{RAP } does not seem to handle current events well. In general, conditioning on social content and factual information appears to greatly improve the quality of the Agenda-Pushing Module's generation. However, when Blender Bot 2.0 cannot generate a safe output, the Agenda-Pushing Module does not seem to handle such out-of-domain instances well.
One participant commented on the ongoing war in Ukraine. Blender Bot 2.0 was unable to produce a safe output, leaving the Agenda-Pushing Module to come up with a relevant response. However, Ukraine never appears in the training data, so the module's conditional generation model instead mentions conflicts in several other countries, and performs self-modeling. Such behavior can come across as dismissive or tone-deaf towards the user. The full conversation is provided in Table~\ref{example_2} of the Appendix.
\section{Discussion}
Overall, we find that \textsc{RAP } and each of its individual modules is able to outperform state-of-the-art conditional generation models on \textsc{PersuasionForGood }. One of the core advantages of end-to-end conditional generation models is that they are easily transferable to different datasets. However, \textsc{RAP } is also easily transferable --- the only requirement is that the dataset contains a set of dialogue acts with sufficient data to train a classifier, as the biggest bottleneck is being able to use a dialogue act classifier for $L_p$ and in building the Dispatcher.
On smaller datasets, it may even be possible to perform transfer learning using a classifier pre-trained on the \textsc{PersuasionForGood } dataset. The Social Response Module is directly transferable, as we are able to achieve high quality results using it in a zero-shot manner, and the Factual Answer Module uses Sentence-BERT to group together training data.
\subsection{Limitations}
Due to the cost of human evaluation, our sample size is relatively small, 51 and 57 people for the two conditions. This limitation restricted us from performing a full ablation in which we evaluated chatbots which used each module individually.
We hope to obtain larger samples in the future to better evaluate the efficacy of our system.
The average human evaluation scores are relatively low. Considering the sample consists of students enrolled in a Natural Language Processing class, they are likely of a more technical background, with higher standards for chatbots than the average user on Mechanical Turk. Moreover, because the sample did not enter as participants out of personal interest in Save the Children, they are less likely to be interested in children's charities than an individual on the internet who goes out of their way to interact with such a chatbot. Anecdotally, we see in Section~\ref{qualitative} that individuals who do have some sort of inclination towards charitable organizations are actually quite positive and receptive towards the chatbot. In this regard, we are likely limited by the funds necessary to acquire a sample whose interests better align with \textsc{PersuasionForGood}.
While the dialogue act accuracy of the Agenda-Pushing module is only $62.38\%$, this is likely because it is bottlenecked by $f_{dc}$ in equation~\ref{Lp}; the F1-score of the classifier is only $0.66$ (the state-of-the-art on the \textsc{PersuasionForGood } dataset), implicitly limiting the upper bound of any generation model that is reliant on it. We find from users' conversation experiences that the chatbot more than sufficiently presents persuasive strategies. If one has a dialogue act classifier with stronger performance, they would be able to improve the ability of their agenda-pushing model to learn persuasive strategies even further. We additionally find that \textit{without} a dialogue act classifier (and consequently, without $L_p$), BART is unable to achieve a dialogue act accuracy higher than $30\%$ on the \textsc{PersuasionForGood } dataset.
\section{Conclusion}
Overall, we find perceptual improvements by specifically integrating social content and factual information into persuasive dialogues with \textsc{RAP } compared to a strong end-to-end conditional generation model like BART. While existing methods like \citet{li2020end, wu-etal-2021-alternating} achieve strong performance on automatic metrics like perplexity, \textsc{RAP } directly emphasizes users' conversational experience with a modular design rooted in social science theory. \textsc{RAP } is generalizable and may even be applied towards persuasive contexts outside of charitable conversations, e.g., in the case of therapy and crisis counseling~\cite{zhang-danescu-niculescu-mizil-2020-balancing} where there are also split goals (ensuring users feel heard and pushing a conversational agenda). Future work on persuasive dialogue systems could consider implementing a strategy planner using supervised learning. Additionally, researchers could consider looking for relationships between personality data, persuasive strategies, and persuasion outcomes.
\section{Ethical Considerations}
All participants were told they were talking to a chatbot developed by Columbia University researchers. This ensures transparency in experiment design, so that participants will never feel ambiguity or discomfort with respect to whether they are speaking with a human or a chatbot. Participants also gained additional insight about their own communication styles based on the results of their Big Five personality test.
Persuasion is a tricky social dynamic. It has been heavily studied, and the intention of this work, like the \textsc{PersuasionForGood } dataset used, is that persuasive dialogue systems should only ever be created for social good. All related applications discussed are intended to create good for the world at an individual and societal level.
\section{Acknowledgements}
Thanks to Intel for supporting this work through a research gift. We thank Kun Qian and Yu Li for helpful discussions and feedback. We are grateful to all of our study participants for their help in evaluating our systems.
\section*{Abstract}
Visual Question Answering (VQA) needs a means of evaluating the strengths and weaknesses of models. One aspect of such an evaluation is the
evaluation of \textit{compositional generalisation}, or the ability of a model to answer well on scenes whose scene-setups are different from the training set. Therefore, for this purpose, we need datasets whose train and test sets differ significantly in composition. In this work, we present several quantitative measures of compositional separation and find that popular datasets for VQA are not good evaluators. To solve this, we present Uncommon Objects in Unseen Configurations (UOUC), a synthetic dataset for VQA. UOUC is at once fairly complex while also being well-separated, compositionally. The object-class of UOUC consists of 380 clasess taken from 528 characters from the Dungeons and Dragons game. The train set of UOUC consists of 200,000 scenes; whereas the test set consists of 30,000 scenes. In order to study compositional generalisation, simple reasoning and memorisation, each scene of UOUC is annotated with up to 10 novel questions. These deal with spatial relationships, hypothetical changes to scenes, counting, comparison, memorisation and memory-based reasoning. In total, UOUC presents over 2 million questions. UOUC also finds itself as a strong challenge to well-performing models for VQA. Our evaluation of recent models for VQA shows poor compositional generalisation, and comparatively lower ability towards simple reasoning. These results suggest that UOUC could lead to advances in research by being a strong benchmark for VQA.
\section{Introduction}
The field of Visual Question Answering (VQA) deals with the development of machine learning models that can understand visual scenes, and answer questions about them. These questions can deal with the properties of objects present in the scenes, or with the relations among the objects. Evaluating models involves measuring their ability to answer questions correctly. Two aspects for this are: a model's ability to handle complex scenes, and its ability to generalise well to scenes whose compositions differ from the scenes it was trained on. These we refer to as \textit{expressivity} and \textit{compositionality}.
Why are these two important? Expressivity is important, as useful data is often complex. Compositionality is important as compositional models learn concepts independent of the objects the concepts are associated with in the train set. To explain this, consider that the concept of ``front of'' is independent of the particular objects that are in front of other objects. Thus, this relationship is understood across a variety of objects. The same holds for many properties, such as colour or size. Therefore, the learning of such concepts can be measured by evaluating models on data with new objects that showcase the same concepts.
Thus, a dataset for this two-fold evaluation must be \textit{complex} and exhibit \textit{compositional separation} between the train and test scenes. The first property necessitates at least a fair diversity in object-complexity. The second ensures that the train and test scenes have minimal overlap in composition, so that the learning of concepts can be evaluated outside of instances seen in training.
\begin{figure}
\centering
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\textwidth]{images/group0scene0_test.png}
\caption{}
\label{subfig:example_test_scene}
\end{subfigure}
\begin{subfigure}[t]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/clevr_image.png}
\caption{}
\label{subfig:clevr_example_scene}
\end{subfigure}
\begin{subfigure}[t]{0.12\textwidth}
\includegraphics[width=\textwidth]{images/img1.png}
\caption{}
\label{subfig:sample_first_object}
\end{subfigure}
\begin{subfigure}[t]{0.12\textwidth}
\includegraphics[width=\textwidth]{images/img2.png}
\caption{}
\label{subfig:example_second_object}
\end{subfigure}
\begin{subfigure}[t]{0.12\textwidth}
\includegraphics[width=\textwidth]{images/img3.png}
\caption{}
\label{subfig:example_third_object}
\end{subfigure}
\caption{Fig. \ref{subfig:example_test_scene} is an example scene in the test set of UOUC. No pair of objects in it co-occurs in the train set --- this makes for a strong evaluation of relationships such as spatial ones. Fig. \ref{subfig:clevr_example_scene} is an example scene in CLEVR. Note that the complexity of objects is low, due to them being simple geometric objects of only three classes. In contrast, consider the objects in Fig. \ref{subfig:example_test_scene}, or in Fig. \ref{subfig:sample_first_object}, \ref{subfig:example_second_object} and \ref{subfig:example_third_object}. These possess greater complexity. This, combined with compositional separation, makes for a challenge to models.}
\end{figure}
Datasets that are based on natural scenes are significantly complex. Examples of these are VQA-v1 \cite{VQA}, VQA-v2 \cite{balanced_vqa_v2}, Visual Genome \cite{krishna2017visual} and GQA \cite{Hudson_2019_CVPR}. They, however, do not have a mechanism for compositional separation. Moreover, the presence of natural biases in the co-occurrence structures of objects could lead to the presence of compositional similarities in the train and test datasets.
In contrast, a dataset such as CLEVR \cite{johnson2017clevr} offers a structured generation of scenes, where certain aspects of the train scenes differ from the test scenes. Specifically, the CoGenT split of CLEVR presents a colouring-based compositional separation, where certain objects are coloured differently in the train and test sets. However, CLEVR suffers from the fact that its object set is small and simple, consisting only of cylinders, cubes, and spheres. Each of these objects has only three other properties, namely colour, material, and size. Thus, despite the generated compositional difference in colouration, the train and test sets of CoGenT retain several similarities. Another weakness is that, due to the simplicity of the object-classes, models may easily learn to recognise objects and thus exhibit high performance.
A major source of this compositional similarity in the train and test sets of not only CLEVR, but also other VQA datasets, is the presence of common co-occurrences of object-pairs. In other words, several object-pairs occur together in scenes of both train and test data. The presence of this common information can make answering questions easier for models, while obscuring the real extent of compositional generalisation.
As a means of solving this problem, we present Uncommon Objects in Unseen Configurations (UOUC). UOUC is significantly more complex than CLEVR. As a rough comparison, UOUC has 380 object classes, derived from 528 characters of the Dungeons and Dragons game. Unlike the simple objects of CLEVR seen in Figure \ref{subfig:clevr_example_scene}, these exhibit significant complexity, as can be seen in the example scene in Figure \ref{subfig:example_test_scene} and the example objects in Figures \ref{subfig:sample_first_object}, \ref{subfig:example_second_object} and \ref{subfig:example_third_object}. More scenes are presented in the supplementary material. UOUC borrows from CLEVR the idea that synthetic generation is a means of designing compositionally-separated train and test sets.
Unlike CLEVR, however, UOUC does not use a simple colouration-based condition. Instead, UOUC is designed so that no pair of objects co-occur in both the train and test sets. Thus, the scenes of the test set are compositionally different in a more complex manner.
In order to use this separation condition effectively, each scene of UOUC is annotated with up to 10 questions. The first four of these deal with the learning of spatial relationships between objects, namely front, back, left, and right. Since no object pairs co-occur in both sets, a model must learn to understand the notion of a spatial relationship independently of the instances it has seen during training. The next two questions deal with simple reasoning such as counting and comparison. The next question deals with pure perception, and queries the presence of an object in a scene. The last three questions deal with memorisation and memory-based reasoning. Memorisation related to non-perceptual attributes is an under-explored area of VQA. Specifically, we wish to take an initial step towards understanding the complexity of memory-based reasoning relative to perception-based reasoning.
UOUC thus consists of 200,000 scenes in the training set and 30,000 scenes in the test set, with over 2 million questions across all the scenes.
Our experiments using recent models show that compositional generalisation is hard, whereas simple reasoning is relatively easy. Memory-based reasoning is easier still for the best-performing models. In general, UOUC offers a challenge for VQA in terms of compositional generalisation that could lead to advances in the field.
The organisation of the rest of the paper is as follows. Section 2 presents related work. Section 3 presents a quantitative comparison of UOUC with other datasets. Section 4 presents a description of UOUC in terms of its construction, object-classes and their attributes, and associated questions. Section 5 presents a description of experiments done using recent models on UOUC. Section 6 presents a discussion of these results. We then present a discussion of possible future work and extensions of UOUC.
\section{Related Work}\label{rela}
There is much work in the field of VQA involving contributions in terms of data sets and models. Several models have been proposed and evaluated on various data sets. MUTAN \cite{ben2017mutan} is a model that combines visual and question features in an efficient manner, achieving good performance on VQA-v1. Likewise, \cite{kazemi2017show} presented a model that combines features from CNNs, attention, and a sequence-model to again achieve good performance on VQA-v1 and VQA-v2. More recently, MAC \cite{hudson2018compositional} and LCGN \cite{hu2019language} were proposed. LCGN is a language-conditioned graph network that uses iterative message-passing for VQA. MAC is a compositional model for VQA that uses memory and control, integrated in a single unit. LCGN and MAC achieve good accuracies on CLEVR and GQA.
Data sets for VQA that many models use are VQA-v1, VQA-v2, GQA, Visual Genome, and CLEVR. VQA-v1 was one of the early data sets for VQA. It uses images from the COCO data set \cite{chen2015microsoft}, and provides annotations in the form of questions. VQA-v2 is a balanced version of this data set that associates each question in VQA-v1 with two images that have different answers. Visual Genome is a highly-annotated data set of real images for VQA, with a very large object set. GQA is a data set of real scenes that annotates images with questions that involve multiple steps of reasoning. The answer distribution of the questions has been balanced. Moreover, GQA also introduces metrics for evaluating models that move beyond accuracy. CLEVR is a generated data set for VQA that uses questions that are compositional in nature, and can require multiple steps of reasoning. CLEVR offers a data set termed CoGenT that uses a separation condition, in terms of object-properties, in its train and test sets. Compositional models that understand properties independently of objects would be able to answer questions that relate to the properties of this separation condition.
\section{How does UOUC compare to popular datasets?}
In this section, we present a comparison of UOUC with several popular datasets for VQA. Our comparison, like our motivation for UOUC, is on the two fronts of complexity and compositional separation.
\subsection{Comparing the richness and complexity}
We aim to provide a measure of complexity, or richness, by stating the number of questions, objects, and scenes in the datasets. Table \ref{tab:numimq} provides this information. As can be seen, UOUC has an object set much larger than that of CLEVR-CoGenT. Moreover, as mentioned before, based on a comparison between the objects in Figure \ref{subfig:example_test_scene} and Figure \ref{subfig:clevr_example_scene}, we can safely say that UOUC has more complex scenes. UOUC is also quite comparable to datasets such as VQA-v1 and VQA-v2 in terms of the number of scenes, the size of the object set, and the number of questions. This establishes UOUC as a perceptually harder synthetic dataset for VQA than CLEVR.
\subsection{Comparing compositional separation}
As stated before, two measures of compositional separation between the train and the test data are the number of common co-occurring pairs and relations among them. We use this idea to present four metrics that, when close to zero, indicate a high degree of compositional separation.
\subsubsection{Common co-occurrences}
The first of these, termed \textit{AvgCoPair}, is the average number of co-occurring pairs of objects in a test scene, that have co-occurred in a train scene. Complementing this is a second score, termed \textit{AvgCoPairOcc}, which gives the average number of times a commonly co-occurring pair is found in the train dataset.
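Both scores can be computed directly from the per-scene object lists. The sketch below is a minimal illustration under the assumption that each scene is represented as a list of object-class labels; the function names are ours and not part of any released tooling:

```python
from itertools import combinations
from collections import Counter

def co_occurring_pairs(scene):
    """Unordered pairs of distinct object classes present in one scene."""
    return {frozenset(p) for p in combinations(set(scene), 2)}

def avg_co_pair(train_scenes, test_scenes):
    """AvgCoPair: average number of object pairs per test scene
    that also co-occur in some train scene."""
    train_pairs = set()
    for scene in train_scenes:
        train_pairs |= co_occurring_pairs(scene)
    counts = [len(co_occurring_pairs(s) & train_pairs) for s in test_scenes]
    return sum(counts) / len(counts)

def avg_co_pair_occ(train_scenes, test_scenes):
    """AvgCoPairOcc: average number of train scenes containing a
    commonly co-occurring pair, averaged over those common pairs."""
    train_counter = Counter()
    for scene in train_scenes:
        train_counter.update(co_occurring_pairs(scene))
    test_pairs = set()
    for scene in test_scenes:
        test_pairs |= co_occurring_pairs(scene)
    common = test_pairs & set(train_counter)
    if not common:
        return 0.0
    return sum(train_counter[p] for p in common) / len(common)
```

For a compositionally-separated dataset such as UOUC, both functions return zero by construction, since no test pair appears in any train scene.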
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Name & Images & Questions & Classes\\
\hline
VQA-v1 (Real)& 205K& 614K &80\\
\hline
VQA-v2 (Real)& 205K& 1.1M & 80\\
\hline
Visual Genome& 108K& 1.7M & 33,877 \\
\hline
GQA & 113K & 22M & 1703 \\
\hline
CLEVR-CoGenT & 130K & 1.3M & 3 \\
\hline
\textbf{UOUC} & \textbf{230K} & \textbf{2M} & \textbf{380} \\
\hline
\end{tabular}
\caption{The approximate number of images, questions, and object classes for several datasets.}
\label{tab:numimq}
\end{table}
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Data set & Visual Genome& GQA & \textbf{UOUC} & CoGenT\\
\hline
AvgCoPair & 269.93 & 155.95 & \textbf{0.00} & 20.89 \\
\hline
AvgCoPairOcc & 2,270.14 & 4,023.83 & \textbf{0.00} & 318K \\
\hline
AvgCoRel & 6.20 & 45.79 & \textbf{0.00} & - \\
\hline
\end{tabular}
\caption{Measure of compositional separation between the train and test sets. Lower indicates better separation. Note that UOUC is the best separated among all the datasets, making it an ideal dataset for evaluating compositional generalisation.}
\label{tab:relsc}
\end{table}
The first measure gives the extent of common information, per test image, present between the train and test sets. The second measure gives the extent of the presence of this common information in the train set. For example, if two objects, a child and a cake, were present in the same scene in images of both the train and test datasets, there is some compositional overlap. This overlap can lead to models obtaining higher performance because they have seen these objects together in the train dataset. Moreover, as mentioned before, it also makes it difficult to measure compositional generalisation for relationships that may have existed between them. If multiple instances of such co-occurrences are found in the train dataset, it becomes easier for a model to memorise this information and answer questions about the common pairs on the test dataset.
\subsubsection{Common relationships}
The next two measures, termed AvgCoRel and AvgCoRelOcc, extend this concept to relationships between objects instead of only co-occurrences. The presence of the same relationships between the same objects in both the train and test sets is a stronger case of compositional overlap. Further, a larger number of occurrences of any such overlap in the train dataset makes it easier for models to learn, and thus impacts a proper evaluation.
Using the same example of a child and a cake, if both the train and test data contain instances of the relation `eating' between them, there is a strong compositional overlap. If multiple instances of a child eating cake exist in the train dataset, a model can find it easy to use this in answering questions on the test dataset.
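These relation-based scores admit an analogous sketch, assuming each scene's relations are given as (subject-class, relation, object-class) triples; the function names are again ours, not part of any released tooling:

```python
from collections import Counter

def avg_co_rel(train_rels, test_rels):
    """AvgCoRel: average number of relation triples per test scene
    that also appear in some train scene."""
    train_set = set()
    for rels in train_rels:
        train_set |= set(rels)
    counts = [len(set(rels) & train_set) for rels in test_rels]
    return sum(counts) / len(counts)

def avg_co_rel_occ(train_rels, test_rels):
    """AvgCoRelOcc: average number of train scenes containing a
    common triple, averaged over the common triples."""
    counter = Counter()
    for rels in train_rels:
        counter.update(set(rels))
    test_set = set()
    for rels in test_rels:
        test_set |= set(rels)
    common = test_set & set(counter)
    if not common:
        return 0.0
    return sum(counter[t] for t in common) / len(common)
```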
Table \ref{tab:relsc} gives these metrics for Visual Genome, GQA, UOUC, and CLEVR. We use the mean over 5 random 70-30 train-test splits for Visual Genome to obtain these results, in the manner of \cite{krishna2017visual}. We use the validation set as the test set for GQA, as the required information about its test set is not provided publicly. As CLEVR has no annotated relationships between objects, we compute only AvgCoPair and AvgCoPairOcc for it. Lastly, as VQA-v1 and VQA-v2 do not provide the information required to compute these scores, we do not present them.
First, we see that Visual Genome, GQA, and CLEVR have non-zero scores for the computed measures. Thus, there is a high level of compositional overlap between their train and test splits. UOUC has scores of zero on all measures, as no common co-occurring pairs or relationships exist; this is by design. Thus, UOUC offers a stronger evaluation of compositional generalisation for models.
\subsection{Comparison by performance}
A final comparison of the datasets is based on the level of challenge they offer to recent models. Such a comparison can be based on the accuracy of answering questions. Based on Table \ref{tab:accuracies}, we see that models that achieve decent performance on VQA-v2 and GQA, and more importantly high performance on CLEVR, perform poorly on UOUC, especially on compositional generalisation. This allows us to state that UOUC can lead to further advances in compositional generalisation for VQA.
\section{Details of UOUC}
UOUC, like CLEVR, is synthetically generated so as to allow for a compositional separation of the train and test sets. We outline the process of construction.
\textbf{Downloading and pre-processing object models:} All the objects were downloaded from \url{https://www.prusaprinters.org/social/39782-mz4250/prints} as .STL files. They were pre-processed to make the scales uniform and to orient the 3d models of the objects towards the viewer. In total, 528 models were used to create 380 classes of objects. The objects are used with the permission of the creator and are licensed under a Creative Commons 4.0 (Non-Commercial) license.
\textbf{Categorisation and grouping of objects:}
The 380 classes were further categorised into 5 categories. These categories are presented in bold in Table \ref{tab:categories}. The categorisation was done manually. After categorisation, the objects are again grouped randomly into 10 groups. The process is such that each group has roughly similar numbers of objects of each category. This grouping does not introduce any new properties to objects, but is extremely important to the compositional separation of the train and test sets.
\textbf{Construction of the base scene:} Since the scenes are to be generated using the 3d models of the objects, we use the Blender \cite{blender} software to first create 3d scenes, which are then rendered to 2d scenes. A requisite for this is a 3d base scene. Figure \ref{subfig:base_scene} shows the scene we used. The scene is made so that the objects are placed in the 19 hexagonal structures.
\textbf{Construction of the scene-structures:} Before generating the scenes of the train and test sets, we first generate text-files that store their structures. The structure of a scene consists of the position of each object in the scene, the colour of the objects, their rotation about the z-axis, and the camera-rotation. For the train set, objects are rotated by an angle chosen randomly between -30 and 30 degrees; the camera is rotated randomly by an angle chosen between -20 and 20 degrees. For the test set, these extents are -45 and 45 degrees for the objects, and -30 and 30 degrees for the camera. These are stored in the text-files, which are then used for the generation of the scenes and the questions and answers. The grouping mentioned earlier is used here. For the train set, scenes are generated so that only objects within a group co-occur, while in the test set only objects from different groups co-occur. The choice of objects is otherwise random, with a scene having at least 4 objects and at most 6. Thus, the train and test sets are compositionally separated. Each group generates 20,000 train scenes, and the test set contains 30,000 scenes.
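The group-based separation condition can be sketched as a small sampling routine. This is a minimal illustration assuming objects are labelled strings and groups are lists of them; the function names are ours and not part of the UOUC release:

```python
import random

def sample_train_scene(groups, rng):
    """Train scene: all objects come from a single group, so only
    within-group pairs can ever co-occur during training."""
    group = rng.choice(groups)
    n = rng.randint(4, 6)  # each scene has between 4 and 6 objects
    return rng.sample(group, n)

def sample_test_scene(groups, rng):
    """Test scene: each object comes from a different group, so no
    test pair can have co-occurred in any train scene."""
    n = rng.randint(4, 6)
    chosen_groups = rng.sample(groups, n)
    return [rng.choice(g) for g in chosen_groups]
```

By construction, any pair of objects in a test scene spans two different groups, while every train-scene pair lies within one group; hence no pair co-occurs in both sets.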
\textbf{Generation of the scenes:} The scenes of UOUC are generated using the scene-structures and the Blender software. Based on the scene-structure, objects are assigned a colour from one of orange, yellow, green, violet, brown, and light-yellow. The objects are then rotated and positioned according to the scene-structure for that scene.
\textbf{Generation of the questions and answers:} The text-files find another use in the generation of the questions and answers for the scenes. Each scene of UOUC has up to 10 questions associated with it. The procedure for generating questions and answers is based on predefined templates for each question, which are filled in using the logic for that question and certain properties of the objects. These properties, apart from colour, are listed in Table \ref{tab:categories} for each category. The properties are all categorical, and can assume more than two values across the objects of a category. The assignment of these properties is based on the role of the object-classes in literature and media. This was done manually. The reader may note that no prior knowledge of Dungeons and Dragons is necessary for using UOUC. Apart from these properties, each object is also assigned a random team, which is one of `A', `B', and `C'. The purpose of this attribute is to test the memorisation ability of models.
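The template-filling procedure can be illustrated with a toy sketch for the existence question (Q7). The template string, attribute names, and function name below are illustrative only, not the exact templates used in UOUC:

```python
def fill_existence_template(colour, name, scene_objects):
    """Fill a Q7-style existence template and compute the answer by
    checking the scene's object list (illustrative template only)."""
    question = f"Is there a {colour} {name} in the scene?"
    present = any(
        o["colour"] == colour and o["name"] == name for o in scene_objects
    )
    answer = "yes" if present else "no"
    return question, answer
```

The same pattern, a fixed template plus answer logic evaluated against the stored scene-structure, underlies the other question types as well.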
We describe each of the questions below. Broadly, they can be categorised into 4 categories: Spatial relationship-based (\textbf{SRB}), perceptual (\textbf{P}), simple reasoning (\textbf{SR}), and memory-based (\textbf{MB}). Examples of each of the questions are given, in order of introduction, in Figure \ref{fig:example_of_scene}.
\begin{table}[h!]
\begin{tabular}{|p{0.2\textwidth}|p{0.2\textwidth}|p{0.2\textwidth}|p{0.2\textwidth}|p{0.2\textwidth}|}
\hline
\textbf{Adventurer} & \textbf{Dragon} & \textbf{Animal} & \textbf{Monster} & \textbf{Mythical} \\
\hline
Species & Name & Type & Type & Type \\
\hline
Class & Pose & Predator-level & Name & Name \\
\hline
Gender & - & Prey-level & - & Gender \\
\hline
Weapon & - & Name & - & - \\
\hline
Mount & - & Food-habit & - & - \\
\hline
Attackable & - & - & - & - \\
\hline
\end{tabular}
\caption{Categories and their attributes.} \label{tab:categories}
\end{table}
\begin{figure}
\centering
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group0scene0.png}
\caption{}
\label{fig:example_of_scene}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/base-blend.png}
\caption{}
\label{subfig:base_scene}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/box__1_.png}
\caption{}
\label{subfig:bounding_box}
\end{subfigure}
\caption{Examples of each question-type are given, in order, for the scene in Figure \ref{fig:example_of_scene}. \textbf{Q1.} Is a green regular-animal front of a blue spinosaurus? \textbf{A1.} no. \textbf{Q2.} Is there a light-yellow nithe dragon left of a blue spinosaurus that is back of a green goat? \textbf{A2.} no. \textbf{Q3.} Is there a green goat back of a light-yellow nithe dragon and right of the green coven-horror? \textbf{A3.} no. \textbf{Q4.} Swapping the position of a green coven-horror with a green goat, is a light-yellow nithe dragon front of it? \textbf{A4.} yes. \textbf{Q5.} Are there greater, equal or lesser number of dinosaurs than dragons? \textbf{A5.} equal.
\textbf{Q6.} Upon removal of blue dinosaur how many blue dinosaur are present? \textbf{A6.} 0. \textbf{Q7.} Is there a orange abeloth dragon in the scene? \textbf{A7.} no. \textbf{Q8.} What is the predation-level of the green goat? \textbf{A8.} 1. \textbf{Q9.} Will goat attack spinosaurus? \textbf{A9.} no. \textbf{Q10.} Which category does nithe belong to? \textbf{A10.} dragon. Figure \ref{subfig:base_scene} shows the base scene. Figure \ref{subfig:bounding_box} shows a scene with bounding boxes for the objects in the image.}
\end{figure}
\textbf{Q1. Checking for relationships (\textbf{SRB})} This question presents a description of two objects and then asks if a spatial relationship exists between them.
\textbf{Q2. Checking for chains of relationships (\textbf{SRB})} This question provides the description of three objects, and asks if the first satisfies a given spatial relationship with the second, and the second satisfies a second given spatial relationship with the third.
\textbf{Q3. Checking for satisfying relationships of two types (\textbf{SRB})}
This question provides the descriptions of three objects, and asks if the first satisfies a given spatial relation with the second, and also a given second relationship with the third.
\textbf{Q4. Text-only swapping and checking for a relationship (\textbf{SRB})}
This question provides a description of three objects, suggests a swap of position between two of them, and asks if the third satisfies a given spatial relationship with the first after the swap. This is a new kind of question, similar to some suggested in \cite{beckham2020visual}, that changes the scene in text and then asks a question. A model that has learnt to separate an entity from the relationships it may participate in would find this easier to answer than a model that has not.
\textbf{Q5. Comparing based on attributes (\textbf{SR})}
This question gives two descriptions of objects, and asks if the number of objects satisfying the first is lesser than, greater than, or equal to the number of objects satisfying the second.
\textbf{Q6. Count after text-only removal of an object (\textbf{SR})} This question suggests a text-only removal of an object satisfying a description. It then asks for the count of objects satisfying another description. This question is also based on a text-only change of a scene, which necessitates that a model understand the concepts of removing an object and of counting.
\textbf{Q7. Checking for an object (\textbf{P})}
This question provides a description of an object and asks if that object is present in the scene.
\textbf{Q8. Stating the properties of an object (\textbf{MB})}
This question provides a description of an object and asks for a property of that object.
\textbf{Q9. Checking for non-spatial relationships (\textbf{MB})}
Apart from spatial relationships, animals are related to other animals and adventurers by a property-based relationship. Each animal has a predation-level and a prey-level, indicative of some notion of which animal is likely to attack. An adventurer has the property of being attackable, which determines whether an animal of sufficient predation-level can attack it. An animal can attack an adventurer if its predation-level is greater than 2 and the adventurer is attackable. An animal can attack another animal if its predation-level exceeds the prey-level of the other animal. This question thus provides a memory-based relationship that tests memory-based compositional generalisation.
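The attack rules can be written as two small predicates. The following is our own paraphrase of the stated logic; the function names are illustrative, not from the dataset code:

```python
def animal_can_attack_adventurer(predation_level, attackable):
    """An animal can attack an adventurer iff its predation-level is
    greater than 2 and the adventurer is attackable."""
    return predation_level > 2 and attackable

def animal_can_attack_animal(attacker_predation_level, defender_prey_level):
    """An animal can attack another animal iff its predation-level
    exceeds the other animal's prey-level."""
    return attacker_predation_level > defender_prey_level
```

Answering Q9 therefore requires recalling two memorised attributes per object and applying this fixed rule to the pair described in the question.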
\textbf{Q10. Stating the category or team of an object (\textbf{MB})}
This question provides a description of an object and asks for the category or team of the object. While this question can be answered from text alone, it is useful in teaching a model the properties it needs for memory-based compositional generalisation.
\textbf{A note about memory-based properties and questions:} Why are questions that require text-only memorisation included in the VQA tasks of UOUC? For one, we wish to use these as a sanity-check for the models. Secondly, we wish to study whether VQA models can use visual aspects to improve upon text-only processing. Thirdly, we wish to study whether memory-based tasks are harder than perceptual tasks. Note that the main challenge of UOUC is given by the first four questions, followed by the next three. The memory-based questions are primarily investigative in nature.
Apart from these annotations, 2d and 3d bounding boxes are provided for each object in the scenes of UOUC. An example is shown in Figure \ref{subfig:bounding_box}. The bounding boxes could be used to extract object-detection features.
\subsection{Certain statistics for UOUC}
We present the following statistics for UOUC in Figure \ref{subfig:distribution_objects}: the proportion of each category in the object-classes, the proportion of the number of object-instances category-wise, and the proportion of the mean of the instances of each object, seen category-wise.
\begin{figure}
\centering
\begin{subfigure}[t]{0.30\textwidth}
\includegraphics[width=\linewidth]{images/distribution.png}
\caption{}
\label{subfig:distribution_objects}
\end{subfigure}
\begin{subfigure}[t]{0.30\textwidth}
\includegraphics[width=\linewidth]{images/Word_length.png}
\caption{}
\label{subfig:word_length}
\end{subfigure}
\caption{Figure \ref{subfig:distribution_objects} gives the proportion of objects per category (purple bar), the proportion of object-instances per category for the train set (light-green bar), and the proportion of object-instances per category for the test set (cyan bar). The dotted line shows the distribution of the mean number of instances for each object, seen category-wise. Figure \ref{subfig:word_length} shows the average word-length per question-type, for the train and test sets (figures are best viewed in colour).}
\end{figure}
We see that the category `Adventurers' has the largest proportion of objects and object-instances. This is because the original set of 3d models had a large number of such objects. However, due to the random and unbiased sampling of objects in the scenes, each object, regardless of category, is almost equally likely to be present in a scene. Moreover, the distributions of the object-instances and categories are similar for both the train and test sets, ensuring that models are challenged primarily by question-answering, rather than by other factors relating to the sampling of objects.
We also plot the average word-length per question-type for both the train and test sets in Figure \ref{subfig:word_length}. We see that the questions are not particularly long, and that their lengths are similar for both the train and test sets. This again ensures that the challenge of answering questions on the test set of UOUC is based on the logic of the questions, rather than on extraneous factors. Other statistics are given in the supplementary material.
\textbf{Availability of UOUC:} UOUC will be publicly available after the publication of this paper.
\section{Experiments on recent VQA models}
We trained and tested 4 recent VQA models on UOUC. These are MAC \cite{hudson2018compositional}, LCGN \cite{hu2019language}, SAAA \cite{kazemi2017show}, and MUTAN \cite{ben2017mutan}. MAC and LCGN have achieved high accuracies on CLEVR, and decent performances on GQA. SAAA and MUTAN have achieved decent performances on VQA-v1 and VQA-v2. These results are presented in Figure \ref{subfig:comparison_mac_lcgn} and Figure \ref{subfig:comparison_saaa_mutan}.
In order to obtain a backbone for the visual features, we trained a ResNet-50 and a ResNet-101 on the problem of detecting the presence of objects in a scene. The ResNet-50 achieves a precision of 0.41 and a recall of 0.44, while the ResNet-101 achieves a precision of 0.51 and a recall of 0.32. Since the ResNet-50 achieves better performance, we use it as the backbone for all the VQA models. This causes a slight deviation from some of the models, such as LCGN, which uses a ResNet-101 in the original work.
\begin{figure}
\centering
\includegraphics[width=0.40\textwidth]{images/Question_Scores.png}
\caption{The accuracy per question-type for each model. The average accuracy is given by the dotted line (figure is best viewed in colour).}
\label{fig:accuracies}
\end{figure}
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
Model & \textbf{SRB} & \textbf{SR} & \textbf{P} & \textbf{MB}\\
\hline
MAC & 50.69 & 72.10 & 93.30 & 87.69\\
\hline
LCGN & 50.65 & 72.78 & 93.88 & 87.10\\
\hline
SAAA & 50.13 & 22.45 & 45.57 & 22.68\\
\hline
MUTAN & 50.16 & 2.22 & 49.69 & 30.01\\
\hline
l-only & 50.69 & 63.32 & 66.30 & 76.38\\
\hline
i-only & 1.03 & 0.76 & 1.02 & 0.92\\
\hline
\end{tabular}
\vspace{0.5cm}
\caption{Accuracies (in \%) for the models on UOUC. Performance on the various types of questions is presented. l-only refers to language-only; i-only refers to image-only.}
\label{tab:accuracies}
\end{table}
\begin{figure}
\centering
\begin{subfigure}[t]{0.30\textwidth}
\includegraphics[width=\linewidth]{images/comparison-1.png}
\caption{}
\label{subfig:comparison_mac_lcgn}
\end{subfigure}
\begin{subfigure}[t]{0.30\textwidth}
\includegraphics[width=\linewidth]{images/comparison-2.png}
\caption{}
\label{subfig:comparison_saaa_mutan}
\end{subfigure}
\caption{Comparison of the performance of models on various VQA datasets and on the two main families of questions of UOUC, SRB and SR (figures are best viewed in colour). Figure \ref{subfig:comparison_mac_lcgn} compares the performance of MAC and LCGN on GQA (purple), CLEVR (red), SRB (cyan), and SR (violet). Figure \ref{subfig:comparison_saaa_mutan} compares the performance of SAAA and MUTAN on VQA-v1 (purple), VQA-v2 (red), SRB (cyan), and SR (violet).}
\end{figure}
We further trained an image-only model and a language-only model for the VQA task. This was done in order to estimate the extent of any biases in the images and questions towards answering the questions. The image-only model is a standard CNN with a fully-connected layer for classification; it uses the ResNet-50 as a pre-trained backbone. The language-only model is a transformer with a fully-connected layer for classification. A low performance for these models can be taken as evidence of low bias in the dataset. Since their purpose is only to indicate bias, we have not reported their results on the other datasets.
Certain training details, like the source of their implementations and the number of epochs, for all these models are given in the supplementary material. The accuracies of the models for each question, along with the average accuracies, are given in Figure \ref{fig:accuracies}. The average accuracy for each family of questions (\textbf{SRB}, \textbf{P}, \textbf{SR}, and \textbf{MB}) for each model is given in Table \ref{tab:accuracies}.
\section{Discussion and analysis of the results}
\subsection{A general analysis of accuracies}
A general analysis of the performance of the models, as seen in Table \ref{tab:accuracies} and Figure \ref{fig:accuracies}, shows that the best-performing models are MAC and LCGN. We also see that SAAA and MUTAN perform poorly on most questions, in contrast to their decent performances on VQA-v1 and VQA-v2, as seen in Figure \ref{subfig:comparison_saaa_mutan}. We hypothesise that SAAA and MUTAN are not able to generalise to the image and question distributions of the test set. Also, the image-only model performs poorly, as seen in Figure \ref{fig:accuracies} and Table \ref{tab:accuracies}, indicating low image-bias. We therefore do not discuss these models in further detail, focusing instead on the best-performing models and on language-bias.
For the rest of our discussion, we base our observations on Table \ref{tab:accuracies} and Figure \ref{fig:accuracies}. The comparison between datasets we present follows Figure \ref{subfig:comparison_mac_lcgn}.
\subsection{Analysis of performance per-question}
\textbf{Spatial relationship-based}
Answering questions 1, 2, 3, and 4 requires models to generalise compositionally with respect to spatial relationships. All the models perform almost randomly on these questions. This indicates an inability to learn spatial relationships in this more complex setting, as well as an inability to generalise compositionally. We believe that these four questions could be a good evaluator of compositionality, more so considering that MAC and LCGN achieve close to 99\% accuracy on CLEVR and non-random performance on GQA. We also see that the language-only model performs poorly on these questions, indicating little bias.
\textbf{Simple reasoning-based}
Answering questions 5 and 6 requires a model to reason based on comparison and counting. We see that MAC and LCGN perform reasonably well here, achieving over 70\%. However, there is still significant scope for improvement. This, again, indicates that UOUC is a challenging dataset for VQA. An interesting point to note is that, while both question 4 and question 6 deal with hypothesised changes to scenes, the best-performing models achieve non-random performance on question 6 alone. This could be seen as evidence that models can learn to understand complex tasks, such as text-only changes, but that compositional generalisation is not easy to demonstrate. The language-only model performs at a slightly higher accuracy on these questions, indicating some language-bias. However, this bias is not significantly high, and the gap between the best-performing models and this model is significant. This shows that the bias is not high enough for the questions to be answered with high accuracy from text alone.
\textbf{Perception}
Question 7 evaluates a model's ability to recognise objects in a scene, given a description. We see that MAC and LCGN perform very well, exceeding 90\% on this question. This leads us to two conclusions. One, that pure perception is comparatively easy for advanced models. Two, that the low performance on the previous questions is not purely due to the complexity of the objects. In other words, the compositional separation between the train and test sets poses a greater challenge than the complex appearances of objects. The language-only model achieves around 66\% accuracy, which indicates a certain bias in the question-generation. However, again, the gap between the best-performing models and the language-only model is quite large, indicating that visual cues are needed to achieve good accuracy.
\textbf{Memory-based}
Questions 8, 9, and 10 can be answered from text alone. However, visual cues can help in answering question 9. We see that MAC and LCGN perform well on all these questions, as does the language-only model. Surprisingly, SAAA and MUTAN achieve non-random performance on question 9. We believe further investigation is necessary to understand this behaviour. We also surmise that MAC and LCGN are able to use visual cues along with the text to achieve good performance on question 9. The language-only model performs poorly at memory-based compositional generalisation. Thus, another explanation for the higher performance of MAC and LCGN on question 9 could be the presence of a 'reasoning-based' architectural design. Again, more investigation is necessary to understand this.
\section{Conclusion}
We introduce a dataset, UOUC, to evaluate compositional generalisation in VQA models. We show that existing datasets do not simultaneously offer complex perceptual tasks and strong evaluations of compositional generalisation. UOUC fills this need by presenting a reasonably large and complex object-set, while ensuring compositional separation between the train and test datasets.
UOUC has 200,000 train scenes and 30,000 test scenes, with objects drawn from an object set of 380 classes that come from 528 characters of the Dungeons and Dragons game. UOUC is constructed such that no objects commonly co-occur in both the train and test sets.
Evaluation of models on UOUC shows that models need a better grasp of compositional generalisation, and that UOUC poses a strong challenge in this respect. We also investigated the use of memorisation questions in VQA. The low performance of certain VQA models on these suggests their use as a sanity-check. UOUC could be improved by considering more complex, game-style questions, which would broaden the scope of models that can be evaluated using it. Another improvement would be a harder dataset that uses free positioning of objects. However, even the current, easier version remains a hard challenge. We believe that using UOUC as a test for compositional generalisation will allow for more trust in the performance of VQA models.
\section{Supplementary material for the paper}
\section{More scenes of UOUC (in reference to section 1)}
We present more examples of scenes and objects, so that the complexity of UOUC may be better appreciated. Figure \ref{image} presents examples of scenes. Please note that while the base scene is not complex, the individual objects exhibit great complexity. Figure \ref{fig:object} presents examples of objects at a higher resolution, to give a better feel for their appearance.
\begin{figure*}
\begin{center}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group0scene0.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group1scene0.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group2scene0.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group3scene0.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group4scene0.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group5scene0.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group6scene0.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group7scene0.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group8scene0.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group9scene0.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group1scene1.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group1scene10.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group1scene100.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group1scene1000.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group1scene1001.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group1scene10000.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group1scene10001.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group1scene10002.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group1scene10003.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group1scene10005.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group1scene10006.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group1scene10007.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group1scene10008.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group1scene10009.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group1scene10010.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group1scene10011.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group1scene10012.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group1scene10013.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group1scene10015.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group3scene1.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group3scene10.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group3scene100.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group3scene1000.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group3scene1001.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group3scene10001.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group3scene10002.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group3scene10003.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group3scene10004.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group3scene10005.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group3scene10006.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group3scene10007.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group3scene10008.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group3scene10009.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group3scene10010.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group3scene10011.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group3scene10012.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group3scene10013.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group3scene10014.png}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/group3scene10000.png}
\end{subfigure}
\caption{Example scenes of UOUC.}
\label{image}
\end{center}
\end{figure*}
\begin{figure}
\centering
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/img4.png}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/img5.png}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/img6.png}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/img7.png}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/img8.png}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/img9.png}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.15\textwidth}
\includegraphics[width=\linewidth]{images/img10.png}
\caption{}
\end{subfigure}
\caption{Examples of objects in UOUC. Note the complexity and diversity of appearance.}
\label{fig:object}
\end{figure}
\section{Some further statistics for UOUC (in reference to section 4)}
We present statistics on the uniformity of attribute occurrence in the scenes of the train and test sets, and on the distribution of answers for each question-type.
In Figure \ref{fig:my_label1}, we provide a representation of the relative numbers of objects with certain attributes for the train data. In Figure \ref{fig:my_label2}, a similar representation is given for the test data. Each coloured rectangle within the figures represents an attribute. Boxes within a rectangle represent specific values taken by the corresponding attribute. For example, consider the blue rectangle for the train data. This rectangle represents the category of the objects in the scene. The sizes of the boxes in this rectangle represent the proportions in which the values of the category occur: a larger box indicates a larger proportion. A well-balanced attribute would have boxes of similar sizes. While some boxes are labelled, those relating to values with lower proportions are not.
We see that most objects, in both the train and test data, are categorised as adventurers, since adventurers form the bulk of the object set. However, objects from other categories are also present in a noticeable fraction. Thus, while adventurer-objects are present in most scenes, objects of other categories are likely to be present in most scenes as well.
We see that most objects that have a gender-property are assigned 'male'. This was not by conscious design: most objects of categories associated with a gender had features generally accepted to be male. We believe that this dataset has no direct application in scenarios where gender-bias could affect people, and hence believe it has no negative social impact. That said, we aim to extend the object set of UOUC to make it gender-balanced.
We see that there is a significant imbalance in the objects having mounts: most do not. Given the presence of a number of attributes that occur infrequently in objects, it can be expected that, even for memorisation-questions, models must have a means of retaining such information to achieve high performance. A test for this would be to construct a test set consisting of objects with these properties. This has not been done in our work, as compositionality is our main area of study. The fact that the train and test sets have a similar distribution of attributes indicates that no additional difficulty has been introduced beyond object co-occurrence.
In Figure \ref{fig:label}, we present a representation of the uniformity of answers for each question-type for the train scenes. In Figure \ref{fig:label1}, we present the statistics for the test scenes. The representation is interpreted similarly to the representation of attributes. First, we see that questions 1, 2, 3, 4, and 7 are all well-balanced. Each of these has only two possible answers. Question 9, which deals with non-spatial, memory-based relationships, is not as balanced in its answer-distribution, as most pairs of animal-animal and animal-adventurer are non-attacking. This is a natural consequence of having few animals with a large predation-level.
Question 5 is not particularly poor in its balance, even though one of its answers occurs more frequently than the others, because its other answers also occur in good proportion. Questions 6 and 10, on the other hand, have certain answers that occur comparatively infrequently.
Question 8 has multiple possible answers, many of which occur infrequently. This is related to the fact that the objects themselves have a similar distribution of attributes. Again, we see that the train and test sets are similar in distribution, indicating no added difficulty in the test set apart from the co-occurrence of objects.
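The notion of answer balance discussed above can be quantified. One simple choice (our own illustrative metric, not one used in constructing UOUC) is the normalised Shannon entropy of an answer distribution, which is 1.0 for a perfectly balanced question and approaches 0 as one answer dominates:

```python
import math

def normalised_entropy(counts):
    """Shannon entropy of an answer-count distribution divided by its
    maximum (log of the number of answer classes); 1.0 = perfectly balanced."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    max_entropy = math.log(len(counts))
    return entropy / max_entropy if max_entropy > 0 else 1.0

# A well-balanced binary question vs. a skewed three-answer question.
balanced = normalised_entropy([5000, 5000])    # 1.0
skewed = normalised_entropy([9000, 500, 500])  # well below 1.0
```

Applied per question-type, such a score would make the visual comparison between the train and test distributions in the figures precise.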
\begin{figure*}
\centering
\begin{subfigure}[t]{\textwidth}
\includegraphics[width=\linewidth]{images/train.png}
\caption{The distribution of properties among objects of the train data.}
\label{fig:my_label1}
\end{subfigure}
\begin{subfigure}[t]{\textwidth}
\includegraphics[width=\linewidth]{images/count.png}
\caption{The distribution of properties among objects of the test data.}
\label{fig:my_label2}
\end{subfigure}
\caption{Larger rectangles indicate a greater proportion of instances of the properties (figures best viewed in colour).}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}[t]{\textwidth}
\includegraphics[width=\linewidth]{images/train_count.png}
\caption{The distribution of answers among questions of the train data.}
\label{fig:label}
\end{subfigure}
\begin{subfigure}[t]{\textwidth}
\includegraphics[width=\linewidth]{images/test_count.png}
\caption{The distribution of answers among questions of the test data.}
\label{fig:label1}
\end{subfigure}
\caption{Larger rectangles indicate a greater proportion of answers of a question-type (figures best viewed in colour).}
\end{figure*}
\section{Some other details about the training of models (in reference to section 5)}
The following are the sources of the implementations of the models we used for VQA. We used the PyTorch implementation of MAC from \url{https://github.com/rosinality/mac-network-pytorch}, SAAA~\cite{kazemi2017show} from \url{https://github.com/Cyanogenoid/pytorch-vqa}, the PyTorch implementation of MUTAN from \url{https://github.com/Cadene/vqa.pytorch}, and LCGN from \url{https://github.com/ronghanghu/lcgn/tree/pytorch}.
We trained MAC, MUTAN, LCGN, and SAAA for 50 epochs. We trained each of the image-only and language-only models for 100 epochs. Other hyperparameters, such as the learning rate, the number of layers, and the learning-rate schedule, follow those used in the original sources.
\bibliographystyle{plain}
\section{Introduction}
Future facilities to enable experimental enquiry into the five intertwined science drivers~\cite{P5} identified by the particle physics community have been studied extensively. Three of the five HEP science drivers are: the use of the Higgs boson as a new tool for discovery, the identification of the new physics of dark matter, and the exploration of unknown new particles, interactions, and physical principles. Progress in these areas will be enabled by new collider facilities at high energies and high luminosities.
A Higgs Factory would provide improved precision over the LHC, resolving Higgs properties 10 times better, and will enable a broad range of investigations across the fields of fundamental physics, including the mechanism of electroweak symmetry breaking, the origin of the masses and mixing of fundamental particles, the predominance of matter over antimatter, and the nature of dark matter. The cleaner $e^+e^-$ environment aided by beam polarization available at linear colliders could become a sensitive probe to reveal more subtle phenomena.
The International Linear Collider~\cite{ILC} (ILC), based on superconducting RF technology, has the most advanced design.
The science case for the electron-positron Higgs factory has been well developed by the ILC community. Further refinements of the physics case were also made by the Future Circular Collider community, using their plans for a circular electron-positron facility (FCC-ee)~\cite{FCC} and CEPC~\cite{CEPCStudyGroup:2018ghi}. As opposed to linear machines, circular colliders are strongly limited by synchrotron radiation above 350--400 GeV center of mass energy. Moreover, a linear collider allows for trigger-less operation, which could also open sensitivity to new physics signatures that go undetected at the LHC.
The international community, represented by the ICFA, has expressed interest in the ILC as a global project, and the ILC is now under consideration for construction in Japan.
However, Japan has not yet initiated a process to host this collider. In view of this, it is appropriate to consider other options.
The FCC-ee machine is planned to begin seven years after the end of the HL-LHC program and is expected to start in 2048. It will require a large upfront civil engineering cost to build a 100-km tunnel. The ultimate goal of a proton machine (FCC-hh), which would start after the FCC-ee, is also acknowledged as attractive.
It is therefore prudent to investigate alternative plans based on technologies that could enable compact designs and possibly provide a roadmap to extend the energy reach of future colliders.
The goals for a worthwhile alternative plan are two-fold: a) a near-term, cost-effective Higgs factory that could fit within an existing laboratory site and b) a longer-term prospect for accessing higher energies in lepton collisions, again in a cost-effective fashion.
The Cool Copper Collider (C$^{3}$~)~\cite{cccwhitepaper} is a relatively new proposal to build a Higgs Factory with a 250 GeV collision energy, based on a technology that offers the option of an adiabatic upgrade to 550 GeV and a possible extension to the TeV-scale. Beam parameters for C$^{3}$~-250 and C$^{3}$~-550 are given in Table~\ref{tab:cccpar}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{c c c }
\hline
CM Energy [GeV] & 250 & 550 \\
Luminosity [$\times 10^{34}$ cm$^{-2}$s$^{-1}$] & 1.3 & 2.4 \\
Gradient [MeV/m] & 70 & 120 \\
Effective Gradient [MeV/m] & 63 & 108 \\ %
Length [km] & 8 & 8 \\
Num. Bunches per Train & 133 & 75 \\
Train Rep. Rate [Hz] & 120 & 120 \\
Bunch Spacing [ns] & 5.26 & 3.5 \\
Bunch Charge [nC] & 1 & 1 \\
Crossing Angle [rad] & 0.014 & 0.014\\
Site Power [MW] & $\sim$150 & $\sim$175 \\
\hline
\end{tabular}
\caption{Beam parameters for C$^{3}$~. The final focus parameters are preliminary.\label{tab:cccpar}}
\end{center}
\end{table}
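As a rough consistency check of these parameters (our own back-of-the-envelope sketch, not a number from the C$^{3}$~ design), the average beam power at 250 GeV follows from the bunch charge, bunches per train, and train repetition rate:

```python
# Back-of-the-envelope average beam power per beam at 250 GeV CM,
# using the values in Table 1 (an illustrative estimate only).
bunch_charge_C = 1e-9        # 1 nC bunch charge
beam_energy_eV = 125e9       # 125 GeV per beam at 250 GeV CM
bunches_per_train = 133
train_rate_Hz = 120

# A 1 nC bunch accelerated through 125 GV carries Q * V = 125 J.
energy_per_bunch_J = bunch_charge_C * beam_energy_eV
beam_power_MW = energy_per_bunch_J * bunches_per_train * train_rate_Hz / 1e6
# ~2 MW per beam, small compared to the ~150 MW site power
```

The factor between this delivered beam power and the quoted site power reflects the RF-generation, cryogenic, and facility overheads.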
While several competing technologies for electron-positron Higgs factories exist, only high-gradient linear machines are likely to be affordable at the TeV-scale. At that scale, we could probe the Higgs self-coupling and the top-Yukawa coupling to a precision that sheds light on a broad class of puzzles around the Higgs boson. Such a TeV-scale upgrade also opens a gateway to TeV-scale new physics, especially unexplored or underexplored weakly coupled states that may well escape detection at the HL-LHC. New physics of this kind is well-motivated by general theoretical considerations, such as supersymmetry, compositeness, generic models with hidden dynamics, and dark matter.
C$^{3}$~ can also probe a broad range of TeV-scale new physics, potentially explaining the g-2 and flavor physics anomalies.
Broader impacts of C$^{3}$~ technologies are also important to consider. Its high-gradient technology will enable compact electron and X-ray photon sources, which are in high demand for medical, materials science, and other applications. An invigorated community of particle and accelerator physicists, pushing the limits of technology for particle acceleration, detection, measurement and analysis, are likely to attract and train a strong workforce.
The C$^{3}$~ concept is thus an attractive strategy to address our community's need for an $e^+e^-$ Higgs factory. The purpose of this document is to encourage our community to support R\&D and participate in defining the emerging C$^{3}$~ option as part of the Snowmass community study process.
\section{The C$^{3}$~ Concept}
C$^{3}$~ is based on a fundamental study and optimization of electromagnetic cavities for high accelerating fields on axis (the gradient) and low breakdown rates. This optimization yields a design with an iris too small to propagate the fundamental cavity mode. This is solved by a second innovation: a distributed manifold that supplies RF power to each cavity separately, with the proper phase and the proper fraction of the RF power delivered through waveguide coupling. The resulting structure, although far too complex for traditional assembly techniques, is straightforwardly built from roughly two-meter-long Cu slabs machined by numerically controlled milling machines. It is noteworthy that C$^{3}$~ could not have been realized without modern supercomputers for the RF design and modern NC machining techniques.
The linac is cooled to $\approx 80$~K by liquid nitrogen to reduce the RF power requirements and increase the acceleration gradient, upwards of 150 MeV/m~\cite{nasr2021experimental,bane2018advanced}. Thus, the acceleration gradient of the C$^{3}$~ linacs~\cite{cccwhitepaper} is an order of magnitude above that of the SLC, a factor of 4 above that of the ILC~\cite{ILC}, and a factor of two above the normal-conducting NLC design~\cite{NLC}. C$^{3}$~ plans to reuse the final focus design of the ILC, which is optimized for up to 1 TeV operation, recouping much of the progress made in its design.
The proposed C$^{3}$~ has an 8 km footprint that can reach 250 GeV center of mass energy using innovative technologies, with the possibility of a relatively inexpensive upgrade to 550 GeV on the same footprint, adding only more RF sources.
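A quick sanity check of how both stages fit the same footprint (our own estimate from the effective gradients in Table~\ref{tab:cccpar}, not an official layout) is to divide the per-beam energy by the effective gradient:

```python
# Active linac length implied by the effective gradients in Table 1
# (an illustrative estimate; the real 8 km footprint also includes
# the sources, damping rings, and the ~1.5 km BDS on each side).
def linac_length_m(beam_energy_GeV, eff_gradient_MeV_per_m):
    # beam energy in MeV divided by gradient in MeV/m gives meters
    return beam_energy_GeV * 1e3 / eff_gradient_MeV_per_m

l_250 = linac_length_m(125, 63)    # ~2.0 km per linac at 250 GeV CM
l_550 = linac_length_m(275, 108)   # ~2.5 km per linac at 550 GeV CM
```

This is why raising the gradient from 63 to 108 MeV/m lets the 550 GeV stage fit the same 8 km site with only additional RF sources.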
The linac is constructed from 9~m cryomodules, each of which houses eight 1~m distributed coupling, accelerating structures supported on four 2~m rafts. Each raft supports two 1 m accelerator sections and a quadrupole with a mechanically integrated Beam Position Monitor, coarse and fine alignment movers, and position sensor fixtures. The cryomodule has four penetrations for RF power, each with two waveguides, with each waveguide powering one accelerating structure. The total cryogenic thermal load for the complex at 250~GeV is 9~MW. This thermal load is removed through nucleate boiling of liquid nitrogen. Liquid nitrogen flows by gravity through the cryomodule to replace the nitrogen that has evaporated. Thus, the linac must be horizontal with straight segments of about 1~km. Nitrogen gas is removed from the linac at penetrations that are spaced at approximately 1~km and transported to a re-liquefaction plant before being reintroduced into the linac.
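For scale, the nitrogen boil-off implied by a 9 MW load can be sketched as follows; the latent heat and density are standard textbook values for liquid nitrogen at 77 K that we assume here, not quantities from the C$^{3}$~ design:

```python
# Order-of-magnitude LN2 boil-off needed to absorb the 9 MW cryogenic
# load by nucleate boiling. Latent heat (~199 kJ/kg) and density
# (~807 kg/m^3) are assumed textbook values for N2 at 77 K.
thermal_load_W = 9e6
latent_heat_J_per_kg = 1.99e5
ln2_density_kg_per_m3 = 807.0

boiloff_kg_per_s = thermal_load_W / latent_heat_J_per_kg       # ~45 kg/s
boiloff_m3_per_h = boiloff_kg_per_s / ln2_density_kg_per_m3 * 3600
```

A boil-off of this order, roughly 200 m$^3$ of liquid per hour across the whole complex, motivates the ~1 km spacing of gas-extraction penetrations and the on-site re-liquefaction plant.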
Great effort has been expended towards the design and optimization of the accelerator complex for earlier linear collider designs. This excellent work can be leveraged to understand the pre-conceptual layout for the full C$^{3}$~ accelerator complex. In particular, the ILC designs for the electron and positron sources, the damping rings, and the beam delivery system can be taken over directly for C$^{3}$~, with small changes to accommodate the C$^{3}$~ beam format. As the design matures, these systems will be revised and further optimized.
The baseline electron (polarized) and positron (unpolarized) sources are conventional LC designs. For the electron source, this consists of a polarized DC gun, buncher, and accelerator. However, we are also exploring the possibility of a polarized RF gun. For the positron source, the closest design relevant to the C$^{3}$~ bunch structure is the CLIC design~\cite{linssen2012physics} (see sec. 3.1.3.2). Positron polarization is a possible upgrade once the full RF system is installed. An additional electron bunch train would be extracted from the Main Linac at high energy (125 GeV) and sent to an undulator for polarized photon production and transport to a positron production target. The positron beam must be transported to a damping ring before being accelerated by the Main Linac.
For the damping rings, a conservative design has two damping rings, one for the electrons and one for the positrons. A pre-damping ring is also utilized for the positron beam. Four electron and positron bunch trains are stored in each of the damping rings. The main damping rings have a $\sim$900~m circumference. The electron damping ring might be eliminated with the advent of a polarized RF photo-injector.
Scaling the beam delivery system (BDS) for a maximum single-beam energy of 275~GeV, down from 500~GeV in the ILC TDR~\cite{Bambade:2019fyw}, reduces the length of the BDS by 500~m. Furthermore, we also remove the upstream polarimeter in favor of the downstream polarimeter, reducing the length by an additional 200~m. The assumption of a single downstream polarimeter will be weighed against the benefits of including an upstream polarimeter during the CDR preparatory phase. For C$^{3}$~, the downstream polarimeter can measure the polarization and the depolarization from beam-beam interactions by comparing interacting and separated beams. The length of the BDS is approximately 1.5~km on each side. Preliminary simulations for 250~GeV CM indicate that a 1.2~km BDS is feasible for that energy~\cite{cccwhitepaper}.
\section{The C$^{3}$~ Higgs Factory at 250 GeV}
The planned High Luminosity era of the LHC (HL-LHC), starting in 2029\footnote{This refers to the updated schedule presented in January 2022~\cite{LHCschedule}. As the LHC schedule is evolving, the starting date of HL-LHC could change.}, will extend the LHC dataset by a factor of 10, allowing an increase in the precision of most of the Higgs boson coupling measurements~\cite{cepeda2019higgs}. Studies based on the 3000 fb$^{-1}$ HL-LHC dataset estimate that we could achieve 2-4\% precision on the couplings to $W$, $Z$, and third generation fermions. But the couplings to first and second generation fermions will still largely be inaccessible, and the self-coupling will only be probed with O(50\%) precision.
There are good reasons to measure the Higgs boson properties at higher precision than will be possible at the HL-LHC. With the basic Higgs mechanism for mass generation now demonstrated, the next task for Higgs studies is to search for the influence of new interactions that can explain why the Higgs field has the properties required in the SM. If the new particles associated with these interactions are too heavy to be produced at the HL-LHC, they can still make an imprint on the pattern of Higgs boson couplings. If we wish to prove the existence of these effects and to understand their pattern, we will need a very precise understanding of the Higgs boson properties, with measurements of 1\% or better. Through global analyses, such high precision measurements can also improve the interpretation of LHC data and lead to a stronger comprehensive map of where new physics might lie.
An $e^+e^-$ Higgs factory has a distinctive duty to gain insight into the Higgs Yukawa couplings beyond the third generation fermions. In the SM, the Higgs Yukawa couplings are exactly proportional to mass. This simple picture deserves close scrutiny. Tagging of charm and strange quarks, as previously demonstrated at SLC/LEP, gives effective probes for advancing this program. There is a broader program to investigate a potential deeper connection of the Higgs boson with flavor and CP violation.
C$^{3}$~ follows a program very similar to that proposed for the ILC~\cite{Bambade:2019fyw}. We thus expect similar results, subject to considerations described below, for similar integrated luminosities and detector capabilities~\cite{cccwhitepaper}. For C$^{3}$~ we assume a conventional positron source with zero polarization, as opposed to 30\% for ILC. There is almost no difference between the two cases in the expected precision in Higgs boson couplings for a given luminosity~\cite{Fujii:2018mli}. Positron polarization does allow the collection of additional datasets that may be useful in controlling systematic errors. C$^{3}$~ is compatible with the addition of a polarized positron source as an upgrade. For energies well above 500 GeV, the enhanced cross section for $e^-_Le^+_R$ collisions makes this advantageous.
C$^{3}$~ is expected to be cost-effective in reaching an energy of 500-600~GeV. But its key advantage is that it provides a more secure basis for extension to higher energies.
\section{Upgrade to 550 GeV}
Although most of the gain in precision in Higgs boson couplings will already be realized at the 250~GeV stage, there are important reasons to continue the $e^+e^-$ program to an energy of 500--600~GeV. First, this energy is above the crossover point at which the $WW$ fusion reaction $e^+e^-\to \nu\bar\nu h$ overtakes the $e^+e^-\to Zh$ reaction and becomes the dominant mode of Higgs boson production. This means that, in the 550~GeV data, the Higgs boson is mainly produced by a different mechanism with different sources of systematic errors. The additional C$^{3}$~-550 dataset will then provide independent channels for the Higgs coupling measurements. The analysis of measurements at two different energies will provide a more robust interpretation of the Higgs couplings by also breaking degeneracies. This would be key to interpreting possible deviations from the SM predictions observed at 250 GeV, which could then be cross-checked in the 550 GeV dataset. The $\sigma\cdot BR$ for $WW$ fusion to $h$ with decay to $WW^*$ depends quartically on the Higgs-$W$ coupling and is thus a very powerful addition to the dataset. Second, this energy is needed to give a substantial cross section for the process $e^+e^-\to Zhh$, which allows us to measure the Higgs self-coupling. The expected 20\% error on the Higgs self-coupling will allow us to exclude or demonstrate at 5~$\sigma$ the large enhancements to the self-coupling typically needed in models of electroweak baryogenesis~\cite{DiMicco:2019ngk}. Third, this energy choice is well above the threshold for $e^+e^-\to t\bar t$, far enough that top quark pairs are produced with relativistic velocities.
For the measurement of the top quark Yukawa coupling through $e^+e^-\to t \bar{t} h$, it is important to go even above 500~GeV.
The statistical error on this measurement decreases by a factor 2 when one chooses 550~GeV rather than 500~GeV as the CM energy~\cite{Fujii:2015jha}, as the $e^+e^-\to t \bar t h$ cross section rapidly increases as a function of $\sqrt{s}$. In principle, higher energy gives more of an advantage, but this must be balanced against increased cost and footprint. For the purpose of this paper, we have set the design energy of C$^{3}$~ at 550~GeV.
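The quoted factor-of-two improvement is consistent with simple statistical scaling. For fixed integrated luminosity $L$ and selection efficiency $\epsilon$ (a schematic estimate, ignoring changes in backgrounds and acceptance):

```latex
\left.\frac{\Delta y_t}{y_t}\right|_{\rm stat}
  \;\propto\; \frac{1}{\sqrt{\sigma_{t\bar t h}(\sqrt{s})\,L\,\epsilon}}\,,
```

so halving the statistical error corresponds to roughly a fourfold increase in $\sigma_{t\bar t h}$ between 500 and 550~GeV.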
A crucial difference between models in which the Higgs boson is elementary and those in which it is composite is that, in the latter class, the top quark is also partially composite, with modified $W$ and $Z$ couplings. The 550~GeV measurements have great power to test for this property. At 500--600~GeV, the top quarks are relativistic, and it becomes possible to separate the various independent form factors. Beam polarization is also very important for these measurements~\cite{Schmidt:1994bq}. Moreover, at 550~GeV, an $e^+e^-$ collider is as powerful as the HL-LHC in searches for $Z'$ bosons and for light quark and lepton compositeness, and it provides a complementary set of tests~\cite{fujii2019tests}.
\section{Upgrade to TeV-scale}
The C$^{3}$~ linacs can potentially be lengthened to increase the center-of-mass energy to about 1~TeV. Prior planning of the site for length extension, and possible relocation of upstream components, is necessary. Assuming the TeV scale can be reached, a measurement of $e^+e^- \to \nu\bar\nu hh$ becomes possible with an integrated luminosity of a few ab$^{-1}$~\cite{Barklow:2017awn}.
A TeV-scale upgrade enables new and exciting physics opportunities across the board. The Higgs precision achieved in the 550~GeV run will be further improved, particularly for the Higgs trilinear and top-Higgs Yukawa couplings.
These precision coupling improvements allow us to probe compositeness scales well beyond a TeV, outside the reach of the LHC~\cite{deBlas:2018mhx}.
Weakly coupled new physics could be buried under the QCD background of a hadron-collider environment. The TeV-scale C$^{3}$~ will provide definitive answers and insights about new physics within its kinematic reach. For instance, well-motivated weak-scale lepton partners (such as sleptons) and electroweak states (such as electroweakinos) can be discovered at C$^{3}$~. In particular, various recent hints from experimental data, such as the muon $g-2$ anomaly~\cite{Athron:2021iuf} and lepton-flavor-universality violations in $B$ meson systems~\cite{Buttazzo:2017ixm}, have a large fraction, if not all, of their motivated and allowed parameter regions at the TeV scale. The TeV upgrade of C$^{3}$~ can pair-produce these states and detect them directly, or access them through precision measurements of their loop-induced effects. Dark matter candidates, and invisible states more generally, can be revealed through missing-energy searches~\cite{Han:2020uak}. Other exotic signatures, such as long-lived particles or dark showers, can also be probed at the upgraded C$^{3}$~\cite{deBlas:2018mhx}. TeV-scale lepton colliders offer many further physics opportunities; see, e.g., the reviews~\cite{EuropeanStrategyforParticlePhysicsPreparatoryGroup:2019qin,AlAli:2021let}.
\section{Site Options}
A C$^{3}$~ $e^+e^-$ collider could in principle be sited anywhere in the world. Projects of this magnitude will be presented and reviewed in international fora, and a community decision will be made regarding the actual site selection. Nevertheless, C$^{3}$~ offers a unique opportunity to realize an affordable energy-frontier facility in the US in the near term; the entire C$^{3}$~ program could be sited within the existing US national laboratories.
For instance, the Fermilab site can fit a linear machine with a 7-km footprint entirely within its boundaries (see Fig.~\ref{fig:LC_7kmLayout}) in a northeast--southwest (NE--SW) orientation. The currently proposed 8-km-footprint C$^{3}$~, reaching up to 550~GeV, can also be accommodated at Fermilab, with about 5~km of the footprint inside the Lab site and the facility extending under the Commonwealth Edison power company's easement to the north of the site, in a north--south (N--S) orientation. It is also possible to extend the footprint further, up to 12~km in this orientation, while still keeping the interaction region of the collider within the Lab campus. This siting option, shown in Fig.~\ref{fig:LC_8kmLayout}, was in fact one of the options studied for the ILC at Fermilab. Using the full 12~km length would provide upgrade paths to collision energies of 750~GeV or higher.
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{7km_C3_at_FNAL.pdf}
\caption{Possible locations for a C$^{3}$~ linear collider with a 7-km footprint on the Fermilab site.}
\label{fig:LC_7kmLayout}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{8km_C3_at_FNAL_site.pdf}
\caption{The 8-km footprint, consisting of 5~km inside the Fermilab site and extending the facility under the Commonwealth Edison power company's easement.}
\label{fig:LC_8kmLayout}
\end{figure}
Further optimization of the final focus might allow the 8-km machine, supporting the energy upgrade to 550~GeV, to fit within the laboratory boundary itself in the NE--SW orientation. The details of these siting options are discussed in~\cite{FCGwhitepaper}.
The initial exploration at the C$^{3}$~ Higgs Factory could position the US to lead the drive to the TeV scale, which requires larger machines. Sites such as the DOE Hanford site have room to accommodate machines with even larger footprints within their boundaries.
If the ILC goes forward in Japan, an energy upgrade using C$^{3}$~ accelerators could be built, re-using the ILC damping rings, tunnel, and other conventional facilities. Such a machine could reach the TeV-scale as well.
\section{Detectors for C$^{3}$~ Higgs Factory}
A detector for the C$^{3}$~ Higgs Factory must provide hermetic coverage with excellent charged-particle momentum resolution, muon, electron, and photon identification, and neutral-particle energy resolution. The detector must have sufficient segmentation to enable good jet reconstruction using particle-flow techniques. For instance, the resolutions should enable reconstruction of the recoil mass in the $Zh$ final state. It is anticipated that a material profile of less than 20\% X$_0$ is required for the tracker. Several existing $e^+e^-$ collider detector designs could be adapted for operation at C$^{3}$~. While detailed simulation studies are required to fully assess the effects of the timing structure of the collisions and of beam-induced backgrounds in the detector, which vary from one collider to another, we anticipate that several existing designs can be adapted for C$^{3}$~ in a straightforward way. Additionally, new detector technologies developed in the LHC program could be suitable for C$^{3}$~ detector elements, for example MAPS technology for silicon tracking and highly segmented calorimetry, or dual-readout calorimetry. As the C$^{3}$~ community grows, we anticipate that several detector concepts will emerge, with these technologies playing a role in defining a C$^{3}$~ detector. If there is community interest in and support for implementing two complementary detector designs, C$^{3}$~ is also compatible with an ILC-like push-pull scheme with two detectors, leveraging multiple state-of-the-art options in calorimetry and tracking.
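As an illustration of the recoil-mass technique mentioned above, the sketch below computes the mass recoiling against a reconstructed $Z$ from two-body kinematics; the beam energy and particle masses are illustrative inputs, not detector-simulation results.

```python
import math

def recoil_mass(sqrt_s, E_Z, p_Z):
    """Mass recoiling against a reconstructed Z in e+e- -> Zh.

    M_rec^2 = (sqrt(s) - E_Z)^2 - |p_Z|^2 for head-on beams,
    where E_Z and p_Z are the Z's lab-frame energy and momentum.
    """
    m2 = (sqrt_s - E_Z) ** 2 - p_Z ** 2
    return math.sqrt(max(m2, 0.0))

# At sqrt(s) = 250 GeV, a Z recoiling against a 125 GeV Higgs carries
# E_Z = (s + m_Z^2 - m_h^2) / (2 sqrt(s)) in the two-body approximation.
sqrt_s, m_Z, m_h = 250.0, 91.19, 125.0
E_Z = (sqrt_s**2 + m_Z**2 - m_h**2) / (2 * sqrt_s)
p_Z = math.sqrt(E_Z**2 - m_Z**2)
print(round(recoil_mass(sqrt_s, E_Z, p_Z), 2))  # -> 125.0
```

The recoil mass reproduces the Higgs mass without reconstructing the Higgs decay products at all, which is why the technique drives the momentum-resolution requirement on the tracker.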
The time structure of the bunch crossings differs between C$^{3}$~ and the ILC: 120~Hz repetition with 5~ns bunch separation at C$^{3}$~, versus 5~Hz with 336~ns bunch separation at the ILC. Thus the existing detector designs cannot be used directly for C$^{3}$~ studies. Nevertheless, the ILC detector concepts ILD and SiD can be adapted to work with the short time between bunch crossings in a C$^{3}$~ bunch train. For instance, the CLICdp detector, which is SiD-like, represents an evolution of an all-silicon tracker design to deal with both a different time structure and higher energies~\cite{linssen2012physics,Robson:2018enq,CLICdp:2018vnx}. Similarly, there are adaptations, based on ILD and CLICdp respectively, to the different, effectively continuous time structures of CEPC and FCC-ee~\cite{CEPCStudyGroup:2018ghi,Bacchetta:2019fmz}.
An adaptation of the SiD detector to the C$^{3}$~ environment has now begun, as discussed here. This concept is centered on a compact, cost-constrained design that addresses the full range of searches for new physics and precision measurements at a future high-energy electron-positron linear collider. SiD is a silicon-based detector in a high magnetic field, with excellent tracking and particle-flow calorimetry, suitable for the C$^{3}$~ environment with small modifications. Major design features are a robust silicon vertexing and tracking system, highly segmented calorimeters optimized for particle flow, and a compact design with a 5~T solenoid field. The SiD vertex detector consists of five barrel layers and forward/backward disks, with power pulsing and forced-air cooling. Requirements include a hit spatial resolution of 3~$\mu$m or better and a material budget of 0.1\% X$_{0}$. The combination of the 5~T magnetic field and the envelope of the pair background allows a first sensor layer at 14~mm from the beam.
The group working on the SiD-like detector for C$^{3}$~ has prepared a list of items to study. The main questions to be addressed include whether single-bunch tagging is needed and, if so, what time resolution it requires. For instance, the pair-background and occupancy studies need to be repeated for the specific C$^{3}$~ environment. For both the vertex detector and the main tracker, forced-air cooling and power pulsing should still be achievable at C$^{3}$~, but this needs to be studied. The 8.3~ms interval between bunch trains at C$^{3}$~ is more than adequate for power pulsing, which requires only a few hundred microseconds to settle after turning on. Other areas to explore, all in relation to the pair background, are the radius of the vertex detector's inner layer, the beam pipe design, and the central magnetic field value.
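The timing figures discussed in this section (120 Hz train repetition, 5 ns bunch spacing, and the 8.3 ms interval available for power pulsing) can be cross-checked with a few lines; the bunch count per train is an illustrative assumption, not a design number.

```python
TRAIN_RATE_HZ = 120       # train repetition rate quoted in the text
BUNCH_SPACING_S = 5e-9    # bunch separation quoted in the text
N_BUNCHES = 133           # assumed per-train bunch count, for illustration

train_period_s = 1.0 / TRAIN_RATE_HZ               # time between trains
train_length_s = (N_BUNCHES - 1) * BUNCH_SPACING_S  # active train length
duty_factor = train_length_s / train_period_s       # beam-on fraction

print(f"{train_period_s * 1e3:.1f} ms")  # -> 8.3 ms, matching the text
print(f"{duty_factor:.2e}")              # tiny duty factor
```

The train period of 8.3 ms dwarfs both the sub-microsecond train itself and the few-hundred-microsecond settling time, which is why power pulsing fits comfortably between trains.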
With a dedicated optimization of the vertex detector and other subsystems, this SiD-like detector can be well adapted to the C$^{3}$~ time structure to achieve the physics goals. We therefore expect that the physics performance projected for the ILC can also be achieved at C$^{3}$~\cite{cccwhitepaper}.
\section{Broader Impacts}
\subsection{Applications of Compact Linacs}
R\&D on the C$^{3}$~ concept is already being pursued at SLAC, UCLA, INFN, LANL and Radiabeam, along with closely related research in high-gradient RF acceleration with CERN, KEK, PSI, MIT, and many other partners in the high-gradient research community~\cite{HG2021}. There is direct synergy with the development of compact electron accelerators for medical applications~\cite{Maxim2019,lu2021,Snively2020vhee,Nanni2020vhee}, high-flux X-ray sources for cargo-screening applications~\cite{weatherford2020modular}, and the creation of compact X-ray free-electron lasers (FELs)~\cite{rosen2020,graves2020}. The small size and low cost of compact X-ray sources based on C$^{3}$~ technology will find a wide range of customers, including individual university research laboratories and major medical research centers.
\subsection{Workforce Development}
The particle physics community has traditionally attracted brilliant students from the incoming classes of most university physics departments. The opportunity to work on fundamental problems while using, and developing as necessary, state-of-the-art technologies is a leading motivator for these students. C$^{3}$~ will provide much-needed impetus to make particle physics programs attractive to young and future generations. Because of the long lead times involved in the design, construction, and operation of such programs, it is important to involve the new generation entering graduate school early in the design process. Many of the early students who develop expertise in compact accelerator development or new detector technologies will go on to build C$^{3}$~ and conduct research using it, while others will contribute to society at large.
\subsection{Future HEP Facilities}
The C$^{3}$~ facility would represent a significant investment in the future of the Energy Frontier. Its design must allow for the extension of the facility's reach in energy as we are guided by the physics we discover during its operation. Indeed, this is inherent to the C$^{3}$~ concept even in its initial stages, through the facility's approach to scaling from 250~GeV to 550~GeV center-of-mass energy. While one possibility for extension in energy is the use of RF accelerator technology, either with a larger facility footprint or through the development of a higher-gradient main linac, we should also consider the adoption of advanced accelerator technology for a future upgrade. During the timeline of the C$^{3}$~ R\&D demonstration plan and the eventual preparation of a C$^{3}$~ TDR, advanced accelerator concepts will be working towards their own technical milestones for laser- and beam-driven plasma wakefield and structure wakefield acceleration~\cite{LPA,PWFA,SWA}. These studies may be advanced enough to inform requirements on the facility (for example, tunnel diameter, configuration of the BDS, and distribution of power and cooling) that would be minimally invasive to accommodate during initial construction but prohibitively expensive to retrofit for a future upgrade. Keeping this avenue open may save significant infrastructure and construction costs for future upgrades.
\section{Towards a C$^{3}$~ Conceptual Design}
Developing a conceptual design with a defensible cost estimate for a C$^{3}$~ Higgs Factory is a requirement for proposing a project to the US funding agencies and for attracting the worldwide community interested in Higgs studies to C$^{3}$~. The R\&D required to prepare a conceptual design for C$^{3}$~ centers on a proposed demonstration facility~\cite{CCCdemo} that will provide a full demonstration of the
C$^{3}$~ Main Linac technology on the GeV scale.
The outstanding technical achievements of the ILC, CLIC, and NLC collaborations are central to the rapid realization of the C$^{3}$~ proposal. Many of the subsystems for the accelerator complex are interchangeable between linear collider concepts with manageable modifications to account for variations in pulse format and beam energy. Because these subsystem designs are already mature, the C$^{3}$~ demonstration facility can focus on the specific set of technical milestones associated with the C$^{3}$~ concept itself:
\begin{itemize}
\item Development of a fully engineered and operational cryomodule including linac supports, alignment, magnets, BPMs, RF/electrical feedthroughs, liquid and gaseous nitrogen flow, and safety features.
\item Operation of the cryomodule under the full thermal load of the Main Linac and maximum liquid nitrogen flow velocity over the accelerators in the cryomodule, demonstrating an acceptable level of vibrations.
\item Operation with a multi-bunch photo-injector to induce wakefields, using high-charge bunches and a tunable-delay witness bunch.
\item Achievement of 120~MeV/m accelerating gradient in single bunch mode for an energy gain of 1~GeV in a single cryomodule, including tests at higher gradients to establish breakdown rates.
\item Acceleration and wakefield effect measurements with a fully damped-detuned structure.
\item Development, in partnership with industry, of the baseline C-band RF source unit that will be installed with the Main Linac. The RF source unit will be modified from existing industrial product lines.
\end{itemize}
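As a back-of-the-envelope check on the milestones above, the quoted 120~MeV/m gradient and 1~GeV energy gain imply the active accelerating length of a single cryomodule (ignoring packing factor and interconnects):

```python
GRADIENT_MEV_PER_M = 120.0  # target accelerating gradient from the milestone
ENERGY_GAIN_MEV = 1000.0    # 1 GeV energy gain per cryomodule

# Active structure length needed to reach the stated energy gain
active_length_m = ENERGY_GAIN_MEV / GRADIENT_MEV_PER_M
print(f"{active_length_m:.2f} m")  # -> 8.33 m of active structure
```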
\section{Outlook}
With a five-year C$^{3}$~ demonstrator project (described in~\cite{CCCdemo}), C$^{3}$~ will be ready for the US DOE project review process, on the way to attracting international participation in the C$^{3}$~ facility.
The timeline for developing the CDR is driven by the technical timeline for maturing the main linac technology as part of the C$^{3}$~ demonstration. Not all objectives of the demonstration are required for the CDR; however, early-stage technology maturation and demonstrations would provide the necessary level of detail for the CDR. The CDR would also rely heavily on the complementary efforts of the ILC and CLIC communities, which have advanced the readiness of the injector complex, damping rings, and BDS.
The C$^{3}$~ demonstrator project will involve multiple national laboratories and university groups. The demonstrator itself could revitalize the US HEP community, attracting bright students and postdoctoral fellows, especially those graduating from the LHC program at CERN. The commercialization of the accelerating structures and RF sources through the demonstrator project will enable long-term partnerships with industry to build C$^{3}$~ and will also create a new market for small-footprint X-ray sources.
The signatories of this white paper strongly believe that our community strategy for understanding Higgs physics is significantly enhanced by the development of the C$^{3}$~ project. We urge colleagues participating in the Snowmass community studies to strongly support the C$^{3}$~ demonstrator project, with the aim of preparing a conceptual design report within five years.
\bibliographystyle{atlasnote}