\section{Introduction}
Globally, the energy system is responsible for about 73.2\% of greenhouse gas emissions \cite{ritchie2020}. Deep reductions of greenhouse gas emissions in the energy system are key to achieving a net-zero future, limiting the rise in global temperatures to 1.5°C and preventing the daunting effects of climate change \cite{allen2018summary}. In response, the global energy system is undergoing a transition from the traditional high-carbon system to a low- or zero-carbon one, mainly driven by enabling technologies like the internet of things \cite{fuller2020digital} and high penetration of variable renewable energy sources (RES) like solar and wind \cite{bouckaert2021net}. Although RES are key to delivering a decarbonised energy system that is reliable, affordable and fair for all, the uncertainties related to their energy generation, as well as to energy consumption, remain a significant barrier, unlike in the traditional high-carbon system with dispatchable sources \cite{paul2010role}.
Smart energy networks (SEN), also known as micro-grids, are autonomous local energy systems equipped with RES, an energy storage system (ESS) and various types of loads; they are an effective means of integrating and managing high penetrations of variable RES in the energy system \cite{harrold2022renewable}. Given the uncertainties in RES generation and energy demand, ESSs such as the battery energy storage system (BESS) have proved to play a crucial role in managing these uncertainties while providing reliable energy services to the network \cite{arbabzadeh2019role}. However, due to their low capacity density, BESSs cannot be used to manage at-scale penetration of variable RES \cite{desportes2021deep}.
Hydrogen energy storage systems (HESS) are emerging as a promising high-capacity-density energy storage carrier to support high penetrations of RES. This is mainly due to falling costs of electricity from RES and improved electrolyzer technologies, whose costs have fallen by more than 60\% since 2010 \cite{qazi2022future}. During periods of over-generation from the RES, the HESS converts the excess power into hydrogen gas, which can be stored in a tank. The stored hydrogen can be sold externally as fuel, for example for use in fuel-cell hybrid electric vehicles \cite{correa2017performance}, or converted back into power during periods of low generation from the RES to complement other ESSs such as the BESS.
The SEN combines power engineering with information technology to manage the generation, storage and consumption to provide a number of technical and economic benefits such as increased utilization of RES in the network, reduced energy losses and costs, increased power quality, and enhanced system stability \cite{harrold2020battery}. However, this requires an effective smart control strategy to optimise the operation of the ESSs and energy demand to achieve the desired system economics and environmental outcomes.
Many studies have proposed control strategies that optimise the operation of ESSs to minimise utilisation costs \cite{vivas2020suitable, cau2014energy, enayati2022optimal, hassanzadehfard2020design}. Others have proposed control models for optimal sizing and planning of the micro-grid \cite{castaneda2013sizing,liu2021optimal, pan2020optimal}, or have modelled optimal energy sharing in the micro-grid \cite{tao2020integrated}. Despite this rich history, the proposed control approaches are model-based: they require explicit knowledge and rigorous mathematical models of the micro-grid to capture complex real-world dynamics. Model errors and model complexity make them difficult to apply and to use for optimising the ESSs in real-time. Moreover, even if an accurate and efficient model without errors exists, it is often a cumbersome and fallible process to develop and maintain such control approaches when the uncertainties of the micro-grid are dynamic in nature \cite{nakabi2021deep}.
In this paper, we propose a model-free control strategy based on reinforcement learning (RL), a machine learning paradigm, in which an agent learns the optimal control policy by interacting with the SEN environment \cite{sutton2018reinforcement}. Through trial and error, the agent selects control actions that maximise a cumulative reward (e.g. revenue) based on its observation of the environment. Unlike the model-based optimisation approaches, model-free-based algorithms do not require explicit knowledge and rigorous mathematical models
of the environment, making them capable of determining optimal control actions in real-time even for complex control problems like peer-to-peer energy trading \cite{samende2022multi}. Further, artificial neural networks can be combined with RL to form deep reinforcement learning (DRL), making model-free approaches capable of handling even more complex control problems \cite{mnih2015human}. Examples of commonly used DRL-based algorithms are value-based algorithms such
as Deep Q-networks (DQN) \cite{mnih2015human} and policy-based algorithms such as deep
deterministic policy gradient (DDPG) \cite{lillicrap2015continuous}.
\subsection{Related Works}
Application of DRL approaches for managing SENs has increased in the past decade. However, much of this progress has been for SENs with a single ESS (e.g. BESS) \cite{harrold2020battery, nakabi2021deep, wan2021data,8742669, sang2022deep, 9585298, mbuwir2020reinforcement}. With declining costs of RES, additional ESSs like the HESS are expected in SENs to provide additional
system flexibility and storage to support
further deployment of RES. In this case, control approaches that can effectively schedule the hybrid operation of BESS and HESS become imperative.
Recent studies on optimised control of SENs with multiple ESSs, such as a hybrid of BESS and HESS, are proposed in \cite{desportes2021deep, chen2022optimal, zhu2022optimal, tomin2019deep, yu2021optimal}. In \cite{desportes2021deep, chen2022optimal}, DDPG-based algorithms are proposed to minimise building carbon emissions in a SEN which includes a BESS, a HESS and constant building loads. Similarly, operating costs are minimised in \cite{zhu2022optimal} using DDPG and in \cite{tomin2019deep} using DQN. However, these studies use a single control agent to manage the multiple ESSs. Energy management of a SEN is usually a multi-agent problem in which the action of one agent affects the actions of the others, making the SEN environment non-stationary from an agent's perspective \cite{samende2022multi}. Single agents have been found to perform poorly in non-stationary environments \cite{lowe2017multi}.
A multi-agent control approach for optimal operation of a hydrogen-based multi-energy system is proposed in \cite{yu2021optimal}. Although the approach addresses the drawbacks of a single agent, the flexibility of the electrical load is not investigated. With the introduction of flexible loads like heat pumps, which run on electricity, in SENs \cite{heat2019delivering}, the dynamics of the electrical load are expected to change the techno-economics and environmental impacts of the SEN.
Compared with the existing works, we investigate a SEN that has a BESS, a HESS and a schedulable energy demand. We explore the energy cost and carbon emission minimisation problem of such a SEN while capturing the time-coupled storage dynamics of the BESS and the HESS, as well as the uncertainties related to RES, varying energy prices and the flexible demand. A multi-agent deep deterministic policy gradient (MADDPG) algorithm is developed to reduce the system cost and carbon emissions, and to improve the utilisation of RES, while addressing the drawbacks of a single agent in a non-stationary environment. To the authors' knowledge, this study is the first to comprehensively apply the MADDPG algorithm to optimally schedule the operation of the hybrid BESS and HESS as well as the energy demand in a SEN.
\subsection{Contributions}
The main contributions of this paper are as follows:
\begin{itemize}
\item We formulate the system cost minimisation problem of the SEN, complete with BESS, HESS, flexible demand, solar and wind generation as well as dynamic energy pricing, as a function of energy costs and carbon emission costs. The problem is then reformulated as a continuous-action Markov game with unknown transition probabilities so as to obtain the optimal energy control policies without explicitly estimating the underlying model of the SEN or relying on future information.
\item A data-driven, self-learning MADDPG algorithm is proposed to solve the Markov game in real-time; it outperforms a model-based solution and other DRL-based algorithms used as benchmarks. This also includes the use of a novel real-world generation and consumption data set collected from the Smart Energy Network Demonstrator (SEND) project at Keele University\footnote{https://www.keele.ac.uk/business/businesssupport/smartenergy/}.
\item We carry out a simulation analysis of a SEN model for five different scenarios to demonstrate the benefits of integrating a hybrid of BESS and HESS as well as scheduling the energy demand in the network.
\item Simulation results based on SEND data show that the proposed algorithm can increase cost savings and reduce carbon emissions by 41.33\% and 56.3\%, respectively, compared with other bench-marking algorithms and baseline models.
\end{itemize}
The rest of the paper is organized as follows. Description of the SEN environment is presented in Section \ref{sec__1}. Formulation of the optimisation problem is given in Section \ref{problem_formulation}. A brief background to RL and the description of the proposed self-learning algorithm is presented in Section \ref{sec:MADRL}. Simulation results are provided in Section \ref{case study}, with conclusions presented in Section \ref{conclusion}.
\section{Smart Energy Network}\label{sec__1}
The SEN considered in this paper is a grid-connected micro-grid with RES (solar and wind turbines), a hybrid energy storage system (BESS and HESS) and electrical energy demand, as shown in Fig. \ref{fig:SEN}. The aggregated electrical demand from the building(s) is considered to be a price-responsive demand, i.e., the demand can be reduced based on electricity price variations or shifted from expensive price time slots to cheap ones. At every time slot \(t\), solar and wind turbines provide energy to meet the energy demand. Any excess generation is used to charge the BESS, converted into hydrogen by the electrolyzer, or exported to the main grid at a feed-in tariff \(\pi_t\). In the event that the energy generated from solar and wind turbines is insufficient to meet the energy demand, the deficit is supplied by the BESS and/or the fuel cell, or imported from the main grid at a time-of-use (ToU) tariff \(\lambda_t\).
In the following sub-sections, we present models of solar, wind, BESS, HESS (i.e., electrolyzer, tank and fuel cell) and flexible demand adopted in this paper.
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{smart_energy_network.png}
\caption{Basic structure of the grid-connected smart energy network, which consists of solar, wind turbines (WT), flexible energy demand, battery energy storage system (BESS), and hydrogen energy storage system (HESS). The HESS consists of three main components, namely electrolyzer (EL), storage tank and fuel-cell (FC). Solid lines represent electricity flow. Dotted lines represent flow of hydrogen gas.}
\label{fig:SEN}
\vspace{-\baselineskip}
\end{figure}
\subsection{PV and Wind Turbine Model}
Instead of using mathematical equations to model the solar and wind turbines, we use real energy production data, as these sources are non-dispatchable under normal SEN operating conditions. Thus, at every time step \(t\), the power generated from solar and wind is denoted \(P_{pv,t}\) and \(P_{w,t}\) respectively.
\subsection{BESS Model}
The key property of a BESS is the amount of energy it can store at time \(t\).
Let \(P_{c,t}\) and \(P_{d,t}\) be the charging and discharging power of the BESS respectively. The BESS energy dynamics during charging and discharging operation can be modelled as follows \cite{9454450}
\begin{equation}
E_{t+1}^b = E_t^b + \Big(\eta_{c,t}P_{c,t} - \frac{P_{d,t}}{\eta_{d,t}}\Big)\Delta t,\;\;\;\;\forall t \label{eq:batE}
\end{equation}
where \(\eta_{c,t} \in (0,1]\) and \(\eta_{d,t} \in (0,1]\) are dynamic BESS charge and discharge efficiency as calculated in \cite{9534877} respectively, \(E_t^b\) is the BESS energy (kWh) and \(\Delta t\) is the duration of BESS charge or discharge.
The BESS charge level is limited by the storage capacity of the
BESS as
\begin{equation}
E_{min}\le E_t^b \le E_{max} \label{ineq_B}
\end{equation}
where \(E_{min}\) and \(E_{max}\) are lower and upper boundaries of the BESS charge level.
To avoid charging and discharging the BESS at the same time, we have
\begin{equation}
P_{c,t}\cdot P_{d,t} = 0,\;\;\;\;\forall t \label{dot_b}
\end{equation}
That is, at any particular time \(t\), either \(P_{c,t}\) or \(P_{d,t}\) is zero.
Further, the charging and discharging power is limited by maximum battery terminal power \(P_{max}\) as specified by manufacturers as
\begin{equation}
0\leq P_{c,t}, P_{d,t}\leq P_{max},\;\;\;\;\forall t\label{ineq_pc}
\end{equation}
During operation, BESS wear cannot be avoided due to the repeated charge and discharge processes, and the wear cost can have a great impact on the economics of the SEN. The empirical wear cost of the BESS can be expressed as \cite{han2014practical}
\begin{equation}
C_{BESS}^t = \frac{C_b^{ca}|E_t^b|}{L_c\times2\times\text{DoD}\times E_{nom}\times (\eta_{c,t} \times \eta_{d,t})^2}
\end{equation}
where \(E_{nom}\) is the BESS nominal capacity, \(C_b^{ca}\) is the BESS capital cost, DoD is the depth of discharge at which the BESS is cycled, and \(L_c\) is the BESS life cycle.
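To make the storage model concrete, the BESS energy update, its feasibility limits and the empirical wear cost above can be sketched in Python. All parameter values below are illustrative assumptions, not those of the SEND system.

```python
def bess_step(E_b, P_c, P_d, dt=1.0, eta_c=0.95, eta_d=0.95,
              E_min=20.0, E_max=180.0, P_max=100.0,
              C_ca=50_000.0, L_c=4000, DoD=0.8, E_nom=200.0):
    """One BESS time step: energy dynamics, feasibility check and the
    empirical wear cost. All parameter values are illustrative."""
    # Complementarity and power limits: at most one of P_c, P_d is nonzero
    assert P_c * P_d == 0, "charging and discharging are mutually exclusive"
    assert 0.0 <= P_c <= P_max and 0.0 <= P_d <= P_max
    # Energy dynamics during charge/discharge
    E_next = E_b + (eta_c * P_c - P_d / eta_d) * dt
    # Charge level must stay within the storage capacity bounds
    feasible = E_min <= E_next <= E_max
    # Empirical wear cost, mirroring the expression above
    wear = C_ca * abs(E_b) / (L_c * 2 * DoD * E_nom * (eta_c * eta_d) ** 2)
    return E_next, feasible, wear
```

For example, charging a half-full 200 kWh battery at 50 kW for one hour raises its energy to \(100 + 0.95\times50 = 147.5\) kWh while remaining feasible.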
\subsection{HESS Model}
In addition to the BESS, a HESS is considered in this study as a long-term energy storage unit. The HESS mainly consists of an electrolyzer (EL), hydrogen storage tank (HT) and fuel cell (FC) as shown in Fig. \ref{fig:SEN}. The electrolyzer uses the excess electrical energy from the RESs to produce hydrogen. The produced hydrogen gas is stored in the hydrogen storage tank and later used by the fuel cell to produce electricity whenever there is a deficit energy generation in the SEN.
Dynamics of hydrogen in the tank associated with the generation and consumption of hydrogen by the electrolyzer and fuel cell respectively is modelled as follows \cite{vivas2020suitable}
\begin{equation}
H_{t+1} = H_t + \Big(r_{el,t}P_{el,t} - \frac{P_{fc,t}}{r_{fc,t}}\Big) \Delta t ,\;\;\;\;\forall t \label{hess_eq}
\end{equation}
where \(P_{el,t}\) and \(P_{fc,t}\) are the electrolyzer power input and fuel cell output power respectively, \(H_t\) (in Nm\(^3\)) is hydrogen gas level in the tank, \(r_{el,t}\) (in Nm\(^3\)/kWh) and \(r_{fc,t}\) (in kWh/Nm\(^3\)) are the hydrogen generation and consumption ratios associated with the electrolyzer and fuel cell respectively.
The hydrogen level is limited by the storage capacity of the tank as
\begin{equation}
H_{min}\leq H_t\leq H_{max},\;\;\;\;\forall t \label{ineq_H}
\end{equation}
where \(H_{min}\) and \(H_{max}\) are the lower and upper boundaries imposed on the hydrogen level in the tank.
As the electrolyzer and the fuel cell cannot operate at the same time, we have
\begin{equation}
P_{el,t}\cdot P_{fc,t} = 0,\;\;\;\;\forall t\label{dot_h}
\end{equation}
Furthermore, the power consumption and power generation associated with the electrolyzer and fuel cell respectively are restricted to their rated values as
\begin{align}
0\leq P_{el,t}&\leq P_{max}^{el},\;\;\;\;\forall t \label{ineq_pe}\\
0\leq P_{fc,t}&\leq P_{max}^{fc},\;\;\;\;\forall t\label{ineq_pf}
\end{align}
where \(P_{max}^{el}\) and \(P_{max}^{fc}\) are the rated power values of the electrolyzer and fuel cell respectively.
If the HESS is selected to store the excess energy, the cost of producing hydrogen through the electrolyzer and later converting it back to electricity through the fuel cell is given as \cite{dufo2007optimization}
\begin{equation}
C_t^{el-fc} = \frac{(C_{el}^{ca}/L_{el}+C_{el}^{om})+(C_{fc}^{ca}/L_{fc}+C_{fc}^{om})}{\eta_{fc,t}\eta_{el,t}}
\end{equation}
where \(C_{el}^{ca}\) and \(C_{fc}^{ca}\) are electrolyzer and fuel cell capital costs, \(C_{el}^{om}\) and \(C_{fc}^{om}\) are the operation and maintenance costs of the electrolyzer and the fuel cell, \(\eta_{el,t}\) and \(\eta_{fc,t}\) are the electrolyzer and fuel cell efficiencies, \(L_{el}\) and \(L_{fc}\) are the electrolyzer and the fuel cell lifetimes respectively.
The cost of meeting the deficit energy using the fuel cell with the hydrogen stored in the tank as fuel is given as \cite{cau2014energy}
\begin{align}
C_t^{fc} = &\frac{C_{fc}^{ca}}{L_{fc}} + C_{fc}^{om}
\end{align}
The total cost of operating the HESS at time \(t\) can be expressed as follows
\begin{align}
C_{HESS}^t =
\begin{cases}
C_t^{el-fc},\;\;\;\text{if}\;\;P_{el,t} > 0 \\
C_t^{fc},\;\;\;\text{if} \;\;P_{fc,t} > 0 \\
0,\;\;\;\text{otherwise}
\end{cases}
\end{align}
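The tank dynamics and the piecewise HESS operating cost above can be sketched in the same way; the ratios \(r_{el,t}\), \(r_{fc,t}\) and the two cost terms below are illustrative stand-ins for the quantities defined above.

```python
def hess_step(H, P_el, P_fc, dt=1.0, r_el=0.2, r_fc=1.3,
              H_min=5.0, H_max=100.0, c_el_fc=0.12, c_fc=0.05):
    """One HESS time step: hydrogen-tank dynamics plus the piecewise
    operating cost. Parameter values are illustrative placeholders."""
    # The electrolyzer and the fuel cell never operate simultaneously
    assert P_el * P_fc == 0, "electrolyzer and fuel cell are mutually exclusive"
    # Hydrogen level in the tank (Nm^3): generation minus consumption
    H_next = H + (r_el * P_el - P_fc / r_fc) * dt
    feasible = H_min <= H_next <= H_max
    if P_el > 0:
        cost = c_el_fc   # producing hydrogen for later fuel-cell use
    elif P_fc > 0:
        cost = c_fc      # meeting deficit energy with the fuel cell
    else:
        cost = 0.0
    return H_next, feasible, cost
```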
\subsection{Load Model}
We assume that the total energy demand of the SEN has a certain proportion of flexible energy demand that can be reduced or shifted in time in response to the energy price. Thus, at every time \(t\), the actual demand may deviate from the expected total energy demand. Let the total energy demand before energy reduction be \(D_t\) and the actual energy demand after reduction be \(d_t\). Then the energy reduction \(\Delta d_t\) can be expressed as
\begin{align}
\Delta d_t = D_t - d_t\;\;\;\;\forall t \label{dem_eq}
\end{align}
As reducing the energy demand inconveniences the energy users, \(\Delta d_t\) is constrained as follows
\begin{align}
0\le \Delta d_t \le \zeta D_t,\;\;\;\;\forall t \label{dem_ch}
\end{align}
where \(\zeta\) (e.g., \(\zeta = 30\%\)) is a constant factor that specifies the maximum percentage of original demand that can be reduced.
The inconvenience cost for reducing the energy demand can be estimated using a convex function as follows
\begin{align}
C_{inc.}^t = \alpha_d\Big(d_t - D_t\Big)^2\;\;\;\;\forall t \label{inc_cost}
\end{align}
where \(\alpha_d\) is a small positive number that quantifies the amount of flexibility to reduce the energy demand as shown in Fig. \ref{fig_inc}. A lower value of \(\alpha_d\) indicates that less attention is paid to the inconvenience cost and a larger share of the energy demand can be reduced to minimise the energy costs. A higher value of \(\alpha_d\) indicates that high attention is paid to the inconvenience cost and the energy demand can be hardly reduced to minimise the energy costs.
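As a quick numerical illustration of how \(\alpha_d\) scales the penalty (the demand values are arbitrary):

```python
def inconvenience_cost(d, D, alpha_d):
    """Convex inconvenience cost alpha_d * (d - D)^2 for serving the
    reduced demand d instead of the expected demand D (both in kW)."""
    return alpha_d * (d - D) ** 2

# The same 50 kW reduction is penalised ten times more heavily when
# alpha_d is ten times larger:
low = inconvenience_cost(200.0, 250.0, 0.001)   # -> 2.5
high = inconvenience_cost(200.0, 250.0, 0.01)   # -> 25.0
```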
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{inc_cost.png}
\caption{Impact of \(\alpha_d\) parameter on the inconvenience cost of the energy demand, when \(D_t = 250\) kW and when \(d_t\) takes values from 0 to 450 kW.}
\label{fig_inc}
\vspace{-\baselineskip}
\end{figure}
\subsection{SEN Energy Balance Model}
Local RES generation and demand in the SEN must be matched at all times for stability of the energy system. Any energy deficit must be imported from, and any excess exported to, the main grid.
The power import and export at time \(t\) can be expressed as
\begin{align}
P_{g,t} = d_{t} + P_{c,t} + P_{el,t} - P_{pv,t} - P_{w,t} - P_{d,t} - P_{fc,t} \label{eq_balance}
\end{align}
where \(P_{g,t}\) is power import if \(P_{g,t} > 0\) and power export otherwise. We assume that the SEN is well sized and that \(P_{g,t}\) is always within the allowed export and import power limits.
Let \(\pi_t\) and \(\lambda_t\) be the export and import grid prices at time \(t\) respectively. As grid electricity is the major source of carbon emissions, the cost of utilising the main grid to meet the supply-demand balance in the SEN is the sum of both the energy cost and the environmental cost due to carbon emissions as follows
\begin{align}
C_{grid}^t =\Delta t
\begin{cases}
\lambda_tP_{g,t} + \mu_{c}P_{g,t},\;\;\;\text{if}\;\;P_{g,t}\geq 0 \\
-\pi_t|P_{g,t}|,\;\;\;\text{otherwise}
\end{cases}
\end{align}
where \(\mu_{c}\in [0,1]\) is the carbon emission conversion factor of grid electricity.
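The energy-balance and grid-cost expressions above can be sketched together as follows; the tariffs and the carbon emission factor are illustrative assumptions, not SEND values.

```python
def grid_exchange_cost(d, P_c, P_el, P_pv, P_w, P_d, P_fc,
                       dt=0.5, lam=0.30, pi=0.05, mu_c=0.2):
    """Net grid power (positive = import, negative = export) and the
    associated energy-plus-carbon cost. Tariffs lam (import), pi
    (export) and the emission factor mu_c are illustrative."""
    # Supply-demand balance: consumption minus local generation/discharge
    P_g = d + P_c + P_el - P_pv - P_w - P_d - P_fc
    if P_g >= 0:
        # Importing: pay the ToU tariff plus a carbon-emission charge
        return P_g, dt * (lam * P_g + mu_c * P_g)
    # Exporting: earn the feed-in tariff (negative cost = revenue)
    return P_g, -dt * pi * abs(P_g)
```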
\section{Problem Formulation}\label{problem_formulation}
The key challenge in operating the SEN is how to optimally schedule the operation of the BESS, the HESS and the flexible energy demand so as to minimise energy costs and carbon emissions and to increase renewable energy utilisation. The operating costs associated with PV and wind generation are neglected as they are comparatively smaller than those of the energy storage units and the energy demand \cite{vivas2020suitable}.
\subsection{Problem Formulation}
As the only controllable assets in the SEN considered in this paper are the BESS, the HESS and the flexible energy demand, the control variables can be denoted as a vector \(\mathbf{v}_t = \{P_{c,t}, P_{d,t}, P_{el,t},P_{fc,t}, \Delta d_t\}\); \(P_{g,t}\) can then be obtained according to (\ref{eq_balance}). We formulate the overall system cost minimisation problem as a function of the energy costs and the environmental cost as follows
\begin{equation*}
\mathbf{P1:}\quad
\begin{aligned}
\min_{\mathbf{v}_t}: &\sum_{t=1}^{T}\Big( C_{BESS}^t + C_{HESS}^t + C_{inc.}^t + C_{grid}^t\Big)\\
\textrm{s.t.:} & (\ref{eq:batE})-(\ref{ineq_pc})\;\&\;(\ref{hess_eq})-(\ref{ineq_pf})\;\&\;(\ref{dem_eq}),(\ref{dem_ch}),(\ref{eq_balance})
\end{aligned}\label{opt}
\end{equation*}
Solving this optimisation problem using model-based approaches suffers from three main challenges: parameter uncertainty, lack of future information, and high dimensionality. The uncertainties relate to RES, energy prices and energy demand, which makes it difficult to solve the problem directly without statistical information about the system. As expressed in (\ref{eq:batE}) and (\ref{hess_eq}), control of the BESS and HESS is time-coupled, and actions taken at time \(t\) affect the actions that can be taken at time \(t+1\). Thus, for optimal scheduling, the control policies should also consider the future `unknown' information of the BESS and the HESS. Moreover, the control actions of the BESS and the HESS are continuous in nature and bounded as given in (\ref{ineq_pc}), (\ref{ineq_pe}), (\ref{ineq_pf}), which increases the dimension of the control problem.
In the following sub-sections, we overcome these challenges by first re-formulating the optimisation problem as a continuous-action Markov game and then solving it using a self-learning algorithm.
\subsection{Markov-Game Formulation}
We reformulate \textbf{P1} as a Markov decision process (MDP) which consists of a state space \(\mathcal{S}\), an action space \(\mathcal{A}\), a reward function \(\mathcal{R}\), a discount factor \(\gamma\) and a transition probability function \(\mathcal{P}\) as follows:
\subsubsection{\textbf{State Space}} The state space \(\mathcal{S}\) represents the collection of all the state variables of the SEN at every time slot \(t\) including RES variables (\(P_{pv,t}\;\&\;P_{w,t}\)), energy prices (\(\pi_t\;\&\;\lambda_t\)), energy demand \(D_t\) and state of the ESSs (\(E_t^b\;\&\:H_t\)). Thus, at time slot \(t\), the state of the system is given as
\begin{equation}
s_t = \Big(P_{pv,t}, P_{w,t}, E_t^b, H_{t}, D_{t},\pi_t,\lambda_t\Big),\;\;s_t \in \mathcal{S}
\end{equation}
\subsubsection{\textbf{Action Space}} The action space denotes the collection of all actions \(\{P_{c,t}, P_{d,t}, P_{el,t}, P_{fc,t}, \Delta d_t\}\), which are the decision variables of \textbf{P1} taken by the agents to produce the next state \(s_{t+1}\) according to the state transition function \(\mathcal{P}\). To reduce the size of the action space, the action variables for each storage system can be combined into one action. With reference to (\ref{dot_b}), the BESS action variables \(\{P_{c,t}, P_{d,t}\}\) can be combined into one action \(P_{b,t}\) such that during charging (i.e. \(P_{b,t} < 0\)), \(P_{c,t} = |P_{b,t}|\;\&\;P_{d,t} = 0\); otherwise, \(P_{d,t} = P_{b,t}\;\&\;P_{c,t} = 0\). Similarly, the HESS action variables \(\{P_{el,t}, P_{fc,t}\}\) can be combined into one action \(P_{h,t}\): during electrolysis (i.e. \(P_{h,t} < 0\)), \(P_{el,t} = |P_{h,t}|\;\&\;P_{fc,t} = 0\); otherwise, \(P_{fc,t} = P_{h,t}\;\&\;P_{el,t} = 0\). Thus, at time \(t\), the control actions of the SEN reduce to
\begin{equation}
a_t = \Big(P_{b,t}, P_{h,t}, \Delta d_t\Big),\;\;a_t \in \mathcal{A}
\end{equation}
The action values are bounded according to their respective boundaries given by (\ref{ineq_pc}), (\ref{ineq_pe}), (\ref{ineq_pf}) and (\ref{dem_ch}).
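The sign convention above, which makes the complementarity constraints (\ref{dot_b}) and (\ref{dot_h}) hold by construction, can be sketched as:

```python
def decode_actions(P_b, P_h, delta_d):
    """Map the combined signed agent actions back to device setpoints:
    a negative P_b charges the BESS, a negative P_h runs the
    electrolyzer; positive values discharge / run the fuel cell."""
    P_c, P_d = (abs(P_b), 0.0) if P_b < 0 else (0.0, P_b)
    P_el, P_fc = (abs(P_h), 0.0) if P_h < 0 else (0.0, P_h)
    return P_c, P_d, P_el, P_fc, delta_d
```

Because at most one variable in each pair is nonzero, the mutual-exclusion constraints are satisfied automatically, without an explicit penalty in the reward.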
\subsubsection{\textbf{Reward Space}} The collection of all the rewards received by the agents after interacting with the environment forms the reward space \(\mathcal{R}\). The reward is used to evaluate the performance of the agent based on the actions taken and the state of the SEN observed by the agents at that particular time. The first part of the reward is the total energy cost and environmental cost of the SEN
\begin{equation}
r_t^{(1)} = -\Big( C_{BESS}^t + C_{HESS}^t + C_{inc.}^t + C_{grid}^t\Big)
\end{equation}
As constraints given in (\ref{ineq_B}) and (\ref{ineq_H}) should always be satisfied, the second part of the reward is a penalty for violating the constraints as follows
\begin{align}
r_t^{(2)} =-
\begin{cases}
\mathcal{K},\;\;\;\text{if}\;(\ref{ineq_B})\;\text{or}\; (\ref{ineq_H})\;\text{is violated}\\
0,\;\;\;\;\text{otherwise}
\end{cases}
\end{align}
where \(\mathcal{K}\) is a predetermined large number, e.g. \(\mathcal{K} = 20\).
The total reward received by the agent after interacting with the environment is therefore expressed as
\begin{equation}
r_t = r_t^{(1)} + r_t^{(2)},\;\;r_t \in \mathcal{R}
\end{equation}
The goal of the agent is to maximize its own expected reward \(R\)
\begin{equation}
R = \sum_{t=0}^{T}\gamma^tr_t\label{cumm_reward___}
\end{equation}
where \(T\) is the time horizon and \(\gamma\) is a discount factor, which encourages the agent to prioritise rewards obtained sooner.
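The discounted return in (\ref{cumm_reward___}) amounts to a one-line computation:

```python
def discounted_return(rewards, gamma=0.95):
    """Cumulative discounted reward R = sum_t gamma^t * r_t over the
    horizon covered by the reward sequence."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))
```

For instance, three unit rewards discounted at \(\gamma = 0.5\) yield \(1 + 0.5 + 0.25 = 1.75\).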
As electricity prices, RES energy generation and demand are volatile in nature, it is generally impossible to obtain with certainty the state transition probability function \(\mathcal{P}\) required to derive an optimal policy \(\pi (s_t|a_t)\) needed to maximize \(R\). To circumvent this difficulty, we propose the use of RL as discussed in Section \ref{sec:MADRL}.
\section{Reinforcement Learning}\label{sec:MADRL}
\subsection{Background}
A RL framework is made up of two main components, namely the environment and agent. The environment denotes the problem to be solved. The agent denotes the learning algorithm. The agent and environment continuously interact with each other \cite{sutton2018reinforcement}.
At every time \(t\), the agent learns for itself the optimal control policy \(\pi (s_t|a_t)\) through trial and error by selecting control actions \(a_t\) based on its perceived state \(s_t\) of the environment. In return, the agent receives a reward \(r_t\) and the next state \(s_{t+1}\) from the environment without explicitly having knowledge of the transition probability function \(\mathcal{P}\). The goal of the agent is to improve the policy so as to maximise the cumulative reward \(R\).
The environment has been described in Section \ref{problem_formulation}. Next, we describe the learning algorithms.
\subsection{Learning Algorithms}
In this section we present the three main learning algorithms considered in this paper, namely DQN (a single-agent, value-based algorithm), DDPG (a single-agent, policy-based algorithm), and the proposed multi-agent DDPG (a multi-agent, policy-based algorithm).
\subsubsection{DQN} The DQN algorithm was developed by Google DeepMind in 2015 \cite{mnih2015human}. It was developed to enhance a classic RL algorithm called Q-Learning \cite{sutton2018reinforcement} through the addition of deep neural networks and a novel technique called experience replay. In Q-learning, the agent learns the best policy \(\pi (s_t|a_t)\) based on the notion of an action-value Q-function as \( Q_{\pi}(s,a) = \mathbb{E}_{\pi}\left[R|s_t = s, a_t = a\right]\).
By exploring the environment, the agent updates the \(Q_{\pi}(s,a)\) estimates using the Bellman Equation as an iterative update as follows:
\begin{equation}
Q_{i+1}(s_t,a_t) \leftarrow Q_{i}(s_t,a_t) + \alpha h
\label{eq:bell}
\end{equation}
where \(\alpha \in (0,1]\) is the learning rate and \(h\) is given by:
\begin{equation}
h = \left[r_t + \gamma\underset{a}{\text{max}}Q_{\pi}(s_{t+1},a) - Q_{i}(s_t,a_t) \right]
\end{equation}
The optimal Q-function \(Q^{*}\) and policy \(\pi^{*}\) are obtained as \(Q_i(s_t, a_t) \rightarrow Q^{*}(s_t, a_t)\) when \(i\rightarrow \infty\).
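The iterative update in (\ref{eq:bell}) can be sketched in a few lines of Python for a tabular Q-function (an illustrative sketch; the states, actions and reward values are placeholders, not from the paper):

```python
def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    """One Q-learning iteration: Q <- Q + alpha * h, with
    h = r + gamma * max_a' Q(s', a') - Q(s, a)."""
    h = r + gamma * max(Q[(s_next, a2)] for a2 in actions) - Q[(s, a)]
    Q[(s, a)] += alpha * h
    return Q
```

Repeating this update while exploring drives \(Q_i \rightarrow Q^{*}\) under the usual learning-rate conditions.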
As Q-learning represents the Q-function as a table containing the values of all combinations of states and actions, it is impractical for most problems. The DQN algorithm addresses this by using a deep neural network with parameters \(\theta\) to estimate the optimal Q-values, i.e., \(Q(s_t, a_t;\theta) \approx Q^{*}(s_t, a_t)\), by minimizing the following loss function \(L_i(\theta_i)\) at each iteration \(i\):
\begin{equation}
L_i(\theta_i) =\mathbb{E} \left[\Big(y_i - Q(s_t,a_t;\theta_i)\Big)^2\right]
\end{equation}
where \(y_i = r_t + \gamma\underset{a}{\text{max}}\,Q(s_{t+1},a;\theta_{i-1}) \) is the target for iteration \(i\).
To improve training and data efficiency, at each time step \(t\) an experience \(e_t = \langle s_t,a_t, r_t, s_{t+1}\rangle \) is stored in a replay buffer \(\mathcal{D}\). During training, the loss and its gradient are then computed using a mini-batch of transitions sampled from the replay buffer. However, DQN and Q-learning both suffer from an overestimation problem, as both use the same action values to select and evaluate actions; moreover, the maximisation over actions makes them impractical for problems with continuous action spaces.
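The replay buffer and the mini-batch loss can be sketched as follows (a simplified, framework-free sketch; the callables `q` and `q_prev` stand in for the networks \(Q(\cdot;\theta_i)\) and \(Q(\cdot;\theta_{i-1})\)):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity buffer of experiences e_t = (s, a, r, s_next)."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def store(self, s, a, r, s_next):
        self.buffer.append((s, a, r, s_next))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

def dqn_loss(batch, q, q_prev, actions, gamma=0.95):
    """Mean squared error between targets y_i and current Q estimates."""
    loss = 0.0
    for s, a, r, s_next in batch:
        y = r + gamma * max(q_prev(s_next, a2) for a2 in actions)
        loss += (y - q(s, a)) ** 2
    return loss / len(batch)
```

In practice the loss gradient is backpropagated through the network; here it is only evaluated to illustrate the target construction.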
\subsubsection{DDPG} The DDPG algorithm was proposed in \cite{lillicrap2015continuous} to handle control problems with continuous action spaces, which are impractical for Q-learning and DQN. DDPG consists of two independent neural networks: an actor network and a critic network. The actor network approximates the policy \(\pi(a_t|s_t)\); its input is the environment state \(s_t\) and its output is the action \(a_t\). The critic network approximates the Q-function \(Q(s_t, a_t)\); it is used only to train the agent and is discarded when the agent is deployed. Its input is the concatenation of the state \(s_t\) and the action \(a_t\) from the actor network, and its output is the Q-value \(Q(s_t, a_t)\).
Similar to DQN, DDPG stores an experience \(e_t = \langle s_t,a_t, r_t, s_{t+1}\rangle \) in a replay buffer \(\mathcal{D}\) at each time step \(t\) to improve training and data efficiency. To add stability to the training, two target neural networks, identical to the (original) actor and critic networks, are also created. Let the network parameters of the original actor network, original critic network, target actor network, and target critic network be denoted as \(\theta^{\mu}\), \(\theta^{Q}\), \(\theta^{\mu^{'}}\), and \(\theta^{Q^{'}}\) respectively.
Before training starts, $\theta^{\mu}$ and $\theta^{Q}$ are randomly initialized, and \(\theta^{\mu^{'}}\) and \(\theta^{Q^{'}}\) are initialized as $\theta^{\mu^{'}} \leftarrow \theta^{\mu}$ and $\theta^{Q^{'}} \leftarrow \theta^{Q}$.
To train the original actor and critic networks, a mini-batch of \(B\) experiences \(\langle s_t^j,a_t^j, r_t^j, s_{t+1}^j\rangle \Big|_{j=1}^B\) is randomly sampled from \(\mathcal{D}\), where \(j\in \{1,\dots,B\}\) is the sample index. The original critic network parameters $\theta^{Q}$ are updated through gradient descent using the mean-square Bellman error function
\begin{equation}
L\left(\theta^{Q}\right) = \frac{1}{B}\sum\limits_{j=1}^B\Big(y_j - Q\left(s_t^j, a_t^j;\theta^{Q}\right)\Big)^2
\label{critic_loss}
\end{equation}
where $Q\left(s_t^j, a_t^j;\theta^{Q}\right)$ is the predicted output of the original critic network and \(y_j\) is its target value expressed as
\begin{equation}
y_j = r_t^j + \gamma Q^{'}\left(s_{t+1}^j, \mu^{'} (s_{t+1}^j; \theta^{\mu^{'}});\theta^{Q^{'}}\right)
\end{equation}
where \(\mu^{'}(s_{t+1}^j; \theta^{\mu^{'}})\) is the output (action) from the target actor network and \(Q^{'}\left(s_{t+1}^j, \mu^{'}(s_{t+1}^j; \theta^{\mu^{'}});\theta^{Q^{'}}\right)\) is the output (Q-value) from the target critic network.
At the same time, the parameters of the original actor network are updated through gradient ascent on the policy objective \(J(\theta^{\mu})\), whose gradient is approximated as
\begin{equation}
\nabla_{\theta^{\mu}}J(\theta^{\mu}) = \frac{1}{B}
\sum\limits_{j=1}^B\nabla_{\theta^{\mu}}\mu\left(s;\theta^\mu\right)
\nabla_{a}Q\left(s,a;\theta^Q\right)
\label{policy_grad}
\end{equation}
where \(s=s_t^j\), \(a = \mu(s_t^j;\theta^\mu)\) is the output (action) from the original actor network and \(Q\left(s,a;\theta^Q\right)\) is the output (Q-value) from the original critic network.
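The critic target \(y_j\) and the mean-square Bellman error in (\ref{critic_loss}) can be sketched as follows (an illustrative sketch; the actor and critic networks are passed in as plain callables rather than neural networks):

```python
def critic_targets(batch, target_actor, target_critic, gamma=0.95):
    """y_j = r_j + gamma * Q'(s_{t+1}, mu'(s_{t+1})), using the
    target actor and target critic."""
    return [r + gamma * target_critic(s_next, target_actor(s_next))
            for (_, _, r, s_next) in batch]

def critic_loss(batch, critic, targets):
    """Mean-square Bellman error over the sampled mini-batch."""
    return sum((y - critic(s, a)) ** 2
               for (s, a, _, _), y in zip(batch, targets)) / len(batch)
```

The actor is then nudged in the direction that increases the critic's Q-value at \(a = \mu(s;\theta^{\mu})\), per (\ref{policy_grad}).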
After the parameters of the original actor network and original critic network are updated, the parameters of the two target networks are updated through soft update technique as
\begin{equation}
\begin{cases}
\theta^{Q^{'}} \gets \tau \theta^{Q} + \left(1 - \tau\right)\theta^{Q^{'}}\\
\theta^{\mu^{'}} \gets \tau \theta^{\mu} + \left(1 - \tau\right)\theta^{\mu^{'}}
\end{cases}
\label{target_update}
\end{equation}
where \(\tau \ll 1\) is the soft-update coefficient.
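The soft update in (\ref{target_update}) is an exponential moving average over parameters; a minimal sketch (the value of \(\tau\) is illustrative):

```python
def soft_update(target_params, source_params, tau=0.005):
    """theta' <- tau * theta + (1 - tau) * theta', element-wise."""
    return [tau * p + (1.0 - tau) * tp
            for p, tp in zip(source_params, target_params)]
```

Applying this after every training step lets the target networks track the originals slowly, which stabilises the bootstrapped targets \(y_j\).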
To ensure that the agent explores the environment, a random process \cite{uhlenbeck1930theory} is used to generate noise \(\mathcal{N}_t\), which is added to every action as follows
\begin{equation}
a_t = \mu\left(s_t;\theta^\mu\right) + \mathcal{N}_t
\label{action}
\end{equation}
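The random process of \cite{uhlenbeck1930theory} is the Ornstein-Uhlenbeck process; a minimal sketch, where the \(\theta\) and \(\sigma\) values are illustrative and not taken from the paper:

```python
import random

class OUNoise:
    """Ornstein-Uhlenbeck exploration noise: mean-reverting
    Gaussian increments, giving temporally correlated samples."""
    def __init__(self, theta=0.15, sigma=0.2, dt=1.0):
        self.theta, self.sigma, self.dt = theta, sigma, dt
        self.x = 0.0

    def sample(self):
        self.x += (-self.theta * self.x * self.dt
                   + self.sigma * (self.dt ** 0.5) * random.gauss(0.0, 1.0))
        return self.x
```

The temporal correlation makes the noise well suited to physical control tasks such as charging/discharging schedules.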
However, as discussed in \cite{lowe2017multi}, the DDPG algorithm performs poorly in non-stationary environments.
\subsection{The Proposed MADDPG Algorithm}\label{sec:MADDPG}
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{achtecture.png}
\caption{The multi-agent environment structure of the smart energy network.}
\label{MA_structure}
\vspace{-\baselineskip}
\end{figure}
Each controllable asset of the SEN (i.e., BESS, HESS and flexible demand) can be considered an agent, making the SEN environment a multi-agent environment as shown in Fig. \ref{MA_structure}. With reference to Section \ref{problem_formulation}, the state and action spaces for each agent can be defined as follows.
The BESS agent's state and action are defined as \( s_t^1 = (P_{pv,t}, P_{w,t}, E_{b,t}, D_{n,t}, \pi_t,\lambda_t)\) and \(a_t^1 = (P_{b,t})\) respectively; the HESS agent's state and action as \( s_t^2 = (P_{pv,t}, P_{w,t}, D_{n,t}, H_{t}, \pi_t,\lambda_t)\) and \(a_t^2 = ( P_{h,t})\) respectively; and the flexible demand agent's state and action as \( s_t^3 = (P_{pv,t}, P_{w,t}, D_{n,t}, \pi_t,\lambda_t)\) and \(a_t^3 = (\Delta d_t)\) respectively. All the agents coordinate to maximise the same cumulative reward given by (\ref{cumm_reward___}).
\begin{figure*}[t!]
\centering
\includegraphics[width=0.8\textwidth]{MADDPG.png}
\caption{MADDPG structure and training process. The BESS and flexible demand agents have the same internal structure as the HESS agent.}
\label{MADDPG_structure}
\vspace{-\baselineskip}
\end{figure*}
With the proposed MADDPG algorithm, each agent is modelled as a DDPG agent; however, states and actions are shared between the agents during training as shown in Fig. \ref{MADDPG_structure}. During training, each actor network uses only the local state to calculate its action, while each critic network uses the states and actions of all agents in the system to evaluate the local action. As the actions of all agents are known to each agent's critic network, the environment is stationary during training. During execution, the critic networks are removed and only the actor networks are used. This means that with MADDPG, training is centralised while execution is decentralised.
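The centralised-critic input described above can be sketched as a simple concatenation of all agents' states and actions (an illustrative sketch; the actual implementation concatenates network tensors):

```python
def critic_input(states, actions):
    """Centralised critic input for one MADDPG agent: the
    concatenation of every agent's state and action. Decentralised
    actors, by contrast, see only their own local state."""
    joint = []
    for s in states:
        joint.extend(s)
    for a in actions:
        joint.extend(a)
    return joint
```

Because the joint action appears explicitly in the critic's input, the learning target no longer shifts as other agents update their policies.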
A detailed pseudo-code of the proposed algorithm is given in Algorithm \ref{algorithm}.
\begin{algorithm}
\caption{MADDPG-based Optimal Control of a SEN}
\begin{algorithmic}[1]
\STATE Initialize shared replay buffer \(\mathcal{D}\)
\FOR {each agent \(k = 1,\cdots,3\)}
\STATE Randomly initialize (original) actor and critic networks with parameters $\theta^{\mu}$ and $\theta^{Q}$ respectively
\STATE Initialize (target) actor and critic networks as $\theta^{\mu^{'}} \leftarrow \theta^{\mu}$ and $\theta^{Q^{'}} \leftarrow \theta^{Q}$ respectively
\ENDFOR
\FOR {each episode \(eps = 1,2,\cdots,M\)}
\FOR {each agent \(k = 1,\cdots,3\)}
\STATE Initialize a random process $\mathcal{N}_t$ for exploration
\STATE Observe initial state \(s_t^k\) from the environment
\ENDFOR
\FOR {each time step $t= 1,2,\cdots,T$}
\FOR {each agent \(k = 1,\cdots,3\)}
\STATE Select an action according to (\ref{action})
\ENDFOR
\STATE Execute joint action \(\mathbf{a}_t = \langle a_t^1, a_t^2, a_t^3\rangle\)
\FOR {each agent \(k = 1,\cdots,3\)}
\STATE Collect reward \(r_t^k\) and observe state \(s_{t+1}^k\)
\STATE Store \(\left\langle a_t^k, s_t^k,r_t^k, s_{t+1}^k\right \rangle\) into \(\mathcal{D}\)
\STATE Update $s_{t}^k$ $\gets$ $s_{t+1}^k$
\STATE Randomly sample minibatch of $B$ transitions \(\left\langle a_t^j, s_t^j, r_t^j, s_{t+1}^j \right \rangle\Big|_{j=1}^B\) from \(\mathcal{D}\)
\STATE Update (original) critic network by (\ref{critic_loss})
\STATE Update (original) actor network by (\ref{policy_grad})
\STATE Update target networks by (\ref{target_update})
\ENDFOR
\ENDFOR
\ENDFOR
\end{algorithmic}
\label{algorithm}
\end{algorithm}
\section{Simulation Results}\label{case study}
\subsection{Experimental Setup}
In this paper, real-world RES (solar and wind) generation and consumption data obtained from the Smart Energy Network Demonstrator (SEND)\footnote{https://www.keele.ac.uk/business/businesssupport/smartenergy/} are used for the simulation studies. We use the UK's time-of-use (ToU) electricity price as the grid electricity buying price, which is divided into a peak price of \pounds0.234/kWh (4pm-8pm), a flat price of \pounds0.117/kWh (2pm-4pm \& 8pm-11pm) and a valley price of \pounds0.07/kWh (11pm-2pm). The price for selling electricity back to the main grid is a flat \(\pi_t=\)\pounds0.05/kWh, which is lower than the ToU price to avoid arbitrage behaviour by the BESS and HESS. A carbon emission conversion factor\footnote{https://www.rensmart.com/Calculators/KWH-to-CO2} \(\mu_c = 0.23314\)kgCO\(_2\)/kWh is used to quantify the carbon emissions from using main-grid electricity to meet the energy demand in the SEN. We set the initial BESS state of charge and the initial hydrogen level in the tank to \(E_0 = 1.6\)MWh and \(H_0 = 5\)Nm\(^3\) respectively. Other techno-economic parameters of the BESS and HESS are tabulated in Table \ref{sim_params}. A day is divided into 48 time slots, i.e., each time slot is 30 minutes.
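The ToU buying price can be expressed as a function of the half-hourly time slot; the sketch below assumes that hours outside the listed windows also take the valley price, which the text leaves implicit:

```python
def tou_price(slot):
    """UK ToU buying price (GBP/kWh) for a 30-minute slot in 0..47.
    Hours not covered by the stated peak/flat windows are assumed
    to take the valley price (an assumption, not stated in the text)."""
    hour = (slot % 48) / 2.0
    if 16 <= hour < 20:                      # 4pm-8pm: peak
        return 0.234
    if 14 <= hour < 16 or 20 <= hour < 23:   # 2pm-4pm & 8pm-11pm: flat
        return 0.117
    return 0.07                              # valley (incl. 11pm-2pm)
```

The agents observe this price as part of their state, so the learned policies can anticipate the peak window.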
\begin{table}[h!]
\centering
\caption{BESS and HESS Simulation Parameters.}
\begin{adjustbox}{width = \columnwidth, center}
\begin{tabular}{l l}
\hline
ESS & Parameter \& Value\\
\hline
\hline
\multirow{3}{*}{BESS}& \(E_{nom}=\)2MWh, \(P_{max}=102\)kW, \(DoD=\)80\%\\
&\(E_{min}=\)0.1MWh, \(E_{max}=\)1.9MWh, \(L_c=\)3650\\
&\(C_b^{ca}\)=\pounds210000, \(\eta_{c,t}=\eta_{d,t}=\)98\%\ \\
\hline
\multirow{3}{*}{HESS}& \(H_{min}=\)2Nm\(^3\), \(H_{max}=\)10Nm\(^3\), \(P_{max}^{el}=\)3kW \\
&\(P_{max}^{fc}=\)3kW, \(\eta_{fc,t}=\)50\%, \(\eta_{el,t}=\)90\%\\
&\(L_{fc}=L_{el}= 30000\)h, \(r_{fc,t}=\)0.23Nm\(^3\)/kWh\\
& \(r_{el,t}=\)1.32kWh/Nm\(^3\), \(C_{el}^{om}=C_{fc}^{om}\)=\pounds 0.174/h\\
&\(C_{el}^{ca}\)=\pounds 60000, \(C_{fc}^{ca}\)=\pounds 22000\\
\hline
\hline
\end{tabular}
\end{adjustbox}
\label{sim_params}
\end{table}
The actor and critic networks of each MADDPG agent are designed using the hyper-parameters tabulated in Table \ref{hyper_params}. We use the Rectified Linear Unit (ReLU) as the activation function for the hidden layers and the output of the critic networks. A Tanh activation function is used in the output layer of each actor network. We set the capacity of the replay buffer to \(\mathbf{K} = 1\times 10^6\) and the maximum number of training steps in an episode to \(T = 48\). Algorithm \ref{algorithm} is implemented in Python using the PyTorch framework \cite{paszke2019pytorch}.
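Under the hyper-parameters in Table \ref{hyper_params}, the forward pass of an actor or critic can be sketched without a deep learning framework (a NumPy sketch; the BESS agent's 6-dimensional state, the single action dimension and the weight-initialisation scale are illustrative assumptions):

```python
import numpy as np

def init_mlp(sizes, rng):
    """Random weights and zero biases for an MLP with the given layer sizes."""
    return [(rng.standard_normal((m, n)) * 0.01, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(x, params, out_tanh):
    """Two ReLU hidden layers of 500 neurons; tanh output for an
    actor, ReLU output for a critic (as stated in the text)."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers
    return np.tanh(x) if out_tanh else np.maximum(x, 0.0)

# Actor for the BESS agent: 6 state dims -> 1 action dim (illustrative)
rng = np.random.default_rng(0)
actor = init_mlp([6, 500, 500, 1], rng)
```

The tanh output naturally bounds the action in \([-1, 1]\), which is then rescaled to the asset's power limits.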
\begin{table}[t!]
\centering
\caption{Hyper-parameters for each Actor and Critic Network.}
\begin{adjustbox}{width = \columnwidth, center}
\begin{tabular}{l l l}
\hline
Hyper-parameter& Actor Network & Critic Network\\
\hline
\hline
Optimizer & Adam & Adam \\
Batch size & 256 & 256\\
Discount factor & 0.95 & 0.95\\
Learning rate & $1\times 10^{-4}$ & $3\times10^{-4}$\\
No. of hidden layers & 2 & 2\\
No. of neurons & 500 & 500\\
\hline
\hline
\end{tabular}
\end{adjustbox}
\label{hyper_params}
\end{table}
\subsection{Benchmarks}
We verify the performance of the proposed MADDPG algorithm by comparing it with three benchmark algorithms:
\begin{itemize}
\item Rule-based (RB) algorithm: This is a model-based algorithm that follows the standard practice of meeting the energy demand of the SEN with the RES generation, without steering the operation of the BESS, HESS and flexible demand towards periods of low/high electricity prices to save energy costs. When there is surplus generation, the surplus is first stored in the short-term BESS, then in the long-term HESS, and any remainder is sold to the main grid. If the energy demand exceeds the RES generation, the deficit is first supplied by the BESS, followed by the HESS and then the main grid.
\item DQN algorithm: As discussed in Section \ref{sec:MADRL}, this is a value-based DRL algorithm, which intends to optimally schedule the operation of the BESS, HESS and flexible demand using a single agent and a discretised action space.
\item DDPG algorithm: This is a policy-based DRL algorithm, which intends to optimally schedule the operation of the BESS, HESS and flexible demand using a single agent and a continuous action space as discussed in Section \ref{sec:MADRL}.
\end{itemize}
\subsection{Algorithm Convergence}
We analyse the convergence of the MADDPG algorithm by training the agents with 5900 episodes, with each episode having 48 training steps. In Fig. \ref{training_rewards}, the average rewards obtained for each episode are plotted against the episodes and compared to the DRL-based bench-marking algorithms.
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{reward_plot.png}
\caption{Training processes of the DQN, DDPG and MADDPG algorithms}
\label{training_rewards}
\vspace{-\baselineskip}
\end{figure}
As shown in Fig. \ref{training_rewards}, all algorithms converge after 2000 episodes. The DQN converges faster than MADDPG and DDPG due to its discretised, low-dimensional action space, which makes determining the scheduling policy easier and quicker than for the counterpart algorithms with continuous, high-dimensional action spaces. However, as a discretised action space cannot accurately capture the complexity and dynamics of SEN energy management, the DQN algorithm converges to the worst policy, with the lowest average reward (-16572.5). On the other hand, the MADDPG algorithm converges to a higher average reward (-6858.1) than the DDPG (-8361.8), mainly due to enhanced cooperation between the operation of the controlled assets.
\begin{figure}[t!]
\centering
\includegraphics[width=0.49\textwidth]{maddpg_action_vs_net_demand.png}
\caption{Control action results (over a 7-day period) by the BESS, HESS and flexible demand agents in response to the net demand.}
\label{net_demand_actions}
\vspace{-\baselineskip}
\end{figure}
\subsection{Algorithm Performance}
In this section, we demonstrate the effectiveness of the proposed algorithm in optimally scheduling the BESS, HESS and flexible demand to minimise the energy and environmental costs. Fig. \ref{net_demand_actions} shows the scheduling results in response to the SEN net demand over a period of 7 days, i.e., \(T= 336\) time slots. As shown in Fig. \ref{net_demand_actions}, the BESS and HESS charge (negative power) whenever the net demand is negative (i.e., RES generation exceeds energy demand) and discharge (positive power) whenever it is positive (i.e., energy demand exceeds RES generation). Similarly, the scheduled demand is high when the net demand is negative and low when it is positive.
Fig. \ref{ToUactions} shows that, in order to minimise the multi-objective function \(\textbf{P1}\), the algorithm prioritises the flexible demand agent to respond aggressively to price changes compared with the BESS and HESS agents. As shown in Fig. \ref{ToUactions}, the scheduled demand drops sharply whenever the electricity price is highest and rises when the price is lowest, in contrast to the actions of the BESS and HESS.
\begin{figure}[t!]
\centering
\includegraphics[width=0.49\textwidth]{maddpg_action_vs_ToU.png}
\caption{Control action results (over a 7-day period) by the BESS, HESS and flexible demand agents in response to the ToU price.}
\label{ToUactions}
\vspace{-\baselineskip}
\end{figure}
Together, Fig. \ref{net_demand_actions} and Fig. \ref{ToUactions} demonstrate how the algorithm assigns different priorities to the agents to achieve a collective goal: minimising carbon, energy and operational costs. In this case, the BESS and HESS agents are trained to respond more aggressively to changes in energy demand and generation, and to maximise the resulting benefits such as reduced carbon emissions. On the other hand, scheduling the flexible demand guides the SEN towards lower energy costs.
\begin{table}[t!]
\centering
\caption{Cost Savings and Carbon Emissions for Different SEN Models.}
\begin{adjustbox}{width = \columnwidth, center}
\begin{tabular}{l| c c c c c}
\hline
\textbf{Models} & \textbf{Proposed} & No BESS & No HESS & No Flex. Demand & No assets\\
\hline
\hline
BESS & \checkmark & $\times$ & \checkmark & \checkmark & $\times$\\
HESS & \checkmark & \checkmark & $\times$ & \checkmark & $\times$\\
Flex. Demand & \checkmark & \checkmark & \checkmark & $\times$ & $\times$\\
\hline
\hline
\textbf{Cost Saving} (£) &1099.60 & 890.36& 1054.58 & 554.01& 451.26\\
\hline
\textbf{Carbon Emission} (kgCO\(_2\)e)& 265.25 & 1244.70 & 521.92& 1817.37 & 2175.66\\
\hline
\hline
\end{tabular}
\end{adjustbox}
\label{benefitsofBH}
\end{table}
\subsection{Improvement in Cost Saving and Carbon Emission}
To demonstrate the economic and environmental benefits of integrating the BESS and HESS in the SEN, the MADDPG algorithm was tested on different SEN models, as shown in Table \ref{benefitsofBH}. The models differ in which of the controllable assets (BESS, HESS and flexible demand) the SEN includes. For example, the model with only the HESS and flexible demand as controllable assets is denoted `No BESS'. The total cost savings and carbon emissions for each model were obtained by summing the half-hourly cost savings and carbon emissions over 7 days.
As shown in Table \ref{benefitsofBH}, integrating the BESS and HESS in the SEN and scheduling the energy demand achieves the highest cost savings and the largest reduction in carbon emissions. For example, compared with the SEN model without the BESS (the `No BESS' model), the cost savings are 23.5\% higher and the carbon emissions 78.69\% lower, mainly due to the improved RES utilisation of the proposed SEN model.
\subsection{Improvement in RES Utilisation}
To demonstrate improvement in RES utilisation as a result of integrating the BESS and the HESS in the SEN as well as scheduling energy demand, we use self-consumption and self-sufficiency as performance metrics. Self-consumption is defined as a ratio of RES generation used by the SEN (i.e., to meet the energy demand and to charge the BESS and HESS) to the overall RES generation \cite{luthander2015photovoltaic}. Self-sufficiency is defined as the ratio of the energy demand that is supplied by the RES, BESS and HESS to the overall energy demand \cite{long2018peer}.
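Both metrics are simple ratios of local use to the respective totals; a minimal sketch with illustrative values:

```python
def self_consumption(res_used_locally, res_total):
    """Share of RES generation consumed within the SEN (demand plus
    BESS/HESS charging) relative to overall RES generation."""
    return res_used_locally / res_total

def self_sufficiency(demand_met_locally, demand_total):
    """Share of energy demand supplied by the RES, BESS and HESS
    relative to the overall energy demand."""
    return demand_met_locally / demand_total
```

A model can score high on one metric and low on the other, which is why both are reported in Table \ref{benefitsofSS}.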
Table \ref{benefitsofSS} shows that integrating the BESS and the HESS in the SEN as well as scheduling energy demand improves RES utilisation. Overall, the proposed SEN model achieved the highest RES utilisation with 59.6\% self-consumption and 100\% self-sufficiency. This demonstrates the potential of integrating HESS in future SENs for absorbing more RES, thereby accelerating the rate of power system decarbonisation.
\begin{table}[t!]
\centering
\caption{Self-consumption and self-sufficiency for Different SEN Models.}
\begin{adjustbox}{width = \columnwidth, center}
\begin{tabular}{l| c c c c c}
\hline
\textbf{Models} & \textbf{Proposed} & No BESS & No HESS & No Flex. Demand & No assets\\
\hline
\hline
BESS & \checkmark & $\times$ & \checkmark & \checkmark & $\times$\\
HESS & \checkmark & \checkmark & $\times$ & \checkmark & $\times$\\
Flex. Demand & \checkmark & \checkmark & \checkmark & $\times$ & $\times$\\
\hline
\hline
\textbf{Self-consumption} &59.6\% & 48.0\%& 39.2\% & 46.0\%& 50.0\%\\
\hline
\textbf{Self-sufficiency} &100\% & 85.3\%& 95.2\% & 78.8\%& 73.4\%\\
\hline
\hline
\end{tabular}
\end{adjustbox}
\label{benefitsofSS}
\end{table}
\subsection{Algorithm Evaluation}
Performance of the proposed MADDPG algorithm was evaluated by comparing it to the bench-marking algorithms for cost savings, carbon emissions, self-consumption and self-sufficiency as shown in Fig. \ref{bench_making}.
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{savings.png}
\caption{}
\label{costs}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{self_consumption_sufficiency.png}
\caption{}
\label{self_s}
\end{subfigure}
\caption{Performance of the MADDPG algorithm compared to the bench-marking algorithms for: (a) cost savings and carbon emissions, (b) self-consumption and self-sufficiency.}
\label{bench_making}
\end{figure}
The MADDPG algorithm obtained the most stable and competitive performance across all the metrics considered, i.e., cost savings, carbon emissions, self-consumption and self-sufficiency. This is mainly due to its multi-agent structure, which ensures a better learning experience of the environment. For example, MADDPG improved the cost savings by 41.33\% and reduced the carbon emissions by 56.3\% relative to the RB approach. The rival DDPG algorithm achieved the highest cost savings, but at the expense of carbon emissions and self-sufficiency. As more controllable assets are expected in future SENs due to the digitisation of power systems, multi-agent algorithms are expected to play a key energy management role.
\subsection{Sensitivity Analysis of Parameter $\alpha_d$ }
The parameter \(\alpha_d\) quantifies the amount of flexibility to reduce the energy demand. A lower value of \(\alpha_d\) indicates that less attention is paid to the inconvenience cost and a larger share of the energy demand can be reduced to minimise the energy costs. A higher value of \(\alpha_d\) indicates that high attention is paid to the inconvenience cost and the energy demand can be hardly reduced to minimise the energy costs. With the change in \(\alpha_d\) values, the cost saving and carbon emission results are compared in Fig. \ref{parameter_alphas}.
As shown in Fig. \ref{parameter_alphas}, the cost savings decrease and the carbon emissions increase as \(\alpha_d\) increases from 0.0001 to 0.001, which means that the energy demand's sensitivity to price reduces with increased inconvenience levels, as given by (\ref{inc_cost}). Thus, an energy demand that is sensitive to the electricity price is crucial for reducing carbon emissions and promoting the use of RES.
\begin{figure}[t!]
\centering
\includegraphics[width=0.4\textwidth]{parameter_alpha.png}
\caption{Cost savings and carbon emissions for different $\alpha_d$ parameters.}
\label{parameter_alphas}
\vspace{-\baselineskip}
\end{figure}
\section{Conclusions}\label{conclusion}
In this paper, we investigated the problem of minimising energy costs and carbon emissions, as well as increasing renewable energy utilisation, in a smart energy network (SEN) with a BESS, a HESS and schedulable energy demand. A multi-agent deep deterministic policy gradient algorithm was proposed as a real-time control strategy to optimally schedule the operation of the BESS, HESS and schedulable energy demand while ensuring that the operating constraints and time-coupled storage dynamics of the BESS and HESS are satisfied. Simulation results based on real-world data showed increased cost savings, reduced carbon emissions and improved renewable energy utilisation with the proposed algorithm and SEN model. On average, the cost savings were 23.5\% higher and the carbon emissions 78.69\% lower with the proposed SEN model than with the baseline SEN models. The simulation results also verified the efficacy of the proposed algorithm in managing the SEN, outperforming benchmark algorithms including the DDPG and DQN algorithms. Overall, the results show great potential for integrating HESS in SENs and for using self-learning algorithms to manage SEN operation.
\section*{Acknowledgement}
This work was supported by the Smart Energy Network Demonstrator project (grant ref. 32R16P00706) funded by ERDF and BEIS. This work is also supported by the EPSRC EnergyREV project (EP/S031863/1).
\bibliographystyle{elsarticle-num}
\section{Benchmark Approaches}
\label{sec:base-model}
In the context-dependent setting, many researchers focus on parsing-model innovation \cite{yu2020score, hui2021dynamic, zheng2022hie}. In this work, we adopt EditSQL \cite{zhang2019editing}, IGSQL \cite{cai2020igsql} and extended RATSQL (EX-RATSQL) \cite{guo2021chase} as our benchmark approaches and evaluate their performance on our new dataset, SeSQL. For the three baseline approaches, we use the default parameter settings in their released code.
Please note that in our experiments we use the fixed hyper-parameter values given in the released source code; that is, we do not perform hyper-parameter search to find better values.
Meanwhile, following \citet{guo2021chase}, we use BERT-base \cite{devlin2019bert} to enhance the three parsers.
\textbf{EditSQL}\footnote{\url{https://github.com/ryanzhumich/editsql}} exploits the interaction history by editing the previously predicted query to improve generation quality, as adjacent NL questions are often linguistically dependent and their corresponding SQL queries tend to overlap. During decoding, it views an SQL query as a token sequence and uses an editing mechanism to reuse the previously generated SQL query at the token level.
In the encoder, to deal with complex table structures, it employs a question-table encoder to incorporate the context of the user question and the table schema.
It takes about 7 days to train a basic BERT-enhanced EditSQL model on a V100 GPU. The EditSQL model has about 115M parameters.
\textbf{IGSQL}\footnote{\url{https://github.com/headacheboy/IGSQL}} incorporates a graph encoder into EditSQL to capture historical information of user questions and database schema items.
In the encoding phase, it not only uses an interaction encoder to capture the history of user NL questions, but also uses a DB schema interaction graph encoder to exploit the history of DB schema items.
In the decoding phase, to predict SQL tokens, IGSQL introduces a gate mechanism that weighs the importance of vocabulary items from different sources, including DB schema items and the previously generated SQL query.
It takes about 6 days to train a basic BERT-enhanced IGSQL model on a V100 GPU. An IGSQL model has about 110M parameters.
\textbf{EX-RATSQL}\footnote{\url{https://github.com/xjtu-intsoft/chase/tree/main/Benchmark_Approaches/DuoratChar}} is an extension of RATSQL \cite{wang2020rat}, which uses a relation-aware transformer encoder and a grammar-based decoder to generate SQL queries and performs well in the context-independent setting. Compared with RATSQL, EX-RATSQL adopts a simple concatenation-based context modelling approach, concatenating the current question with all prior context-dependent questions using a special symbol [SEP], from the back forward. In all other respects it is identical to RATSQL. The code of EX-RATSQL is based on DuoRAT \cite{scholak2021DuoRAT}; in our experiments we set the batch size to 12 and the maximum number of steps to 200,000, and use default values for all other hyper-parameters.
It takes about 8 days to train a basic BERT-enhanced EX-RATSQL model on a V100 GPU. An EX-RATSQL model has about 135M parameters.
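The back-forward concatenation used by EX-RATSQL can be sketched as follows (an illustrative sketch; the token-level truncation limit `max_len` is an assumption for exposition, not taken from the released code):

```python
def concat_context(questions, max_len=64):
    """Concatenate the current question (last in the list) with prior
    questions, newest first, joined by [SEP]. Questions are added from
    the back forward until the token budget would be exceeded."""
    tokens = []
    for q in reversed(questions):
        candidate = q.split() if not tokens else tokens + ["[SEP]"] + q.split()
        if len(candidate) > max_len:
            break  # older turns are dropped first
        tokens = candidate
    return " ".join(tokens)
```

This keeps the most recent turns, which are the ones most likely to resolve ellipsis and co-reference in the current question.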
In the context-independent setting, we use a widely-used approach RATSQL \cite{wang2020rat} and a SOTA approach LGESQL \cite{cao2021LGESQL} as the benchmark approaches to understand the characteristics of our dataset. We evaluate RATSQL and LGESQL on all context-independent pairs.
\textbf{RATSQL}\footnote{\url{https://github.com/microsoft/rat-sql}} uses a relation-aware transformer encoder to better model the connections between DB schemas and NL questions, and a grammar-based decoder to ensure the grammaticality of the generated SQL queries. In our experiments, we set the batch size to 4 and the random seed to 0, and use the default values in the released code for all other parameters and hyper-parameters. Similarly, we use Chinese BERT-wwm \cite{cui2021pre} to enhance our RATSQL model.
Training RATSQL is also expensive: it takes about 6 days to train a basic BERT-wwm-enhanced RATSQL model on a V100 GPU. A RATSQL model has about 168M parameters.
\textbf{LGESQL}\footnote{\url{https://github.com/rhythmcao/text2sql-lgesql.git}} takes advantage of a dual RGAT to jointly encode the questions and the schemas. Compared with RATSQL, LGESQL pays more attention to the topological structure of edges and applies an edge-centred line graph to enhance the encoding of 1-hop edge features. In addition, it proposes a graph pruning method as an auxiliary task to help the encoder improve its discriminative capability. It likewise uses a grammar-based decoder to generate SQL queries. In our experiments, we leave the default hyper-parameters unchanged.
It takes about 8 days to train a basic BERT-enhanced LGESQL model on a V100 GPU. The LGESQL model has about 148M parameters.
\section{Annotation Tool}
\label{sec:annotation-tool}
Figure \ref{fig:annotation_tools} shows the user interface of our annotation tool. The details of annotating process have been mentioned in Section \ref{sec:data-construct}. During the process of annotating and checking question/SQL sequences, this annotation tool helps to improve annotation speed and data quality.
\section{More Annotation Examples}
\label{sec:more-examples}
\begin{table*}[tb]
\renewcommand\tabcolsep{2.5pt}
\renewcommand\arraystretch{1.3}
\small
\centering
\scalebox{0.624} {
\begin{tabular}{l l c c}
\toprule
\textbf{\#} & \textbf{Question\&SQL Query} & \textbf{Context Dependency} & \textbf{Thematic Transition}\\
\hline
\multicolumn{2}{l}{\textbf{Session 1}}\\
\hline
\multirow{2}{*}{$q^{1}_{1}$} & {请列出所有频道的相关收视信息。} & \multirow{4}{*}{\makecell[c]{Independent}} & \multirow{4}{*}{\makecell[c]{--}}\\
~ & (Please list the relevant information of all channel.) \\ \multirow{2}{*}{$y^{1}_{1}$} & {select * from 频道收视}\\ ~ & (select * from channel\_ratings)\\
\hline
\multirow{2}{*}{$q^{1}_{2}$} & {市场份额最高的那个呢?} & \multirow{4}{*}{\makecell[c]{Dependent \\ (Ellipsis)}} & \multirow{4}{*}{\makecell[c]{Changing display}}\\ ~ & (Tell me the one with the highest market share.)\\ \multirow{2}{*}{$y^{1}_{2}$} & select * from 频道收视 order by 市场份额 desc limit 1\\ ~ & (select * from channel\_ratings order by market\_share desc limit 1) \\
\hline
\multirow{2}{*}{$q^{1}_{3}$} & {还是把市场份额不低于10\%的都列出来吧。} & \multirow{4}{*}{\makecell[c]{Dependent \\ (Ellipsis)}} & \multirow{4}{*}{\makecell[c]{Changing conditions}}\\ ~ & (Now list all relevent information about channels with a market share of no less than 10\% instead.) \\ \multirow{2}{*}{$y^{1}_{3}$} & {select * from 频道收视 where 市场份额 >= 0.1}\\ ~ & (select * from channel\_ratings where market\_share >= 0.1)\\
\hline
\multirow{2}{*}{$q^{1}_{4}$} & {其中直播关注度超过0.1\%的呢?} & \multirow{4}{*}{\makecell[c]{Dependent \\ (Both)}} & \multirow{4}{*}{\makecell[c]{Changing conditions}}\\ ~ & (How about counting only those in the above results with more than 0.1\% live streaming attention?)\\ \multirow{2}{*}{$y^{1}_{4}$} & {select * from 频道收视 where 市场份额 >= 0.1 and 直播关注度 > 0.001}\\~ & (select * from channel\_ratings where market\_share >= 0.1 and live\_streaming\_attention > 0.001) \\
\hline
\multirow{2}{*}{$q^{1}_{5}$} & {帮我再查一下市场份额高于均值的吧。} & \multirow{4}{*}{\makecell[c]{Dependent \\ (Ellipsis)}} & \multirow{4}{*}{\makecell[c]{Changing conditions}}\\ ~ & (Now I want to know the information of the channels whose market share is above average, can you work it out?) \\ \multirow{2}{*}{$y^{1}_{5}$} & {select * from 频道收视 where 市场份额 > ( select avg(市场份额) from 频道收视 )}\\ ~ & (select * from channel\_ratings where market\_share > ( select avg( market\_share ) from channel\_ratings )) \\
\hline
\multicolumn{2}{l}{\textbf{Session 2}}\\
\hline
\multirow{2}{*}{$q^{2}_{1}$} & {将所有基金公司按照注册资金由多到少排一下序,并告诉我它们对应的封闭式与开放式基金之和分别是多少。} & \multirow{4}{*}{\makecell[c]{Independent}} & \multirow{4}{*}{\makecell[c]{--}}\\ ~ & (Sort all funds according to their registered capital from most to least and show me the sum of their closed-end and open-end funds.) \\
\multirow{2}{*}{$y^{2}_{1}$} & {select 名称, 封闭式基金数量 + 开放式基金数量 from 基金公司 order by 注册资本(万) desc}\\ ~ & (select name , num\_closed-end\_funds + num\_open-end\_funds from fund\_company order by registered\_capital desc) \\
\hline
\multirow{2}{*}{$q^{2}_{2}$} & {顺便给出它们分别有多少亚洲债券基金吧。} & \multirow{4}{*}{\makecell[c]{Dependent \\ (Both)}} & \multirow{4}{*}{\makecell[c]{Changing SELECT}}\\ ~ & (By the way, how many Asian bond funds do they have?) \\ \multirow{2}{*}{$y^{2}_{2}$} & {select 名称, 亚洲债券基金数量, 封闭式基金数量 + 开放式基金数量 from 基金公司 order by 注册资本(万) desc}\\ ~ & (select name , num\_Asian\_bond\_funds , num\_closed-end\_funds + num\_open-end\_funds from fund\_company order by registered\_capital desc) \\
\hline
\multirow{2}{*}{$q^{2}_{3}$} & {这亚洲债券基金好像没什么参考价值,换成注册资金看看呢?} & \multirow{4}{*}{\makecell[c]{Dependent \\ (Ellipsis)}} & \multirow{4}{*}{\makecell[c]{Changing SELECT}}\\ ~ & (The Asian bond fund seems useless, can you replace it with registered capital?) \\ \multirow{2}{*}{$y^{2}_{3}$} & {select 名称, 注册资本(万), 封闭式基金数量 + 开放式基金数量 from 基金公司 order by 注册资本(万) desc}\\ ~ & (select name , registered\_capital , num\_closed-end\_funds + num\_open-end\_funds from fund\_company order by registered\_capital desc) \\
\hline
\multirow{2}{*}{$q^{2}_{4}$} & {再把上述结果按注册资金从少到多排一下序让我瞅瞅吧。} & \multirow{4}{*}{\makecell[c]{Dependent \\ (Coreference)}} & \multirow{4}{*}{\makecell[c]{Changing display}}\\ ~ & (Now sort the above results by registered capital from least to most and show me.) \\ \multirow{2}{*}{$y^{2}_{4}$} & {select 名称, 注册资本(万), 封闭式基金数量 + 开放式基金数量 from 基金公司 order by 注册资本(万) asc}\\ ~ & (select name , registered\_capital , num\_closed-end\_funds + num\_open-end\_funds from fund\_company order by registered\_capital asc) \\
\hline
\multicolumn{2}{l}{\textbf{Session 3}}\\
\hline
\multirow{2}{*}{$q^{3}_{1}$} & {电影分很多种类型,麻烦将各类型按对应的电影数从多到少排序。} & \multirow{4}{*}{\makecell[c]{Independent}} & \multirow{4}{*}{\makecell[c]{--}}\\ ~ & (As we all know, there are plenty of movie genres, can you sort them according to the number of movies they have from most to least?) \\
\multirow{2}{*}{$y^{3}_{1}$} & {select 类型 from 电影 group by 类型 order by count(*) desc}\\ ~ & (select genre from movie group by genre order by count(*) desc) \\
\hline
\multirow{2}{*}{$q^{3}_{2}$} & {那是哪种类型一百分钟以上的电影最多?} & \multirow{4}{*}{\makecell[c]{Dependent \\ (Others)}} & \multirow{4}{*}{\makecell[c]{Changing conditions}}\\ ~ & (Which genre has the most movies over 100 minutes?) \\ \multirow{2}{*}{$y^{3}_{2}$} & {select 类型 from 电影 where 片长(分钟) > 100 group by 类型 order by count(*) desc limit 1}\\ ~ & (select genre from movie where length > 100 group by genre order by count(*) desc limit 1) \\
\hline
\multirow{2}{*}{$q^{3}_{3}$} & {如果只统计票价超过50元的呢?} & \multirow{6}{*}{\makecell[c]{Dependent \\ (Ellipsis)}} & \multirow{6}{*}{\makecell[c]{Changing conditions}}\\ ~ & (What if we only count movies that cost more than ¥50?) \\ \multirow{4}{*}{$y^{3}_{3}$} & select 电影.类型 from 电影 join 电影上映 on 电影上映.电影id = 电影.词条id where 电影上映.票价(元) > 50 \\ ~ & group by 电影.类型 order by count(*) desc limit 1 \\ ~ & (select movie.genre from movie join movie\_released on movie\_released.id = movie.id where movie\_released.cost > 50 \\ ~ & group by movie.genre order by count(*) desc limit 1)\\
\hline
\multirow{2}{*}{$q^{3}_{4}$} & {又有哪些类型符合条件的电影超过三部?} & \multirow{6}{*}{\makecell[c]{Dependent \\ (Both)}} & \multirow{6}{*}{\makecell[c]{Changing conditions}}\\ ~ & (Which movie genres have more than three movies that meet the criteria?) \\ \multirow{4}{*}{$y^{3}_{4}$} & select 电影.类型 from 电影 join 电影上映 on 电影上映.电影id = 电影.词条id where 电影上映.票价(元) > 50 \\ ~ & group by 电影.类型 having count(*) > 3 \\ ~ & (select movie.genre from movie join movie\_released on movie\_released.id = movie.id where movie\_released.cost > 50 \\ ~ & group by movie.genre having count(*) > 3)\\
\hline
\multicolumn{2}{l}{\textbf{Session 4}}\\
\hline
\multirow{2}{*}{$q^{4}_{1}$} & {社交软件有多少个?} & \multirow{4}{*}{\makecell[c]{Independent}} & \multirow{4}{*}{\makecell[c]{--}}\\~ & (How many social apps are there?) \\
\multirow{2}{*}{$y^{4}_{1}$} & {select count(*) from 社交APP}\\ ~ & (select count(*) from social\_APP) \\
\hline
\multirow{2}{*}{$q^{4}_{2}$} & {占内存30MB以上的呢?} & \multirow{4}{*}{\makecell[c]{Dependent \\ (Ellipsis)}} & \multirow{4}{*}{\makecell[c]{Changing conditions}}\\ ~ & (How about those occupying more than 30 MB of memory?) \\ \multirow{2}{*}{$y^{4}_{2}$} & {select count(*) from 社交APP where 软件大小(M) > 30}\\ ~ & (select count(*) from social\_APP where memory\_usage > 30) \\
\hline
\multirow{2}{*}{$q^{4}_{3}$} & {各公司旗下分别有多少这样的软件?} & \multirow{6}{*}{\makecell[c]{Dependent \\ (Coreference)}} & \multirow{6}{*}{\makecell[c]{Changing conditions}}\\ ~ & (How many software mentioned above does each company have?) \\ \multirow{4}{*}{$y^{4}_{3}$} & select 公司.名称, count(*) from 公司 join 社交APP on 社交APP.母公司id = 公司.词条id where 社交APP.软件大小(M) > 30 \\ ~ & group by 公司.名称 \\ ~ & (select company.name, count(*) from company join social\_APP on social\_APP.parent\_company\_id=company.id \\ ~ & where social\_APP.memory\_usage > 30 group by company.name) \\
\hline
\multirow{2}{*}{$q^{4}_{4}$} & {分别总共有多少用户注册呢?} & \multirow{6}{*}{\makecell[c]{Dependent \\ (Both)}} & \multirow{6}{*}{\makecell[c]{Changing SELECT}}\\ ~ & (How many users are registered in total, respectively?) \\ \multirow{4}{*}{$y^{4}_{4}$} & select 公司.名称, count(*), sum(社交APP.注册用户量(亿)) from 公司 join 社交APP on 社交APP.母公司id = 公司.词条id \\ ~ & where 社交APP.软件大小(M) > 30 group by 公司.名称\\ ~ & (select company.name, count(*), sum(social\_APP.num\_registered\_users) from company join social\_APP on social\_APP.parent\_company\_id = company.id \\ ~ & where social\_APP.memory\_usage > 30 group by company.name) \\
\hline
\end{tabular}
}
\caption{Question sequence examples in SeSQL.}
\label{tab:more-cases}
\end{table*}
\section{Conclusions}
This paper presents SeSQL, yet another large-scale session-level Chinese text-to-SQL dataset.
We describe its construction methodology and process in detail, and present a detailed analysis of it.
We conduct benchmark experiments with three representative session-level parsers, and show that SeSQL exhibits several important features compared with CHASE.
First, all 5,028 sessions are manually constructed from scratch, whereas only 2,003 sessions in CHASE-C are manually constructed from scratch.
Second, when used as extra training data, SeSQL consistently improves performance on both CHASE-C and CHASE-T. This indicates that SeSQL is of higher quality and has stronger generalization ability.
Third, by completing context-dependent questions, SeSQL provides 27,012 context-independent question/SQL pairs, and thus can be used as a solid dataset for future research on single-round text-to-SQL parsing.
\section{Analysis of SeSQL}
\label{sec:data-analysis}
\textbf{Basic statistics.}
As shown in Table \ref{tab:data_all_stas}, SeSQL contains 5,028 unique question sequences over 201 DBs, with 27,012 questions annotated with their corresponding SQL queries.
First of all, compared with English text-to-SQL datasets, both SeSQL and CHASE have a larger number of sessions and question/query pairs.
Second, both SeSQL and CHASE are more challenging, due to their higher percentages of context-dependent and non-easy questions.
Compared with CHASE, SeSQL contains more question/query rounds per session.
We attribute this to the seven categories of thematic transition that we design, which make it more flexible for annotators to create next-round SQL queries.
Moreover, SeSQL has overall higher percentages of context-dependent and non-easy questions.
Looking into CHASE, as we discussed earlier, only 2,003 sessions and 7,694 question/query pairs (i.e., CHASE-C) are annotated from scratch, which is far fewer than in SeSQL. In the experiments, we show that CHASE-C and CHASE-T are highly discrepant and incompatible as training and evaluation data.
Finally, SeSQL provides corresponding context-independent questions for all context-dependent ones, and thus can also serve as a single-round text-to-SQL dataset.
\input{figure/thematic1}
\textbf{Thematic transition.}
We compute the thematic transition distributions and show them in Figure \ref{fig:thematic_dis}. We find that the most frequent transitions are \emph{Changing conditions} and \emph{Changing SELECT}, two very common contextual thematic relations in conversational QA systems \cite{bertomeu2006contextual}. Meanwhile, transitions of \emph{Changing tables} and \emph{Combining queries} rarely occur. In the former case, the next-round SQL query raises another related topic, and its NL question is usually context-independent. The latter case usually leads to a very complex next-round SQL query.
\begin{table}[tb]
\small
\center
\scalebox{0.95} {
\begin{tabular}{l | c c c c c}
\toprule
Datasets & Indep. & Core. & Elli. & Both & Others\\
\hline
SParC & \textbf{47.5} & 31.6 & 25.9 & 5.0 & 0 \\
CHASE & 35.3 & 35.7 & 28.5 & 0.5 & 0 \\
\quad CHASE-C & 28.8 & \textbf{39.8} & 30.9 & 0.5 & 0 \\
\quad CHASE-T & 42.2 & 31.4 & 24.7 & 1.7 & 0 \\
SeSQL & 34.5 & 13.4 & \textbf{35.0} & \textbf{12.5} & \textbf{4.6} \\
\bottomrule
\end{tabular}
}
\caption{Context dependency distributions of existing datasets, where the reported results of SParC, CHASE, CHASE-C and CHASE-T are from \citet{guo2021chase}.}
\label{tab:dep_relation}
\end{table}
\textbf{Context dependency.} Table \ref{tab:dep_relation} shows the distribution of context dependency types across different datasets. Following previous studies, we distinguish five types of context dependency, i.e., independent (indep.), co-reference (core.), ellipsis (elli.), hybrid of co-reference and ellipsis (both)\footnote{The values of ``both'' for the other three datasets are inferred from their reported ``Core.'' and ``Elli.'' results.}, and others.
First, the proportion of context-independent questions in SeSQL is much lower than in SParC and CHASE-T, and is only 5.7\% higher than in CHASE-C.
Second, SeSQL has the highest percentage of questions with ellipsis, and of questions with both co-reference and ellipsis.
Finally, the remaining 4.6\% of questions in SeSQL are related to previous questions in ways other than co-reference and ellipsis.
From the distribution analysis, we believe that compared with CHASE, SeSQL can be used as a new and complementary resource for research on session-level text-to-SQL parsing.
\section{Dataset Construction}
\label{sec:data-construct}
The construction of SeSQL mainly consists of five steps: 1) DB collection and cleansing, 2) initial SQL query creation, 3) subsequent question/SQL generation, 4) review and final question creation, and 5) completing context-dependent questions.
We first introduce our overall annotation workflow in Section \ref{subsec:iterative-workflow}, and then detail the five steps in Sections \ref{subsec:db-cleansing}-\ref{subsec:full-review-final-question}, and finally discuss other annotation details in Section \ref{subsec:other-details}.
\subsection{An Iterative Annotation Workflow}
\label{subsec:iterative-workflow}
In the early stage of this work, we observed that one annotator tended to have a very limited number of ways for advancing a session, probably due to thinking habits and background knowledge.
In other words, annotators usually followed a few fixed patterns to ask new questions in order to improve annotation speed.
Therefore, if we let one annotator complete a whole session, the constructed data would probably contain strong annotator-related biases \cite{Mor2019are} and be less diverse.
To deal with this issue, we adopted an \emph{iterative annotation workflow}, as illustrated in Figure \ref{fig:interaction-system-creation-process}.
The basic idea is that one annotator only completes one NL question and one SQL query, and previous submissions are intensively reviewed by subsequent annotators.
There are six possible subtasks for an annotator to complete at a time.
\textbf{Subtask 1: knowing the context.}
The annotator first reads all previous NL questions in order to 1) know what the session is about, and 2) avoid asking identical or similar questions.
\textbf{Subtask 2: checking the previous submission.} The annotator must carefully check and correct the submission of the previous annotator, which usually consists of two parts, i.e., a current-round SQL query, and a previous-round NL question (if not first-round).
We find that this step is very important for avoiding error accumulation.
\textbf{Subtask 3: writing an NL question.}
The annotator writes a qualified NL question for the current-round SQL query.
On the one hand, the question should correctly and exactly express the meaning of the SQL query.
On the other hand, the question should be expressed in a flexible and natural manner, imitating human conversation in real life.
\input{table/set_example_table1.tex}
\textbf{Subtask 4: composing an SQL query.}
The annotator composes a new next-round SQL query, which is detailed in Section \ref{subsec:question-query-create}.
\textbf{Subtask 5: verifying corrections via interaction. }
If annotator A finds and corrects mistakes in the previous submission of another annotator B,
our annotation tool delivers the original submission along with the corrections to annotator B for confirmation. If annotator B agrees with A, the issue is settled; otherwise, a senior annotator is called in to make a final decision.
\textbf{Subtask 6: making a request for ending a session.}
An annotator may make a request for ending a session after completing subtask 2, when he fails to think of anything more to ask.
Then a senior annotator handles the request.
Following \citet{yu2019sparc}, we require that the number of question/SQL pairs in a session range between 3 and 10. A session is automatically terminated if the number reaches 10.
\textbf{Discussion.} \emph{The one-annotator-one-session workflow}, adopted by CHASE, means that a session is completed by a single annotator.
As we discussed in Section \ref{sec:intro}, it may introduce strong annotator-related bias, since annotators usually have a limited number of ways to advance a session.
We observe that our iterative workflow can effectively alleviate this issue.
Another advantage of our iterative workflow is that a previous submission is reviewed in a timely manner, which avoids error accumulation and improves data quality.
In contrast, in CHASE, data review can only be performed after a whole session is completed.
Nevertheless, it would be very expensive to compare the two workflows via strict quantitative experiments, which is beyond the scope of this work.
\subsection{DB Collection and Cleansing (S1)} \label{subsec:db-cleansing}
Collecting DBs is non-trivial work. For simplicity, we reuse all 201 DBs with 813 tables of the DuSQL dataset\footnote{The license and data is at \url{https://www.luge.ai/\#/luge/dataDetail?id=13}.} \cite{wang-etal-2020-dusql}.
After looking into the data, we find a lot of noise in the original DBs of DuSQL, as also pointed out by \citet{guo2021chase}.
Most noise falls into four categories: 1) primary or foreign keys are not given; 2) the value type of a cell does not match its column type;
3) some cells do not have values; 4) duplicate values occur in the primary key column.
In order to improve the quality of DBs and make sure that all legal SQL queries can be successfully executed,
our six senior annotators have manually checked and corrected all DBs\footnote{Please note that we ask annotators not to introduce identification information and ask them to anonymize the existing identification information.} before real annotation. Each annotator handles about 35 DBs.
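Before handing DBs to annotators, the four noise categories above can also be screened automatically. Below is a minimal sketch over a toy in-memory table representation; the schema encoding and helper name are hypothetical illustrations, not part of our pipeline:

```python
# Minimal sketch of automatic screening for the four DB noise categories
# (missing keys, type mismatches, empty cells, duplicate primary keys).
# The (name, type, is_primary) schema encoding is a hypothetical illustration.

def screen_table(schema, rows):
    """schema: list of (col_name, col_type, is_primary); rows: list of tuples."""
    issues = []
    # 1) no primary key declared at all
    if not any(pk for _, _, pk in schema):
        issues.append("no primary key declared")
    for i, (name, ctype, pk) in enumerate(schema):
        col = [r[i] for r in rows]
        # 2) cell value type does not match the declared column type
        want = {"number": (int, float), "text": (str,)}[ctype]
        if any(v is not None and not isinstance(v, want) for v in col):
            issues.append(f"type mismatch in column {name}")
        # 3) empty cells
        if any(v is None for v in col):
            issues.append(f"missing values in column {name}")
        # 4) duplicate values in a primary key column
        if pk and len(set(col)) < len(col):
            issues.append(f"duplicate primary keys in column {name}")
    return issues

schema = [("id", "number", True), ("name", "text", False)]
rows = [(1, "CCTV-1"), (1, None)]
print(screen_table(schema, rows))
```

Such a pass only flags candidate problems; the actual corrections were still made manually by the senior annotators.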
\subsection{Initial SQL Query Creation (S2)}
Creating suitable initial queries is crucial for session-level text-to-SQL data creation, since they directly influence subsequent annotations.
The suitability of initial queries depends on two aspects, i.e., simplicity and diversity.
Regarding the first aspect, we find that queries at easy and medium difficulty levels are the most appropriate as initial queries.
We follow the definition of difficulty levels in \citet{yu2018spider}.
The second aspect indicates that initial queries should cover as many SQL keywords as possible.
In order to satisfy both aspects, we induced 60 SQL query templates from the single-round Spider and DuSQL datasets, each containing slots corresponding to masked table/column names and cell values.
Given a DB, we require that the initial query match one of the templates (simplicity), and create at most one initial query per template (diversity).
We create 5,028 valid initial queries in total, among which 1,761 are from DuSQL, and 3,267 are written from scratch by our senior annotators.
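The template-filling step can be sketched as follows; the template strings, IDs, and helper name are hypothetical illustrations (the real 60 templates were induced from Spider and DuSQL), and the one-query-per-template constraint corresponds to the diversity requirement above:

```python
# Sketch of template-based initial query creation (S2).
# Template texts and IDs are hypothetical illustrations.
TEMPLATES = {
    "t01": "select {col} from {tab}",
    "t02": "select {col} from {tab} where {cond_col} > {val}",
}

used = set()  # enforce at most one initial query per template (diversity)

def create_initial_query(template_id, **slots):
    if template_id in used:
        raise ValueError("at most one initial query per template")
    used.add(template_id)
    # fill the masked table/column names and cell values (simplicity)
    return TEMPLATES[template_id].format(**slots)

q = create_initial_query("t02", col="*", tab="频道收视",
                         cond_col="市场份额", val="0.1")
print(q)  # select * from 频道收视 where 市场份额 > 0.1
```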
\subsection{Subsequent Question/SQL Creation (S3)}
\label{subsec:question-query-create}
As discussed in Section \ref{subsec:iterative-workflow}, subsequent questions and queries are created by multiple randomly selected annotators, each contributing one current-round NL question and one next-round SQL query.
This subsection focuses on how to create a next-round SQL query given existing context.
Similar to context-dependent QA \cite{bertomeu2006contextual}, it is crucial to make the thematic transition and context dependency between adjacent utterances as realistic as possible, where theme refers to users' information need, and context dependency concerns the manner of reusing previous content.
In this step (i.e., S3), we mainly consider the thematic transition, since reusing previous content is usually a natural choice for annotators.
As for the context dependency information, we follow CHASE and create explicit annotation (see Section \ref{ssec:fine-annotation}).
\textbf{Seven categories of thematic transition.}
To capture theme change and encourage diversity, we design seven transition categories to represent the relationship between the current-round and next-round queries, i.e., $y_j$ and $y_{j+1}$, as illustrated in Table \ref{tab:thematic-transition}.
Please note that we also allow annotators to compose a thematically ``unrelated'' query, which sometimes happens in real-world scenarios.
The bottom left corner of Figure \ref{fig:interaction-system-creation-process} illustrates the concrete operations.
Given $y_j$, the annotator first selects a transition category; then our annotation tool suggests several potential SQL templates according to $y_j$ and the selected category; finally the annotator selects an SQL template and fills it with DB elements to complete an SQL query.
\subsection{{Review \& Final Question Creation (S4)}}
\label{subsec:full-review-final-question}
\input{table/data_stas.tex}
This step is performed by our senior annotators.
If an ordinary annotator makes a request to terminate a session, the annotation tool will transmit the request to a senior annotator.
If the senior annotator agrees, then he must carefully review all previous questions and SQL queries, and correct all found mistakes.
After that, he writes an NL question for the final-round SQL query.
\subsection{Completing Context-Dependent Questions (S5)}
\label{ssec:fine-annotation}
In order to capture context dependency and make our dataset more widely applicable, we perform this step in a separate manner after all sessions are completed via the above four steps (S1-S4).
Each session is then assigned to one senior annotator. The annotator first goes through all NL questions, and decides whether each question is context-dependent.
Then each context-dependent question is rewritten into a corresponding context-independent one.
There are in total 17,704 context-dependent questions, accounting for 65.5\% of all questions (see Table \ref{tab:dep_relation}).
As a result, \emph{SeSQL can serve as a single-round Chinese text-to-SQL dataset as well}, like DuSQL \cite{wang-etal-2020-dusql}. Moreover, it can also support research on question completion techniques.
\textbf{Context dependency types.}
Inspired by CHASE \cite{guo2021chase} and context-dependent QA \cite{bertomeu2006contextual},
we ask annotators to explicitly annotate the way that a context-dependent question depends on its previous questions.
There are five types, i.e., independent, co-reference, ellipsis, hybrid of co-reference and ellipsis, and others.
Such annotation can help us to better understand results of text-to-SQL parsers.
\subsection{Other Annotation Details}
\label{subsec:other-details}
\textbf{Annotators and Training.}
We recruit 28 undergraduate students as our part-time annotators, and 6 master students as senior annotators, including three co-authors of this paper. All of them come from the computer science department of our university and are familiar with the SQL language.
Before real annotation, we train all annotators several times so that they understand the text-to-SQL parsing task, the annotation workflow, and the annotation tool. During real annotation, we also hold several meetings to discuss common mistakes and settle disputes.
Our annotation project lasts for about half a year.
\textbf{Annotation tool.} We build an online browser-based annotation tool to facilitate this work. Figure \ref{fig:annotation_tools} in Appendix \ref{sec:annotation-tool} shows the annotation interface.
\textbf{Payment.}
All annotators were paid for their work based on the quality and quantity of their annotations.
According to the annotation time recorded by our annotation tool, the average salary per hour is 25 RMB for ordinary annotators, and 35 RMB for senior annotators.\footnote{The average salary is about 20 RMB for a part-time KFC employee in our city.}
A total of 106K RMB is paid to annotators.
\section{Experiments}
\label{sec:experiment}
\textbf{Datasets.}
According to the cross-domain setting, we split SeSQL such that there is no DB overlap in train/dev/test sets. Since our DBs are from DuSQL, we follow its DB split for three sets of SeSQL. Table \ref{tab:data-split} shows the data split statistics.
\textbf{Evaluation metrics.}
We use two popular metrics to evaluate model performances: Question-level Match (QM), the exact matching score over all questions, and Interaction-level Match (IM), the exact matching score over all interactions.
The exact matching score is 1 for a question only if all its predicted SQL clauses are correct, and 1 for an interaction only if the exact matching score for every question in the interaction is 1.
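Assuming a per-question exact-match boolean has already been computed (the real metric compares the clause sets of predicted and gold SQL), QM and IM can be sketched as:

```python
# Sketch of question-level match (QM) and interaction-level match (IM).
# Takes per-question exact-match booleans as given.

def qm_im(sessions):
    """sessions: list of interactions; each interaction is a list of
    per-question exact-match booleans."""
    flat = [m for session in sessions for m in session]
    qm = sum(flat) / len(flat)                                      # over questions
    im = sum(all(session) for session in sessions) / len(sessions)  # over interactions
    return qm, im

# 7 questions with 6 correct; 3 interactions with 2 fully correct
qm, im = qm_im([[True, True], [True, False], [True, True, True]])
```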
\textbf{Benchmark approaches.}
We adopt several competitive models that have published the corresponding source codes as the baseline approaches, i.e., EditSQL \cite{zhang2019editing}, IGSQL \cite{cai2020igsql} and extended RATSQL (EX-RATSQL) \cite{guo2021chase} for the context-dependent setting, as well as RATSQL \cite{wang2020rat} and LGESQL \cite{cao2021LGESQL} for the context-independent setting.
Due to space limitation, we show their implementation details in Appendix \ref{sec:base-model}.
\input{table/main-results.tex}
\input{table/sesql_chase_comparison.tex}
\input{table/fine-results.tex}
\subsection{Results}
\label{ssec:result}
\textbf{Overall performances.} Table \ref{tab:multi_main_result} shows the overall performances of five baseline models, where the first row shows performances of three session-level models (i.e., EditSQL, IGSQL and EX-RATSQL) on the SeSQL's session-level data, and the second row shows performances of RATSQL and LGESQL on the single-round data of SeSQL.
IGSQL and LGESQL have achieved the best performances on the session-level and single-round data, respectively.
But the results on the session-level data are far from satisfactory, in two respects. First, the best performance on IM, the primary metric in the session-level setting, is only 29.0\% on the test set. Second, the best QM accuracy, achieved by IGSQL, is 59.5\%, whereas the best QM accuracy on the single-round data is 71.0\%. That is, there is large room for improvement in both QM and IM on SeSQL. We believe SeSQL can facilitate research on session-level text-to-SQL parsing.
\textbf{Comparison between SeSQL and CHASE.}
We use different combinations of SeSQL and CHASE as training data, with three separate dev sets, in order to understand data similarity and discrepancy.
To avoid DB overlap, which would corrupt the cross-DB text-to-SQL parsing task,
we remove from each training set all DBs that also appear in any of the three dev sets, along with the corresponding question/SQL pairs.
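This DB-overlap filtering can be sketched as follows; the `db_id` field name is a hypothetical illustration:

```python
# Sketch of DB-overlap filtering for cross-dataset experiments:
# drop every training example whose DB also appears in any dev set.
# The "db_id" field name is a hypothetical illustration.

def filter_train(train_examples, dev_sets):
    dev_dbs = {ex["db_id"] for dev in dev_sets for ex in dev}
    return [ex for ex in train_examples if ex["db_id"] not in dev_dbs]

train = [{"db_id": "fund_company"}, {"db_id": "movie"}]
dev_sets = [[{"db_id": "movie"}]]
print(filter_train(train, dev_sets))  # [{'db_id': 'fund_company'}]
```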
Table \ref{tab:inter-chase-ours} shows the results.
First of all, it is clear that CHASE-C and CHASE-T are highly discrepant and incompatible. Using the whole of CHASE as training data leads to a performance drop on the CHASE-C dev set, compared with using only CHASE-C.
In other words, the extra CHASE-T introduces more noise than helpful information.
However, using the whole of CHASE increases performance on the CHASE-T dev set, compared with using only CHASE-T.
Second, using only SeSQL as training data achieves acceptable cross-dataset performance on the CHASE-C dev set, much higher than using CHASE-T as training data; the same trend holds for the CHASE-T dev set.
This indicates that SeSQL possesses a higher level of generalization ability.
Third, using both SeSQL and CHASE-C as training data leads to higher performances on CHASE-C dev set than using only CHASE-C.
Similarly, using both SeSQL and CHASE-T as training data leads to higher performance on the CHASE-T dev set than using only CHASE-T.
Such consistent improvement indicates that SeSQL is of higher quality and compatible with both CHASE-C and CHASE-T.
Finally, using either CHASE-C or CHASE-T as extra training data slightly increases QM and IM on the SeSQL dev set, compared with using only SeSQL.
We suspect this may be due to the increased data volume added by both datasets.
\textcolor{black}{Although SeSQL improves cross-dataset generalization, model generalization across different datasets is still weak, even when the datasets are built on the same DBs (e.g., SeSQL and CHASE-C). We believe SeSQL can facilitate research on text-to-SQL parsing, especially on the cross-dataset generalization of text-to-SQL models.}
\subsection{Analysis}
\label{ssec:exp-analysis}
According to the fine-grained annotation information, we report QM results on SeSQL's test set in Table \ref{tab:multi_fine_result}. There are three main findings that are applicable to all baseline models.
First, among all thematic transitions, no model performs well on \emph{Combining queries} (Com.) or on hybrids of other transitions. As described in Section \ref{sec:data-analysis}, these transitions usually result in complex query generation.
Second, as shown in the ``\emph{Context Dependency}'' column, QM performance on context-independent (Indep.) pairs is higher than on context-dependent pairs, i.e., the other four dependency types. \textcolor{black}{Furthermore, no baseline model performs well on questions that omit important historical information, i.e., those labeled ``ellipsis''. This shows that effectively using historical information remains challenging}.
Finally, due to the increasing difficulty of SQL generation, QM performance decreases as the round increases, \textcolor{black}{which is consistent with findings on other session-level datasets, e.g., SParC and CHASE}.
We then analyze the significance of the fine-grained annotations
by comparing different models. From Table \ref{tab:multi_fine_result}, three interesting findings verify the importance of fine-grained annotations in revealing the effectiveness of model components.
First, both IGSQL and EditSQL perform better than EX-RATSQL on non-first-round questions, as they refer to the previous-round SQL query when generating the current-round SQL query. As is well known, in the context-dependent setting, the historical questions and generated SQL queries are very important for current SQL generation.
Second, IGSQL outperforms EditSQL on all transition and dependency types; IGSQL incorporates a graph encoder into EditSQL to model DB schema items together with items mentioned in historical questions. Its performance on these fine-grained annotations verifies that the graph encoder effectively captures historical information from questions and DB schema items.
Third, EX-RATSQL, which only uses a relation-aware transformer to model the historical questions and DB schema items, performs best on the \emph{Tab.} transition type, in which the correlation between the historical rounds and the current round is weak.
Based on the above observations, we believe these annotations can help reveal the advantages and limitations of models and thus guide model improvement.
\section{Introduction}\label{sec:intro}
Text-to-SQL parsing aims to automatically transform natural language (NL) questions into SQL queries based on given databases (DBs) \cite{tang2001using}. As a key technology in an NL interface for relational DBs, it has attracted increasing attention from both the academic and industrial communities. Researchers have done much solid and interesting fundamental work on both dataset construction \cite{zhong2017seq2sql,yu2018spider} and parsing model innovation \cite{zhang2019editing,wang2020rat}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.48\textwidth]{figure/case_intro.pdf}
\caption{An example session from SeSQL.
}
\label{fig:case_intro}
\end{figure}
Previous studies mainly focus on single-round text-to-SQL parsing, where the input questions are context-independent.
Popular single-round datasets include WikiSQL \cite{zhong2017seq2sql} and Spider \cite{yu2018spider} for English, and
DuSQL \cite{wang-etal-2020-dusql} for Chinese.
However, in a real-world setting, it is usually difficult for users to meet their information needs via a single stand-alone question.
On the one hand, users usually have several related questions to ask at the same time, instead of a single one.
On the other hand, possibly due to unfamiliarity with the database or the system, users may need several trials before finding a suitable NL question.
Therefore, recent works go beyond single-round text-to-SQL parsing and start to tackle session-level text-to-SQL parsing \cite{yu2019sparc,cai2020igsql}, similar to the trend from single-round question answering (QA) to context-dependent QA \cite{bertomeu2006contextual}.
Figure \ref{fig:case_intro} shows a session-level example. Given a relational DB $D$, a user asks a sequence of questions, denoted by $Q = q_1, ..., q_{n}$, and the text-to-SQL engine produces a sequence of SQL queries,
denoted by $Y = y_1, ..., y_{n}$.
Questions in the same session are usually thematically related, and contextually dependent via ellipsis or co-reference as well \cite{bertomeu2006contextual}.
When generating $y_j$, the parser needs to not only look at $q_j$, but also heavily rely on the previous questions.
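To make the session-level setting concrete, the following is a minimal, hypothetical instance sketched in Python. The field names and the English example are illustrative assumptions for exposition only; they are not drawn from SeSQL (whose questions are in Chinese) or from any released schema.

```python
# Hypothetical session-level text-to-SQL instance: a sequence of questions
# q_1..q_n paired with SQL queries y_1..y_n over one database.
session = {
    "db_id": "concert_singer",  # hypothetical database identifier
    "turns": [
        {
            "question": "How many singers are there?",
            "sql": "SELECT COUNT(*) FROM singer",
            "context_dependent": False,
        },
        {
            # co-reference: "them" refers back to the singers of turn 1,
            # so this turn cannot be parsed without the preceding context
            "question": "Which of them are from France?",
            "completed_question": "Which singers are from France?",
            "sql": "SELECT name FROM singer WHERE country = 'France'",
            "context_dependent": True,
        },
    ],
}

# A parser producing y_j may condition on all previous questions q_1..q_j:
questions = [t["question"] for t in session["turns"]]
targets = [t["sql"] for t in session["turns"]]
assert len(questions) == len(targets)
```

The `completed_question` field illustrates the kind of context-independent rewriting that question-completion annotation provides.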
So far, previous researchers have constructed two session-level text-to-SQL datasets, i.e.,
SParC \cite{yu2019sparc} in English and CHASE \cite{guo2021chase} in Chinese.
SParC, containing 4,298 sessions and 12,726 question/SQL pairs, is built by extending the single-round Spider \cite{yu2018spider}.
As the first session-level Chinese dataset, CHASE contains 5,459 sessions and 17,940 question/SQL pairs \cite{guo2021chase}.
The major problem of CHASE is that it adopted a hybrid construction method.
Only 2,003 sessions are manually constructed from scratch (CHASE-C), whereas 3,456 correspond to a part of SParC after translating DBs and questions (CHASE-T).
As shown in our experiments, CHASE-C and CHASE-T are highly discrepant and incompatible as training and evaluation data, possibly due to culture and language gaps.
Moreover, only using CHASE-C may be insufficient to support model training.
This work presents \emph{SeSQL} (/\textprimstress seskju:l/), yet another large-scale session-level Chinese text-to-SQL dataset. SeSQL contains 5,028 sessions and 27,012 question/SQL pairs. All sessions are constructed manually from scratch.
This paper describes the construction methodology and process of SeSQL and presents detailed data analysis.
We summarize contributions of this work as follows.
\begin{enumerate}[label =(\arabic*),leftmargin=*]
\item SeSQL has three important features. First, based on several annotation trials, we adopt an iterative annotation workflow to encourage careful review of previous submissions, which we find is very useful for improving data quality.
Second, we design \emph{seven categories of thematic transition} to explicitly guide annotators in creating next-round SQL queries. Third, we follow CHASE and explicitly annotate the \emph{context-dependent types} of adjacent NL questions, such as ellipsis and co-reference.
\item We complete \textcolor{black}{17,704} context-dependent questions into corresponding context-independent ones, resulting in 27,012 context-independent questions. This leads to two advantages. On the one hand, SeSQL provides the largest
dataset for single-round multi-DB text-to-SQL parsing.
On the other hand, SeSQL can also support research on question completion techniques.
\item {\color{black}We conduct benchmark session-level experiments on SeSQL,} employing three competitive text-to-SQL models, i.e., EditSQL \cite{zhang2019editing}, IGSQL \cite{cai2020igsql}, and EX-RATSQL \cite{guo2021chase}.
\end{enumerate}
We will release SeSQL and the code for research usage at \emph{{http://xyz}}.
\section{Related Works}
\label{sec:relatedwork}
\textbf{Session-level text-to-SQL datasets.}
To date, there exist two representative session-level text-to-SQL datasets, i.e., English
SParC \cite{yu2019sparc} and Chinese CHASE \cite{guo2021chase}.
SParC reuses questions in the single-round dataset Spider \cite{yu2018spider} as guidance for annotators to create question sequences.
The basic idea is to transform an original Spider question into a sequence of simpler questions, with the goal of answering the original question.
As pointed out by \citet{guo2021chase}, this construction method leads to two biases: 1) high proportion of context-independent questions, and 2) high proportion of easy SQL queries.
As the first session-level Chinese dataset, CHASE is composed of two separate parts, i.e., CHASE-C and CHASE-T \cite{guo2021chase}.
For CHASE-C, they reuse 120 DBs from the single-round DuSQL \cite{wang-etal-2020-dusql}, and question/SQL pairs are created from scratch by 12 college students.
For CHASE-T, they reuse a part of English SParC and employ 11 college students to translate DBs and question sequences into Chinese.
However, as shown in our experiments, CHASE-C and CHASE-T exhibit different characteristics due to culture and language gaps.
Moreover, it is inevitable that CHASE-T
inherits the biases of SParC.
\textbf{Conversational text-to-SQL parsing} is a different task from session-level text-to-SQL parsing, and is also known as DB-based conversational QA.
CoSQL \cite{yu2019cosql} is an English dataset for this task.
Besides generating SQL queries, the model can ask NL questions to users
for clarifying ambiguities.
\textbf{Session-level text-to-SQL parsing approaches.}
Due to space limitations, we briefly introduce four representative approaches for session-level text-to-SQL parsing.
EditSQL \cite{zhang2019editing} generates a current-round SQL query by editing a previous-round query. Its encoder is designed to model interaction between the current-round question and all previous questions.
IGSQL \cite{cai2020igsql} extends EditSQL by introducing a graph encoder to model DB items together with those mentioned in questions.
\citet{hui2021dynamic} propose to jointly model the question sequence, DB items, and their interactions via a dynamic graph.
\citet{guo2021chase} propose extended RATSQL (EX-RATSQL), a session-level variant of RATSQL \cite{wang2020rat},
by simply concatenating all previous questions as inputs.
\section{A glance at classical rotating black holes}\label{secKerr}
Most astrophysically significant
bodies are rotating. If a rotating body collapses, its rotation rate
speeds up so as to maintain constant angular momentum. Through a rather complicated process, the body could eventually generate a \emph{rotating black hole} (\emph{RBH}). From a classical point of view (\textit{no-hair conjecture} \cite{gravit}), the resulting spacetime will be described by a Kerr solution (or a Kerr-Newman solution, in the charged case).
This implies that an idealized classical model of a lonely rotating body eventually generates an axially symmetric, stationary and asymptotically flat spacetime with certain horizons, a specific causal structure and a curvature singularity.
In order to compare these characteristics with those of regular RBHs, let us now briefly summarize them for the classical uncharged RBH solution. (The reader can consult, for example, \cite{Griff}\cite{gravit} and references therein for more information). In Boyer-Lindquist (B-L) coordinates $\{t,r,\theta,\phi\}$, the Kerr metric takes the form
\begin{equation}\label{gKerr}
ds^2=-\frac{\Delta}{\Sigma} (dt-a \sin^2\theta d\phi)^2+
\frac{\Sigma}{\Delta} dr^2+\Sigma d\theta^2+\frac{\sin^2\theta}{\Sigma}(a dt-(r^2+a^2)d\phi)^2,
\end{equation}
where
\[
\Sigma=r^2+a^2 \cos^2\theta, \hspace{1cm} \Delta=r^2-2 m r+a^2,
\]
$m$ is the black hole mass\footnote{In case the RBH is also charged, then it is described by using the Kerr-Newman solution in which $m$ should be replaced by $m-e^2/(2r)$, where $e$ is the total charge of the RBH.} and $a$ is a \textit{rotation parameter} that measures the (Komar) angular momentum per unit of mass \cite{gravit}. The spacetime is type D if $m\neq 0$.
If $m\neq 0$ there is a curvature singularity at $(r=0, \ \theta=\pi/2)$, as can be shown by the divergence of the curvature invariant $R_{\alpha\beta\gamma\delta} R^{\alpha\beta\gamma\delta}$.
Remarkably, for $a\neq 0$ and $\theta\neq \pi/2$, a surface defined by $t=$constant and $r=0$, the \textit{equatorial plane}, is singularity-free and has metric
\[
ds_2^2=a^2 \cos^2 \theta d\theta^2+a^2 \sin^2 \theta d\phi^2=dx^2+dy^2,
\]
where the coordinate change $x\equiv a \sin\theta \cos \phi$, $y\equiv a \sin\theta\sin\phi$ has been made to make explicit that the surface is flat.
The curvature singularity corresponds to the \textit{ring} $x^2+y^2=a^2$, while the equatorial plane corresponds to $x^2+y^2<a^2$.
In this way, curves that reach $r=0$ with $\theta\neq \pi/2$ arrive at a regular point. In order to continue these curves it is usually argued that an analytic extension of the spacetime has to be performed through $r=0$. The procedure requires letting the coordinate $r$ take negative values \cite{H&E}. The $r<0$ extended spacetime can be seen as a negative-mass spacetime. Causality violations occur in the extended spacetime \cite{Carter1968a}.
The metric has a coordinate singularity at $\Delta=0$, which can be easily removed by a coordinate change \cite{B&L}. At $\Delta=0$ the hypersurface $r=$constant becomes light-like and no observer can remain at that specific value of $r$; such a hypersurface is called a \textit{null horizon}. In the Kerr case, if $m^2>a^2$ there are two roots, $r^{Kerr}_\pm=m\pm \sqrt{m^2-a^2}$: the null horizon $r^{Kerr}_+$ is an event horizon, while the null horizon $r^{Kerr}_-$ is a Cauchy horizon. In the limiting case $m^2=a^2$ the null horizon is degenerate and the spacetime is called the \textit{extreme} Kerr black hole. If $m^2<a^2$ there are no roots of $\Delta=0$ and the curvature singularity is naked. This is the so-called \textit{hyperextreme} case.
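The classification of the Kerr horizons just summarized can be checked with a minimal numerical sketch. The values of $m$ and $a$ below are illustrative assumptions; the code verifies that $r^{Kerr}_\pm$ are roots of $\Delta=r^2-2mr+a^2$ and distinguishes the sub-extreme, extreme and hyperextreme cases.

```python
import math

def kerr_horizons(m, a):
    """Roots of Delta = r^2 - 2 m r + a^2; None in the hyperextreme case."""
    disc = m * m - a * a
    if disc < 0:
        return None                  # m^2 < a^2: no horizons, naked singularity
    root = math.sqrt(disc)
    return m - root, m + root        # Cauchy horizon r_-, event horizon r_+

def delta(r, m, a):
    return r * r - 2.0 * m * r + a * a

m, a = 1.0, 0.5                      # illustrative values with m^2 > a^2
r_minus, r_plus = kerr_horizons(m, a)
assert abs(delta(r_minus, m, a)) < 1e-12
assert abs(delta(r_plus, m, a)) < 1e-12
assert kerr_horizons(1.0, 1.0) == (1.0, 1.0)   # extreme: degenerate horizon
assert kerr_horizons(1.0, 1.5) is None          # hyperextreme: no roots
```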
\section{Kerr-like Rotating Black Holes}\label{KRBHs}
Several authors have suggested that the existence of singularities in the solutions of General Relativity has to be considered as a weakness of the theory rather than as a real physical prediction.
The problem of obtaining singularity-free models for black holes was first approached for spherically symmetric black holes. In this context, some authors introduced non-standard energy-momentum tensors mainly acting in the core of the black hole (see, for example, \cite{A-BI}\cite{A-BII}\cite{B&V}\cite{Bardeen}). However, most authors expect that the inclusion of quantum theory in the description of black holes could avoid the existence of their singularities (see, for example, \cite{A&B2005}\cite{B&R}\cite{Frolov2014}\cite{G&P2014}\cite{Hay2006}\cite{H&R2014}\cite{dust2014} and references therein).
We do not yet have a mature and reliable candidate for a quantum theory of gravity, so it is difficult to accurately describe even non-rotating (quantum) black holes. On the other hand, a glimpse back into classical black hole history suggests that finding an accurate description of a (quantum) rotating black hole could be much more difficult: the Kerr solution was discovered only after 48 years of struggle following the formulation of the Einstein field equations. There are some works on approximate solutions, valid only in the slow-rotation limit \cite{P&C}\cite{Y&Y}. Unfortunately, they are not accurate enough to be used in astrophysical observations.
In this way, it is necessary to try phenomenological approaches in order to explore possible models of regular RBHs and their implications, including the possibility of observable astrophysical predictions.
Even if a regular RBH model comes from an approach to Quantum Gravity Theory, we will assume in this chapter that it can be reasonably well described by a manifold endowed with its corresponding metric. Nevertheless, it should be taken into account that, in the absence of a full Quantum Gravity Theory, one can probably only guarantee this to be a good description of the RBH up to the high-curvature Planckian regime.
Recently, different proposals have appeared
for \emph{regular} rotating black hole spacetimes with their corresponding metrics (see section \ref{secObt}).
While they have been obtained through different approaches, most of them share a common \textit{Kerr-like form}.
The general metric corresponding to this kind of RBH was found by G\"{u}rses-G\"{u}rsey \cite{GG} as a particular rotating case of the algebraically special Kerr-Schild metric:
\begin{equation}\label{GGg}
ds^2=(\eta_{\alpha \beta} +2 H k_\alpha k_\beta) dx^\alpha dx^\beta,
\end{equation}
where $\eta$ is the Minkowski metric, $H$ is a scalar function and $\vec k$ is a vector that is light-like both with respect to the spacetime metric and to Minkowski's metric.
Specifically, in Kerr-Schild coordinates $\{\tilde{t},x,y,z\}$ the G\"{u}rses-G\"{u}rsey metric (\ref{GGg}) corresponds to the choices
\[
H=\frac{\mathcal M (r) r^3}{r^4+a^2 z^2}
\]
and
\[
k_\alpha dx^\alpha =-\frac{r (x dx+y dy) -a (x dy-y dx)}{r^2+a^2}-\frac{z dz}{r}-d\tilde{t},
\]
where $r$ is a function of the Kerr-Schild coordinates implicitly defined by
\begin{equation}\label{defr}
r^4-r^2 (x^2+y^2+z^2-a^2) -a^2 z^2 =0,
\end{equation}
$\mathcal M (r) $ is known as the \textit{mass function}
and the constant $a$ is a rotation parameter.
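Note that (\ref{defr}) is a quadratic in $r^2$, so $r$ can be recovered in closed form from the Kerr-Schild coordinates by taking the non-negative branch, $r^2=\tfrac{1}{2}\big[(x^2+y^2+z^2-a^2)+\sqrt{(x^2+y^2+z^2-a^2)^2+4a^2z^2}\big]$. The following short Python check (with illustrative coordinate values) confirms this, including the axis case, where (\ref{defr}) reduces to $r=|z|$.

```python
import math

def r_of_xyz(x, y, z, a):
    """Non-negative root r of r^4 - r^2 (x^2+y^2+z^2 - a^2) - a^2 z^2 = 0,
    obtained by solving the quadratic in r^2 (the '+' branch keeps r^2 >= 0)."""
    rho2 = x * x + y * y + z * z
    b = rho2 - a * a
    r2 = 0.5 * (b + math.sqrt(b * b + 4.0 * a * a * z * z))
    return math.sqrt(r2)

# sanity checks with illustrative coordinates
a = 0.7
x, y, z = 1.3, -0.4, 0.9
r = r_of_xyz(x, y, z, a)
residual = r**4 - r**2 * (x*x + y*y + z*z - a*a) - a*a * z*z
assert abs(residual) < 1e-10
# on the z axis (x = y = 0) the defining equation gives r = |z|
assert abs(r_of_xyz(0.0, 0.0, 2.0, a) - 2.0) < 1e-9
```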
This metric can be written in Boyer-Lindquist-like coordinates by using the coordinate change defined by
\begin{eqnarray*}
x+i y&=&(r+i a) \sin\theta\exp\left[i\int(d\phi+\frac{a}{\Delta}dr)\right]\\
z&=&r \cos\theta\\
\tilde{t}&=&t+\int \frac{r^2+a^2}{\Delta} dr-r,
\end{eqnarray*}
where now $\Delta=r^2-2 \mathcal M(r) r+a^2$.
The resulting metric takes the form
\begin{equation}\label{gIKerr}
ds^2=-\frac{\Delta}{\Sigma} (dt-a \sin^2\theta d\phi)^2+
\frac{\Sigma}{\Delta} dr^2+\Sigma d\theta^2+\frac{\sin^2\theta}{\Sigma}(a dt-(r^2+a^2)d\phi)^2,
\end{equation}
where, again, $\Sigma=r^2+a^2 \cos^2\theta$.
Note that this metric reduces to Kerr's solution in B-L coordinates if $\mathcal M(r)=m$=constant and that it reduces to the (charged) Kerr-Newman solution if $\mathcal M(r)=m-e^2/(2r)$, where $e$ is the charge.
In order to analyze the general properties of the RBH spacetime
we will use the following null tetrad-frame:
\begin{eqnarray*}
\mathbf{l} &=&\frac{1}{\Delta} \left( (r^2+a^2) \frac{\partial}{\partial t}+\Delta \frac{\partial}{\partial r}+a \frac{\partial}{\partial \phi}\right),\\
\mathbf k &=&\frac{1}{2 \Sigma} \left( (r^2+a^2) \frac{\partial}{\partial t}-\Delta \frac{\partial}{\partial r}+a \frac{\partial}{\partial \phi}\right),\\
\mathbf m &=& \frac{1}{ \sqrt{2} \varrho } \left(i a \sin\theta \frac{\partial}{\partial t}+\frac{\partial}{\partial \theta}+i \csc\theta \frac{\partial}{\partial \phi} \right),\\
\mathbf{\bar m} &=& \frac{1}{ \sqrt{2} \bar\varrho } \left(-i a \sin\theta \frac{\partial}{\partial t}+\frac{\partial}{\partial \theta}-i \csc\theta \frac{\partial}{\partial \phi} \right),
\end{eqnarray*}
where $\varrho\equiv r+i a \cos\theta$, $\bar\varrho\equiv r-i a \cos\theta$ and the tetrad is normalized as follows $\mathbf l^2=\mathbf k^2=\mathbf m^2=\mathbf{\bar m}^2=0$ and $\mathbf l\cdot \mathbf k=-1= -\mathbf{m}\cdot \mathbf{\bar m}$.
\begin{prop}\label{PTD}\cite{TorresReg}
The RBH metric (\ref{gIKerr}) is Petrov type D and the two double principal null directions are $\mathbf l$ and $\mathbf k$.
\end{prop}
We can also define a real orthonormal basis $\{\mathbf{t}, \mathbf{x}, \mathbf{y}, \mathbf{z } \}$
formed by a timelike vector $\mathbf{t}\equiv (\mathbf{l}+\mathbf{k})/\sqrt{2}$ and three spacelike vectors: $\mathbf{z}\equiv (\mathbf{l}-\mathbf{k})/\sqrt{2}$, $\mathbf x=(\mathbf m +\bar{\mathbf m})/\sqrt{2}$ and $\mathbf y=(\mathbf m -\bar{\mathbf m}) i/\sqrt{2}$. Then,
$\mathbf t$ and $\mathbf z$ are two eigenvectors of the Ricci tensor with eigenvalue \cite{TorresReg}
\begin{equation}\label{lambda1}
\lambda_1=\frac{2 a^2 \cos^2{\theta} \mathcal M'+r \Sigma \mathcal M''}{\Sigma^2}.
\end{equation}
$\mathbf x$ and $\mathbf y$ are two eigenvectors of the Ricci tensor with eigenvalue
\begin{equation}\label{lambda2}
\lambda_2=\frac{2 r^2 \mathcal M'}{\Sigma^2}.
\end{equation}
In this way, the Ricci tensor can be written as
\begin{equation}\label{Ricci}
R_{\mu\nu}= \lambda_1\, (-t_\mu t_\nu+z_\mu z_\nu)+ \lambda_2 (x_\mu x_\nu+y_\mu y_\nu),
\end{equation}
which proves the following
\begin{prop}\cite{TorresReg}
The metric (\ref{gIKerr}) with $\mathcal M\neq$constant is Segre type [(1,1) (1 1)].
\end{prop}
Note that the $\mathcal M\neq$constant case is precisely the case we are interested in for our regular RBHs, since the $\mathcal M=$constant case (i.e., Kerr's solution) is singular.
\section{Regularity in Kerr-like Rotating Black Holes }\label{secRegu}
In order for the model of a RBH to be regular, it should be devoid of curvature singularities. Let us now specifically analyze the absence of \emph{scalar} curvature singularities. We say that there is a \textit{scalar curvature singularity}
in the spacetime if some scalar polynomial invariant in the Riemann tensor diverges when approached along an incomplete curve.
It is well-known \cite{Weinberg} that an arbitrary spacetime possesses at most 14 second order algebraically independent invariants. The finiteness of \emph{all} the invariants is a necessary and sufficient condition for the absence of scalar curvature singularities.
A minimal set of reliable independent invariants for the RBH spacetime exists. This can be shown thanks to the following result by Zakhary and McIntosh \cite{ZM}:
\begin{prop}
The algebraically complete set of second order invariants for a Petrov type D spacetime and Segre type [(1,1) (1 1)] is $\{\mathcal R,I,I_6,K\}$.
\end{prop}
Apart from the well-known curvature scalar $\mathcal R$, the rest of the invariants are defined as\footnote{Here the invariants are written in tensorial form. See \cite{ZM} for their spinorial form.}
\begin{eqnarray*}
I_6&\equiv&
\frac{1}{12}
{S_\alpha}^\beta {S_\beta}^\alpha,\\
I &\equiv&
\frac{1}{24}
\bar{C}_{\alpha\beta\gamma\delta}
\bar{C}^{\alpha\beta\gamma\delta},\\
K &\equiv&
\frac{1}{4}
\bar{C}_{\alpha\gamma\delta\beta}
S^{\gamma\delta} S^{\alpha\beta},
\end{eqnarray*}
where ${S_\alpha}^\beta \equiv {R_\alpha}^\beta-
{\delta_\alpha}^\beta \mathcal{R}/4$ and
$\bar{C}_{\alpha\beta\gamma\delta}\equiv
(C_{\alpha\beta\gamma\delta} +i\ *C_{\alpha\beta\gamma\delta})/2$
is the complex conjugate of the self-dual Weyl tensor, where $*C_{\alpha\beta\gamma\delta}\equiv
\epsilon_{\alpha\beta\mu\nu} C^{\mu\nu}_{\ \ \gamma\delta}/2$ is the dual of the Weyl tensor.
Note that $\mathcal R$ and $I_6$ are real, while $I$ and $K$ are complex. Therefore, for this type of spacetime there are only 6 independent real scalars.
It trivially follows from our previous propositions
\begin{corollary}\cite{TorresReg}
The algebraically complete set of second order invariants for the RBH metric (\ref{gIKerr}) is $\{\mathcal R,I,I_6,K\}$.
\end{corollary}
Similarly to Kerr's case, a straightforward inspection of the metric (\ref{gIKerr}) tells us that it is singular at the values of $r$ where $\Delta=0$ and at $\Sigma=0$. However, $\Delta=0$ is not a scalar curvature singularity, since the curvature scalars do not diverge at the values of $r$ ($\neq 0$) where $\Delta=0$.
It is simply a coordinate singularity that can be removed through a coordinate change. (See section \ref{Horizons}).
Scalar curvature singularities may appear if $\Sigma=0$ or, in other words, at $(r=0,\theta=\pi/2)$. (We already confirmed this possibility in section \ref{secKerr} for the particular case of Kerr's solution).
Now, by explicitly computing the complete set of scalars in our case, one directly gets a necessary and sufficient condition for the absence of scalar curvature singularities:
\begin{theorem}\label{teorema}\cite{TorresReg}
Assuming a RBH metric (\ref{gIKerr}) possessing a $C^3$ function $\mathcal M(r)$, all its second order curvature invariants will be finite at $(r=0,\theta=\pi/2)$ if, and only if,
\begin{equation}\label{condisreg}
\mathcal M (0)= \mathcal M' (0)= \mathcal M'' (0)=0 .
\end{equation}
\end{theorem}
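As a consistency check of the theorem, note that tracing (\ref{Ricci}) with the eigenvalues (\ref{lambda1}) and (\ref{lambda2}) gives the curvature scalar $\mathcal R=2(\lambda_1+\lambda_2)=2(2\mathcal M'+r\mathcal M'')/\Sigma$, which at $\theta=\pi/2$ ($\Sigma=r^2$) stays finite as $r\to 0$ precisely when (\ref{condisreg}) holds. The Python sketch below evaluates $\mathcal R$ for a Hayward-like mass function $\mathcal M(r)=mr^3/(r^3+g^3)$, an illustrative choice satisfying (\ref{condisreg}); for it one finds the finite de Sitter-like core value $\mathcal R\to 24m/g^3$.

```python
# Curvature scalar at theta = pi/2 (where Sigma = r^2), obtained by tracing
# the Ricci tensor: R = 2 (2 M'(r) + r M''(r)) / Sigma.  Its finiteness as
# r -> 0 hinges on M(0) = M'(0) = M''(0) = 0.
def R_equatorial(Mp, Mpp, r):
    """R at theta = pi/2, given the derivatives M' and M'' of the mass function."""
    return 2.0 * (2.0 * Mp(r) + r * Mpp(r)) / (r * r)

m, g = 1.0, 0.3                      # illustrative parameters
g3 = g ** 3
# Hayward-like mass function M(r) = m r^3 / (r^3 + g^3) and its derivatives
Mp  = lambda r: 3.0 * m * g3 * r**2 / (r**3 + g3) ** 2
Mpp = lambda r: 6.0 * m * g3 * r * (g3 - 2.0 * r**3) / (r**3 + g3) ** 3

R_small = R_equatorial(Mp, Mpp, 1e-4)
# finite de Sitter-like core value 24 m / g^3 instead of a divergence
assert abs(R_small - 24.0 * m / g3) / (24.0 * m / g3) < 1e-3

# Kerr (M = m constant) has M' = M'' = 0, hence R = 0; its singularity shows
# up in other invariants (e.g. the Kretschmann scalar), not in R.
assert R_equatorial(lambda r: 0.0, lambda r: 0.0, 1e-4) == 0.0
```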
The absence of curvature singularities is a necessary condition in order to have a regular rotating black hole. The theorem allows one to control the specific case of \emph{scalar curvature singularities}, which are arguably the most serious type of curvature singularity. However, since scalar polynomials do not fully characterize the Riemann tensor, it does not cover the possibility of curvature singularities with respect to a parallelly propagated basis (\textit{p.p.\ curvature singularities}) \cite{H&E}. This possibility has not yet been fully analyzed in the literature.
\section{Violation of the energy conditions}\label{ecs}
The energy conditions were first developed in the framework of Einstein's General Relativity. These are conditions imposed on the energy-momentum tensor of the spacetime as a means of ensuring plausible matter-energy contents \cite{H&E}. Even if here we are not confined to General Relativity, we can take advantage of the energy conditions by considering the existence of an \textit{effective energy-momentum tensor} defined through
\[
T_{\mu\nu}\equiv R_{\mu\nu}-\frac{1}{2} \mathcal R g_{\mu\nu}.
\]
In our more general context, it is usually argued that it seems reasonable to demand the spacetime describing a realistic isolated RBH to fulfill the standard energy conditions in asymptotically flat regions (thus imitating the classical RBH solutions at large distances/low curvatures). Nevertheless, it would probably be more accurate to say that one should expect extremely small violations of the energy conditions in the asymptotically flat regions. This is due to the fact that, as pointed out by Donoghue \cite{Dono}, the standard perturbative quantization of Einstein gravity leads to a well-defined, finite prediction for the leading large-distance correction to Newton's potential. Specifically, it is shown that quantum effects produce deviations in the gravitational field of spherically symmetric sources of order $G m l_p^2/r^3$ whenever $r\gg 2 m$, where $l_p$ is Planck's length. This implies an extremely small reduction of the classically expected (negative) gravitational field. In the weak-field approximation this leads to an effective energy-momentum tensor that (\textit{slightly}) violates the dominant energy conditions \cite{B&R}\cite{TorresVoids}.
Let us now analyze the behaviour of the energy conditions in the region around $r=0$ for regular RBHs. Taking the expression obtained for the Ricci tensor (\ref{Ricci}), one can write $\mathbf{T}$ for a RBH explicitly as
\[
T_{\mu\nu}=-\lambda_2 (-t_\mu t_\nu + z_\mu z_\nu)- \lambda_1 (x_\mu x_\nu+ y_\mu y_\nu).
\]
Since $\mathbf{T}$ diagonalizes in the orthonormal basis $\{\mathbf{t}, \mathbf{x}, \mathbf{y}, \mathbf{z } \}$, the RBH spacetime possesses an (effective) energy-momentum tensor of type I \cite{H&E}, with (effective) density $\mu=\lambda_2$ and (effective) pressures $p_x=p_y=-\lambda_1$ and $p_z=-\lambda_2$. The weak energy conditions \cite{H&E} require $\mu\geq 0$ and $\mu+p_i\geq 0$. In other words, in this case they require
\[
\lambda_2 \geq 0 \hspace{1 cm} \mbox{and} \hspace{1 cm} \lambda_2-\lambda_1\geq 0.
\]
By using this and expressions (\ref{lambda1}) and (\ref{lambda2}) it is easy to show the following
\begin{prop}\cite{TorresReg}
Assume that a \emph{regular} RBH has a mass function $\mathcal M (r)$ that can be approximated by a Taylor polynomial around $r=0$; then the weak energy conditions must be violated around $r=0$.
\end{prop}
Note that for this type I effective energy-momentum the violation of the weak energy condition also implies the violation of the \textit{dominant} and the \textit{strong} energy conditions.
In this way, no model with \textit{normal} matter (matter satisfying the energy conditions) can produce a \emph{regular} rotating black hole of the type (\ref{gIKerr}).
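A quick numerical illustration of this proposition: take $\mathcal M(r)=cr^3$ near $r=0$, the lowest-order Taylor behaviour compatible with (\ref{condisreg}) (the constants below are illustrative assumptions). One then finds $\mu=\lambda_2\geq 0$, but $\mu+p_x=\lambda_2-\lambda_1<0$ off the equatorial plane, so the WEC is violated.

```python
import math

# Ricci eigenvalues lambda_1, lambda_2 of the RBH metric, evaluated from the
# expressions for them in terms of M'(r), M''(r) and Sigma.
def lambdas(r, theta, a, Mp, Mpp):
    sigma = r * r + a * a * math.cos(theta) ** 2
    l1 = (2 * a**2 * math.cos(theta)**2 * Mp(r) + r * sigma * Mpp(r)) / sigma**2
    l2 = 2 * r**2 * Mp(r) / sigma**2
    return l1, l2

c, a = 1.0, 0.5                      # illustrative constants
Mp  = lambda r: 3.0 * c * r**2       # M'(r) for M(r) = c r^3
Mpp = lambda r: 6.0 * c * r          # M''(r)

# small r, off the equatorial plane (theta != pi/2)
l1, l2 = lambdas(0.01, 0.3, a, Mp, Mpp)
assert l2 >= 0.0                     # effective density mu = lambda_2 >= 0 ...
assert l2 - l1 < 0.0                 # ... but mu + p_x < 0: the WEC is violated

# on the equatorial plane the combination vanishes at this order:
# lambda_2 - lambda_1 = -12 c a^2 cos^2(theta) r^2 / Sigma^2
l1_eq, l2_eq = lambdas(0.01, math.pi / 2, a, Mp, Mpp)
assert abs(l2_eq - l1_eq) < 1e-12
```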
However, the violation of the WEC around $r=0$ is not problematic, since it is well known that quantum effects can violate the WEC (e.g., the Casimir effect). Moreover, singularity theorems require the spacetime to fulfill some energy condition in order to predict the existence of singularities. In this sense, the violation of energy conditions actually \textit{helps} to avoid the existence of singularities\footnote{Of course, regularity can also be obtained by violating other assumptions in the singularity theorems.}.
\section{Extensions beyond $r=0$}\label{secr0}
As stated in section \ref{secKerr}, for Kerr's solution
one could consider the possibility of extending the spacetime through the equatorial plane.
Now, in order to analyze the general situation for regular RBHs with metric (\ref{GGg}), let us proceed with an analysis similar to the one usually carried out for the classical RBH case. Consider the representative metric component
\begin{equation}\label{gtt}
g_{tt}=-1+ \frac{2 \mathcal M (r) r^3}{r^4+a^2 z^2}.
\end{equation}
Let us imagine an observer crossing $r=0$ moving along the $z$ \textit{axis} ($x=y=0$).
If we persist in considering $r$ as non-negative, then (\ref{defr}) implies that $r=|z|$ along the trajectory of the observer, so that along it
\begin{equation}\label{gttz}
g_{tt}=-1+ \frac{2 \mathcal M (|z|) |z|}{z^2+a^2}.
\end{equation}
The numerator in the fraction indicates that the derivative of this metric component along the axis, and hence the Christoffel symbols and the extrinsic curvature of the surface, can be discontinuous across the equatorial plane depending on the chosen mass function. In particular, this discontinuity occurs if the mass function is constant (Kerr's case).
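This discontinuity, and its absence for a suitable regular mass function, can be seen with a one-sided finite-difference check of $g_{tt}$ along the axis. The Hayward-like mass function and the parameter values below are illustrative assumptions.

```python
def g_tt_axis(z, a, M):
    """g_tt along the z axis (x = y = 0), where the defining equation for r
    gives r = |z|:  g_tt = -1 + 2 M(|z|) |z| / (z^2 + a^2)."""
    r = abs(z)
    return -1.0 + 2.0 * M(r) * r / (z * z + a * a)

def one_sided_derivs(f, h=1e-6):
    """Right and left derivatives of f at z = 0 by one-sided differences."""
    return (f(h) - f(0.0)) / h, (f(0.0) - f(-h)) / h

a, m, g3 = 0.5, 1.0, 0.027           # illustrative parameters (g3 = g^3)

# Kerr (M = m constant): the slope of g_tt jumps across the equatorial plane,
# from +2m/a^2 on one side to -2m/a^2 on the other.
dp, dm = one_sided_derivs(lambda z: g_tt_axis(z, a, lambda r: m))
assert abs(dp - 2 * m / a**2) < 1e-4 and abs(dm + 2 * m / a**2) < 1e-4

# Regular, Hayward-like M(r) = m r^3/(r^3 + g^3): g_tt ~ -1 + O(z^4),
# so the derivative is continuous (and zero) at z = 0.
dplus, dminus = one_sided_derivs(
    lambda z: g_tt_axis(z, a, lambda r: m * r**3 / (r**3 + g3)))
assert abs(dplus) < 1e-6 and abs(dminus) < 1e-6
```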
Historically, the differentiability problems in Kerr's RBH have been approached in two ways:
\begin{itemize}
\item[a)] By analytically extending the spacetime through $r=0$ with negative values for $r$. This requires considering two spacetimes, one with positive $r$ and another with negative $r$, and properly identifying points in their $r=0$ surfaces by a standard procedure which is illustrated in figure \ref{extensionr0} (see, for example, \cite{H&E})\footnote{Note also that this approach has been criticized in \cite{GC&M}.}.
\item[b)] By considering the discontinuity of the derivatives of the metric components in the equatorial plane and, thus, the discontinuities in the second fundamental form, as indicating the presence of a thin shell in the surface \cite{IsraelThin}.
\end{itemize}
\begin{figure}[ht]
\includegraphics[scale=1.1]{extensionr0.pdf}
\caption{\label{extensionr0} In Kerr's case and approach a), the extension through $r=0$ is obtained by identifying the top of the surface ($r=0, t=$constant) in the hypersurface described by coordinates $\{x,y,z\}$ with the bottom of the surface ($r=0, t=$constant) in the hypersurface described by coordinates $\{x',y',z'\}$, and vice versa. Only the $y=0$, $y'=0$ sections of these hypersurfaces are represented here.}
\end{figure}
It was soon noticed that following approach a), i.e., extending through $r=0$ with negative values of $r$, can produce closed causal curves and, thus, causality problems \cite{H&E}. This will be treated, for the general case of RBHs, in section \ref{secCaus}.
With regard to approach b), it not only leads to the concentration of mass-energy in an infinitesimally thin surface, but it also requires its matter to move faster than light \cite{Israel}\cite{Hamity}.
At first sight, the situation for general regular RBHs looks much better. Assuming that the regular RBH has a mass function $\mathcal M (r) \sim r^n$ with $n\geq 3$ around $r=0$, the metric component along the trajectory (\ref{gttz}) has no differentiability problems at $z=0$ ($\partial_z g_{tt}(z=0)=0$) (and, in fact, it is at least $C^n$). This suggests that the extension through the equatorial plane might not be necessary for regular RBHs.
In order to show this, one has to go beyond a particular trajectory intersecting the equatorial plane and beyond the analysis of a single metric component. Let us start by noticing that, while approaching a point in the equatorial plane ($x^2+y^2<a^2$), according to (\ref{defr}) the function $r$ approaches zero whenever $z$ approaches zero and vice versa. If we insist on having a positive $r$, we get, solving for $r$ in (\ref{defr}), that around $z=0$
\[
r \simeq \frac{|a|}{\sqrt{a^2-(x^2+y^2)}} |z|.
\]
Introducing this into the metric component (\ref{gtt}) and considering a mass function $\mathcal M (r) \sim r^n$ with $n\geq 3$ around $r=0$, we see that the metric component takes the form
\[
g_{tt} \simeq -1+\frac{f(x,y) |z|^{n+1}}{g(x,y) z^2+a^2},
\]
where $f$ and $g$ are finite differentiable functions in the equatorial plane. In this way, $g_{tt}$ is differentiable at the equatorial plane. (In particular, again ($\partial_z g_{tt}(z=0)=0$)).
The reader can check that the same situation is found for the rest of the metric components. Let us only remark that the metric will not be analytic at the equatorial plane: not all metric components will be infinitely differentiable. For example, even if the particular metric component (\ref{gtt}) is $C^\infty$ for odd $n$, other metric components, such as
\[
g_{tz}=\frac{2\mathcal{ M}(r) r^2 z (a y+x r)}{(a^2+r^2)(a^2 z^2+r^4)}
\]
are not. Nevertheless, such a degree of differentiability is not required at all\footnote{Usually the metric is required to be at least $C^2$ \cite{H&E}. However, some authors consider this degree of differentiability too restrictive.}. In this way, regular RBHs do not have differentiability problems and an extension through $r=0$ is not needed\footnote{Let us comment that, even if not mathematically needed, the possibility of extending through $r=0$ with negative values of $r$ exists, in principle, for all regular RBHs.}. An observer could cross $r=0$ while remaining in the ($r\geq 0$) spacetime (see figure \ref{caseB}). Furthermore, most of the problems listed above for Kerr's RBH would be nonexistent.
\begin{figure}[ht]
\includegraphics[scale=1]{caseB.pdf}
\caption{\label{caseB} For regular RBH no extension through $r=0$ is required. An observer crossing the surface ($r=0,t=$constant) from positive $z$ to negative $z$ following a non-geodesic time-like curve can stay in its original spacetime. Along the trajectory of the observer $r$ just decreases until reaching the surface $r=0$, where it increases again.}
\end{figure}
\section{Maximal extensions, null horizons and global structure}\label{Horizons}
Metric (\ref{gIKerr}) in Boyer-Lindquist-like coordinates has a coordinate singularity at $\Delta=0$ that can be eliminated through a coordinate change in order to obtain the maximally extended spacetime. The procedure is similar to the one usually carried out for Kerr's solution \cite{B&L}. For example, one can perform a coordinate change from B-L-like coordinates $\{t,r,\theta,\phi\}$ to advanced Eddington-Finkelstein-like or \textit{Kerr-like coordinates} $\{u,r,\theta,\varphi\}$, where $u$ is a light-like coordinate, through\footnote{By means of this kind of coordinate change (advanced and retarded) the \emph{maximal} extension is obtained by following the procedure in \cite{B&L}.}
\begin{equation}\label{uphi}
u\equiv t+ \int \frac{r^2+a^2}{\Delta} dr \hspace{1cm};\hspace{1cm} \varphi=\phi+\int \frac{a}{\Delta} dr.
\end{equation}
In these Kerr-like coordinates the metric takes the form
\begin{eqnarray*}
ds^2&=&-\left(1-\frac{2 \mathcal M(r) r}{\Sigma} \right) du^2+2 dudr+\Sigma d\theta^2-2a \sin^2\theta dr d\varphi\\
&+&\left( r^2+a^2+\frac{2 a^2 \mathcal M(r) r \sin^2\theta}{\Sigma} \right) d\varphi^2-\frac{4 \mathcal M (r) r a}{\Sigma} \sin^2\theta du d\varphi
\end{eqnarray*}
and the problems with $\Delta=0$ disappear.
The causal character of the $r=$constant hypersurfaces is defined by the sign of $g^{rr}=\Delta/\Sigma$. Since $\Sigma>0$ (except at the ring $r=0,\theta=\pi/2$), the causal character of the $r=$constant hypersurfaces depends on the sign of $\Delta$.
In particular, a hypersurface with $\Delta=0$ is light-like, so that observers cannot remain at that constant value of $r$; such hypersurfaces are called \textit{null horizons}.
We have already treated the null horizons in Kerr's solution in section \ref{secKerr}.
Now, in order to get the null horizons in the general RBH case we should solve
\begin{equation}\label{eqdelta}
\Delta=r^2-2 \mathcal M(r) r+a^2=0.
\end{equation}
Without the knowledge of a specific $\mathcal M(r)$ it is not possible to know the \emph{exact} position of the horizons. Nevertheless, one can analyze the general behaviour of the horizons by taking into account the following considerations:
\begin{itemize}
\item If we assume an asymptotically flat spacetime, at large distances $\mathcal M(r)\simeq m=$constant, so that one (approximately) recovers the behaviour of the Kerr solution. Then $\Delta>0$ and $r$ will be a spacelike coordinate.
\item For $r\simeq 0$ ($a\neq 0$) a regular RBH has $\Delta>0$ thanks to the effect of the rotation and, again, $r$ will be a spacelike coordinate. (Note that this already happens in the classical Kerr solution).
\item If we assume the existence of a RBH and, thus, the existence of an exterior horizon $r_+$ (solution of $\Delta=0$) then the continuity of $\Delta$ and the two previous items imply either a single horizon (\textit{extreme} RBH), two horizons $r_-$ and $r_+\ (>r_-) $ or, in general, an even number of horizons.
\item If no solutions of (\ref{eqdelta}) exist, then no null horizons exist and we are in a \textit{hyperextreme} case. A regular rotating astrophysical object without an event horizon is not properly a black hole. Regularity implies that, contrary to the classical case, there is no naked singularity.
\end{itemize}
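Since $\mathcal M(r)$ is model dependent, equation (\ref{eqdelta}) must in general be solved numerically. The following sketch illustrates this; the Hayward-type mass function $\mathcal M(r)=m\,r^3/(r^3+\ell^3)$ used here is an illustrative assumption, not a choice made in the text.

```python
import numpy as np

def Delta(r, m=1.0, a=0.9, l=0.3):
    """Horizon function Delta = r^2 - 2 M(r) r + a^2 for an
    illustrative Hayward-type mass function M(r) = m r^3/(r^3 + l^3)."""
    M = m * r**3 / (r**3 + l**3)
    return r**2 - 2.0 * M * r + a**2

def horizons(m=1.0, a=0.9, l=0.3, rmax=4.0, n=4000):
    """Locate the roots of Delta on r > 0: scan for sign changes,
    then refine each bracketed root by bisection."""
    rs = np.linspace(1e-6, rmax, n)
    vals = Delta(rs, m, a, l)
    roots = []
    for i in range(n - 1):
        if vals[i] * vals[i + 1] < 0:          # bracketed root
            lo, hi = rs[i], rs[i + 1]
            for _ in range(60):                 # bisection refinement
                mid = 0.5 * (lo + hi)
                if Delta(lo, m, a, l) * Delta(mid, m, a, l) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return roots

print(horizons())          # two roots r_- < r_+: non-extreme RBH
print(horizons(a=1.1))     # empty list: hyperextreme case, no null horizons
```

Consistent with the discussion above, the root count is even (here two) in the non-extreme case, and no null horizons appear once $a$ is large enough.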
In practice, the usual regular RBH in the literature has one or two null horizons, as in the classical case. This is not surprising if one considers deviations from General Relativity as coming from Quantum Gravity effects. Then, based on a simple dimensional analysis, one could expect the Planck scale to be the most natural scale at which the departure from General Relativity occurs, which would imply strong deviations from the classical solution only around $r\sim r_{Planck}$ and, thus, only small corrections to the horizons (at least for RBHs with masses much larger than the Planck mass). One also expects that associated with non-singular RBHs there would be a \textit{weakening of gravity}, an effect which should be very important at high curvature scales. In this way, compared with the classical case, one usually obtains bigger inner horizons and smaller outer horizons.
Of course, the Planck scale approach could turn out to be too naive and bigger deviations from the classical solutions could be possible, which would be good news for the observational aspects of RBHs (see section \ref{secPheno}).
Nevertheless, in order to illustrate the global causal structure of regular RBH let us follow the approach of small perturbations with respect to the classical horizons.
We will compare this regular RBH causal structure with the usual one for Kerr's RBH, where we extend the spacetime through $r=0$ into negative values for $r$. There are three possible qualitatively different causal structures for Kerr's RBH spacetime which are represented in the Penrose diagrams of figure \ref{Pbiggera} (for the case with two null horizons) and of figure \ref{PHE} (for the \textit{extreme} case and the \textit{hyperextreme} case).
\begin{figure}[htp]
\includegraphics[scale=.7]{Penrosebiggera2.pdf}
\caption{\label{Pbiggera} Penrose diagram for Kerr's RBH with two horizons. The spacetime has been extended through $r=0$ to asymptotically flat regions with negative values for $r$ (IV or IV'). The grey regions are the regions where the coordinate $r$ is timelike. Starting from the asymptotically flat region I, one could enter region II by traversing the \emph{event horizon} $r_+$. Region III could next be reached by traversing the \emph{Cauchy horizon} $r_-$. Then, the asymptotically flat region IV could be reached by passing through the regular $r=0$. Note that the diagram is valid for $\theta\neq \pi/2$. The diagram with $\theta= \pi/2$ would require drawing the ring singularity.}
\end{figure}
\begin{figure}[htp]
\includegraphics[scale=.7]{PHE.pdf}
\caption{\label{PHE} Penrose diagrams for Kerr's extreme rotating black hole (to the left) and for the hyperextreme case (to the right). In the extreme case there is only one horizon denoted by $r_\pm$ in which the coordinate $r$ is lightlike. $r$ is never timelike. $r_\pm$ acts both as an event and as a Cauchy horizon.
In the hyperextreme case there are no horizons and $r$ is always spacelike. In both cases, the spacetime has been extended through $r=0$ to an asymptotically flat region with negative values for $r$. (Note that, again, the diagrams are valid for $\theta\neq \pi/2$.) The diagram with $\theta= \pi/2$ would require drawing the ring singularity.}
\end{figure}
If we are in the regular RBH case then there is no need for an extension through $r=0$. We can have three possible qualitatively different causal structures for the BH spacetime which are represented in the Penrose diagrams of figure \ref{Pbiggera3} (for the case with two null horizons) and of figure \ref{PHE3} (for the \textit{extreme} case and the \textit{hyperextreme} case).
\begin{figure}[htp]
\includegraphics[scale=.7]{Penrosebiggera3.pdf}
\caption{\label{Pbiggera3} Penrose diagram for a regular rotating black hole with two null horizons. In this case an extension through $r=0$ is not required. The grey regions are the regions where the coordinate $r$ is timelike. We have depicted a light-like geodesic (dashed blue line) that, starting from the asymptotically flat region I, enters region II by traversing the \emph{event horizon} $r_+$. Then it reaches region III' by traversing the null horizon $r_-$. The value of $r$ first decreases along the geodesic until it reaches $r=0$, after which it increases again. The geodesic then meets another null horizon $r_-$, enters region II', traverses another event horizon $r_+$ and enters the asymptotically flat region I'', where it travels towards future null infinity. (Note that, since there are no singularities, the diagram is valid for all $\theta$.) }
\end{figure}
\begin{figure}[htp]
\includegraphics[scale=.7]{PHE3.pdf}
\caption{\label{PHE3} Penrose diagrams for an extreme regular rotating black hole (to the left) and for a hyperextreme case (to the right). In both cases, an extension through $r=0$ is not required. In the extreme case there is only one horizon denoted by $r_\pm$ in which the coordinate $r$ is lightlike. $r$ is never timelike. $r_\pm$ acts as an event horizon.
In the hyperextreme case there are no horizons and $r$ is always spacelike. (Note that, again, since there are no singularities, the diagrams are valid for all $\theta$.)}
\end{figure}
The absence of an event horizon in the hyperextreme case is interesting, since this implies that an observer could receive information from the inner high curvature regions near $r=0$. In principle, this could be used to observationally test the different approaches to Quantum Gravity.
The problem is whether such RBHs are feasible. In the framework of General Relativity, it does not seem possible to obtain such rapidly rotating black holes ($a^2>m^2$) from a collapsing star, and any attempt to overspin an existing black hole in order to destroy its event horizon has failed, in agreement with the weak cosmic censorship conjecture. However, for regular RBHs it has been suggested that it could be possible to destroy the event horizon \cite{Li&Bambi}.
A warning is relevant here: a RBH solution should be stable in the region outside the event horizon and also inside. In the previous considerations (and figures) we have not taken into account the effects that instabilities could have on the global structure of the spacetime. Indeed, a non-trivial problem for RBHs is the stability of their inner horizon. The first works on the instability of inner horizons come from the study of the classical charged Reissner-Nordstr\"{o}m black hole, which suffers the so-called \textit{mass-inflation instability} \cite{P&I}. Studies of the instability of Kerr's horizons were developed in \cite{B&W}\cite{P&I2}. The consideration of regular black holes coming from different approaches to quantum gravity does not seem to alleviate the problem since, first, even in the non-rotating case they seem to require the existence of a (usually unstable) inner horizon and, second, even the backscattered flux of Hawking radiation coming from the black hole itself could be enough to destabilize its inner horizon \cite{TorresIns}. Other studies on the stability of regular black holes can be found in \cite{Cetal1}\cite{Cetal2}\cite{Cetal3}. A fine-tuned, stable regular RBH can even be found in \cite{FLMV}.
\section{Causality}\label{secCaus}
In general, it seems reasonable to require a time orientable spacetime to be free of closed causal curves. The existence of such curves would seem to lead to logical paradoxes: one could travel following these curves and arrive back before one's departure, so that one could prevent oneself from setting out in the first place.
A spacetime free of closed causal curves is said to be \textit{causal} \cite{H&E}. If, in addition, no closed causal curve appears even under any small perturbation of the metric, the spacetime is called \textit{stably causal}.
It is well-known that the usual analytical extension of the Kerr metric is non-causal. Since the Kerr metric is a particular case of the metric (\ref{gIKerr}), it is natural to ask whether the maximal extensions of regular RBHs should also be non-causal.
Following the lines in \cite{Maeda}, in order to examine this issue we will use proposition 6.4.9 in \cite{H&E}, which states that if a time function $f$ exists in the spacetime such that its normal $\mathbf{n}\equiv\nabla_\mu f \, dx^\mu$ is timelike, then the spacetime is stably causal. ($f$ can be thought of as \emph{the} time, in the sense that it increases along every future-directed causal curve).
Let us choose the time coordinate $\tilde{t}$ in Kerr-Schild coordinates as our time function $f$. The timelike character of $\mathbf{n}$ can be checked as follows:
\begin{equation}\label{n2}
\mathbf{n}^2=g^{\mu\nu} \nabla_\mu \tilde{t} \nabla_\nu \tilde{t}=g^{\tilde{t}\tilde{t}}=-1-\frac{2 \mathcal M(r) r^3}{r^4+a^2 z^2}.
\end{equation}
Since we would like this to be negative, it trivially follows
\begin{prop} \cite{Maeda}
If $r \mathcal M(r)\geq 0$ for all $r$, then the model of RBH with metric (\ref{GGg}) [or (\ref{gIKerr})] will be stably causal.
\end{prop}
Note that for a regular RBH (unextended through $r=0$), it just suffices to guarantee a non-negative mass function for the spacetime to be stably causal.
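The sign condition in (\ref{n2}) is easy to probe numerically. The sketch below (with an illustrative Hayward-type mass function, an assumption not made in the text) samples $\mathbf{n}^2=g^{\tilde{t}\tilde{t}}$ over a patch of the $(r,z)$ plane and confirms that it stays negative, as guaranteed whenever $r\,\mathcal M(r)\geq 0$:

```python
import numpy as np

def n_squared(r, z, a=0.9, m=1.0, l=0.3):
    """g^{tt} in Kerr-Schild coordinates for the Gurses-Gursey metric,
    with an illustrative Hayward-type mass function M(r)=m r^3/(r^3+l^3)."""
    M = m * r**3 / (r**3 + l**3)
    return -1.0 - 2.0 * M * r**3 / (r**4 + a**2 * z**2)

# Sample the (r, z) plane for r > 0 (the unextended regular RBH):
# since r M(r) >= 0 here, n^2 <= -1 < 0 everywhere, so the time
# function is valid and the spacetime is stably causal.
rs = np.linspace(1e-3, 5.0, 200)
zs = np.linspace(-2.0, 2.0, 101)
R, Z = np.meshgrid(rs, zs)
print(n_squared(R, Z).max())   # strictly negative
```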
\section{Thermodynamics}
The consideration of the thermodynamics of black holes started in the 1970's with a series of articles with fundamental contributions by Bekenstein and Hawking \cite{BCH}\cite{beke}. In 1975, Hawking proved that quantum mechanical effects cause Schwarzschild black holes to create and emit particles as if they were black bodies with a temperature proportional to their surface gravity. Since then, the thermodynamics of many different black holes coming from General Relativity and from alternative theories has been analyzed.
Here we would like to treat the thermodynamics of general regular RBH described by the line element (\ref{gIKerr}) at an introductory level.
In a RBH there is a family of \textit{stationary observers}, i.e., those observers moving with constant angular velocity at fixed $r$ and $\theta$ without perceiving any time variation of the gravitational field. It is easy to check that their angular velocity is
\[
\Omega=\frac{d\phi}{dt}=-\frac{g_{t\phi}}{g_{\phi\phi}}=\frac{a (a^2+r^2-\Delta)}{(r^2+a^2)^2-\Delta a^2 \sin^2\theta}.
\]
The four-velocity of these observers is proportional to the Killing vector
\[
\vec \xi\equiv \vec t+\Omega \vec\phi,
\]
constructed with the Killing vectors $\vec t=\partial_t$ and $\vec\phi=\partial_\phi$ and the constant ($r$ and $\theta$ fixed for the stationary observers) angular velocity $\Omega$.
On the event horizon ($r_+$) the angular velocity is just
\[
\Omega(r_+)=\Omega_+=\frac{a}{r_+^2+a^2}
\]
and it can be checked that the Killing vector on the horizon $\vec \xi\rfloor_{r_+}$ is light-like ($\vec \xi\cdot\vec \xi\rfloor_{r_+}=0$). In this way, this Killing vector is tangent to the null geodesic generators of the event horizon.
The surface gravity $\kappa$ on the horizon can be found using \cite{Wald}
\[
\kappa^2=-\frac{1}{2} \nabla_\mu \xi_\nu\, \nabla^\mu \xi^\nu.
\]
In our regular RBH it is just
\[
\kappa=\frac{\Delta'(r_+)}{2 (r_+^2+a^2)},
\]
where the prime in $\Delta'$ stands for the derivative with respect to $r$. Note that the surface gravity is constant on the event horizon, which links it to the temperature of the RBH. In fact, it is usually assumed that the temperature is just $T_+=\kappa/2\pi$ \cite{Wald} and, thus,
\[
T_+=\frac{\Delta'(r_+)}{4 \pi (r_+^2+a^2)}.
\]
In this way, whenever $\Delta'(r_+)\neq 0$ the black hole will have a non-zero temperature and will emit Hawking radiation.
Note also that a differentiable $\Delta$ requires, in the extremal case, that $\Delta'(r_\pm)= 0$. In this way, in case it exists, the temperature of an extremal RBH would be zero and it would not emit Hawking radiation.
In order to check the correctness of the result, one can compute the temperature of the particular case $\mathcal M(r)=m=$constant (i.e., the well-known Kerr's RBH) obtaining the expected result
\[
T^{Kerr}_+=\frac{r_+^2-a^2}{4\pi r_+ (r_+^2+a^2)}.
\]
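This consistency check can also be done numerically. The sketch below (for an arbitrary user-supplied mass function; the defaults are illustrative assumptions) finds the outer horizon, evaluates $T_+=\Delta'(r_+)/4\pi(r_+^2+a^2)$, and recovers the closed-form Kerr result when $\mathcal M(r)=m=$constant:

```python
import math

def outer_horizon(Delta, rmax=10.0, n=20000):
    """Outermost root of Delta (Delta > 0 at large r): downward scan
    for a sign change, then bisection. Raises if no horizon exists."""
    step = rmax / n
    r = rmax
    while r > step:
        if Delta(r - step) < 0 <= Delta(r):
            lo, hi = r - step, r
            for _ in range(80):
                mid = 0.5 * (lo + hi)
                if Delta(mid) < 0:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)
        r -= step
    raise ValueError("no null horizon: hyperextreme case")

def T_plus(mass, a, h=1e-6):
    """Hawking temperature T_+ = Delta'(r_+) / (4 pi (r_+^2 + a^2))
    for a mass function mass(r); Delta' by central difference."""
    Delta = lambda r: r**2 - 2.0 * mass(r) * r + a**2
    rp = outer_horizon(Delta)
    dDelta = (Delta(rp + h) - Delta(rp - h)) / (2.0 * h)
    return dDelta / (4.0 * math.pi * (rp**2 + a**2)), rp

# Kerr check: constant mass function reproduces the closed-form T_+^{Kerr}.
T_num, rp = T_plus(lambda r: 1.0, a=0.6)
T_kerr = (rp**2 - 0.6**2) / (4.0 * math.pi * rp * (rp**2 + 0.6**2))
print(T_num, T_kerr)   # the two values agree
```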
Further discussions of the thermodynamic properties and Hawking radiation for particular regular RBHs can be found in \cite{A&G2}\cite{HSK}\cite{R&T}\cite{take}.
\section{Obtaining Regular Rotating Black Hole Models}\label{secObt}
Different articles dealing with regular rotating black holes propose different forms for the function $\mathcal M(r)$. Its exact expression depends on the procedure used to obtain the RBH.
In many cases the authors just propose heuristic forms for $\mathcal M$.
The idea behind this heuristic approach is to try to ascertain the main characteristics that a regular RBH should have. Thus, for instance, the possible differences between the event horizons in the proposed models and the event horizon in Kerr's solution can be analyzed and maybe observationally tested (see section \ref{secPheno}).
Of course, for a regular RBH the specific mass function $\mathcal M$ is chosen to avoid the existence of singularities. It is usually also demanded that the spacetime should be asymptotically flat. Other goals may include the (approximated) fulfillment of energy conditions beyond the event horizon, the stability of the model \cite{FLMV} or a good causal behaviour of the model. (See, for instance, \cite{A-A}\cite{B&M}\cite{LGS}\cite{Maeda}\cite{MFL}\cite{eye}). Even $d$-dimensional ($d>4$) regular RBH have been studied heuristically. (See, for instance, \cite{A&G}\cite{Amir}).
In other cases a physical approach provides a specific $\mathcal M(r)$.
Let us just mention a few of them. Some authors, inspired by the work of Bardeen \cite{Bardeen}, have taken the path of nonlinear electrodynamics, which provides the necessary modifications in the energy-momentum tensor in order to avoid singularities in the RBH \cite{D&G}\cite{Ghosh}\cite{Tosh}. Yet, another way of addressing the problem of singularities is to take into account that quantum gravity effects should play an important role in the core of black holes, so that it would seem convenient to directly derive the black hole behaviour from an approach to quantum gravity. In this way, regular RBHs deduced in the Quantum Einstein Gravity approach can be found in \cite{R&T}\cite{TorresExt}, in the framework of Conformal Gravity in \cite{BMR}, in the framework of Shape Dynamics in \cite{G&H}, inspired by Supergravity in \cite{Buri}, by Loop Quantum Gravity in \cite{C&M} and by non-commutative gravity in \cite{S&S}.
In the case of non-heuristic models, theoretically one \emph{obtains} a specific expression for the mass function and then one has to check for the avoidance of singularities and for the rest of desirable properties cited above.
The study of their event horizon is particularly important here since it may be observationally tested in the future (see section \ref{secPheno}), which could, for instance, help select among the different candidates for a Quantum Gravity Theory.
\subsection{Generalized Newman-Janis Algorithms}
In 1965, Newman and Janis \cite{N&J} discovered that it was possible to obtain Kerr's solution by applying an algorithm to a spherically symmetric and static seed metric: Schwarzschild's solution. A great step towards understanding the algorithm, its possibilities, generalizations and limitations was carried out in \cite{Drake&Szek}. The generalized algorithm allows one to take any static spherically symmetric seed metric and obtain a rotating axially symmetric offspring from it\footnote{Be aware that the \textit{offspring} has different geometrical properties and also different physical properties. For example, the seed metric can be a perfect fluid, but the offspring will never be another perfect fluid \cite{Drake&Szek}.}. The application to regular RBH followed \cite{B&M}: One starts with a regular static and spherically symmetric black hole from a specific framework. Then one applies the generalized N-J algorithm to try to construct a regular RBH.
The generalized Newman-Janis algorithm is a five-step procedure \cite{Drake&Szek}:
\begin{enumerate}
\item Take a static spherically symmetric line element and write it in advanced null coordinates.
\item Express the contravariant form of the metric in terms of a null tetrad $Z^\mu_a$.
\item Extend the coordinates $x^\rho$ to a new set of complex coordinates
\[
x^\rho \rightarrow \tilde{x}^\rho=x^\rho+i y^\rho(x^\sigma)
\]
and let the null tetrad vectors $Z^\mu_a$ undergo a transformation
\[
Z^\mu_a \rightarrow \tilde{Z}^\mu_a(\tilde{x}^\rho, \bar{\tilde{x}}^\rho).
\]
Require that the transformation recovers the old tetrad and metric when $\tilde{x}^\rho=\bar{\tilde{x}}^\rho$.
\item Obtain a new metric by making a complex coordinate transformation
\[
\tilde{x}^\rho=x^\rho+i \gamma^\rho(x^\sigma)
\]
\item Apply a coordinate transformation $u=t+\mathcal F(r)$, $\phi=\varphi+\mathcal H(r)$ to transform the metric to Boyer-Lindquist-type coordinates.
\end{enumerate}
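Steps 1 and 2 can be made concrete for a seed metric $ds^2=-f\,du^2+2\,du\,dr+r^2d\Omega^2$ in advanced null coordinates. The sketch below uses a standard tetrad choice (the same one used in the specific steps further below) and verifies symbolically that the tetrad sum reproduces the inverse seed metric:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
f = sp.Function('f', real=True)(r)

# Seed metric in advanced null coordinates (u, r, theta, varphi):
# ds^2 = -f du^2 + 2 du dr + r^2 (dtheta^2 + sin^2(theta) dvarphi^2)
g = sp.Matrix([[-f, 1, 0,     0],
               [ 1, 0, 0,     0],
               [ 0, 0, r**2,  0],
               [ 0, 0, 0,     r**2 * sp.sin(th)**2]])

# Null tetrad (components ordered as u, r, theta, varphi)
l = sp.Matrix([0, -1, 0, 0])
n = sp.Matrix([1, f / 2, 0, 0])
m = sp.Matrix([0, 0, 1, sp.I / sp.sin(th)]) / (sp.sqrt(2) * r)
mbar = m.conjugate()

# g^{mu nu} = -l^mu n^nu - l^nu n^mu + m^mu mbar^nu + mbar^mu m^nu
ginv = -l * n.T - n * l.T + m * mbar.T + mbar * m.T
print(sp.simplify(ginv - g.inv()))   # the zero matrix
```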
While the seed spacetime can be a general spherically symmetric static spacetime, we will restrict ourselves here to a specific family of seed spacetimes that will allow us to connect with our family of regular RBH (\ref{gIKerr}).
Let us say that one has found a line element for a static regular spherically symmetric black hole (in a given framework or just in a heuristic manner). Assume that this line element can be written in coordinates $\{t,r,\theta,\varphi\}$ as a member of the family of static regular spherically symmetric black holes with metric:
\begin{equation}\label{seed}
ds^2=-f(r) dt^2+f^{-1}(r) dr^2+r^2 d\Omega^2,
\end{equation}
where $d\Omega^2=d\theta^2+\sin^2\theta d\varphi^2$.
The function $f(r)$ can be rewritten as
\[
f(r)=1- 2 \frac{ M (r)}{r},
\]
by using the \textit{mass function} $M (r)$ defined for general spherically symmetric spacetimes \cite{M&S}.
Let us now see that \emph{the generalized N-J algorithm provides us with a means of obtaining the corresponding rotating black hole line element (\ref{gIKerr}) from the static spherically symmetric seed metric (\ref{seed})}.
The specific five steps for this case would be:
\begin{enumerate}
\item
The coordinate change $du=dt+dr/f(r)$ allows us to rewrite the metric in advanced null coordinates\footnote{Note that in the literature on the NJ algorithm there is some confusion between the advanced and the retarded ($dw=dt-dr/f(r)$) null coordinates. The first is suitable for describing black holes, the second for white holes.}
\[
ds^2=-f(r) du^2+2 du dr + r^2 d\Omega^2.
\]
\item
The null tetrad $Z^\mu_a=(l^\mu,n^\mu,m^\mu,\bar m^\mu)$ satisfying
$l_\mu n^\mu=-m_\mu\bar m^\mu=-1$ and $l_\mu m^\mu=n_\mu m^\mu=0$
can be chosen as
\[
l^\mu=-\delta^\mu_r,\hspace{1 cm} n^\mu=\delta^\mu_u+\frac{f(r)}{2}\, \delta^\mu_r,\hspace{1 cm}
m^\mu=\frac{1}{\sqrt{2} r} \left(\delta^\mu_\theta+\frac{i}{\sin\theta} \delta^\mu_\varphi\right)
\]
so that $g^{\mu\nu}=-l^\mu n^\nu-l^\nu n^\mu+m^\mu \bar m^\nu+m^\nu \bar m^\mu$. (Note that both $\vec l$ and $\vec n$ are future directed).
\item We perform the coordinate change
\[
r'=r-i\, a \cos\theta, \hspace{1 cm} u'=u-i\, a \cos\theta.
\]
and demand $r'$ and $u'$ to be real.
In this way the null tetrad transforms into ($Z'^\mu_a=Z^\nu_a \partial x^{\mu'}/\partial x^\nu$)
\[
l'^\mu=-\delta^\mu_r, \hspace{.5cm} n'^\mu=\delta^\mu_u+\frac{\bar f(r')}{2}\, \delta^\mu_r,\hspace{.5 cm}
m'^\mu=\frac{1}{\sqrt{2} r'} \left(\delta^\mu_\theta+\frac{i}{\sin\theta} \delta^\mu_\varphi+i\, a \sin\theta (\delta^\mu_u+\delta^\mu_r)\right)
\]
The function $\bar f$ comes from the complexification of $f$ and, for the moment, we only know that it must be real and that it must reproduce the Kerr solution if the complexified mass function is just a constant. This is possible if, as usual \cite{B&M}\cite{Drake&Szek}\cite{N&J}, one uses the complexification
\[
\frac{1}{r}\rightarrow \frac{1}{2} \left(\frac{1}{r'}+\frac{1}{\bar r'} \right),
\]
that provides us with
\begin{equation}
\bar f=1- \frac{2 \bar{ M} (r,\theta) r}{\Sigma},\label{barf}
\end{equation}
where there is still some freedom in choosing the function $\bar{ M}(r, \theta)$.
\item
The new non-zero metric coefficients can be computed to be
\begin{eqnarray}\label{metcoef}
g_{uu}&=&-\bar f(r,\theta), \hspace{.5cm} g_{ur}=+1, \hspace{.5cm} g_{u\varphi}=-a \sin^2\theta [1-\bar f (r,\theta)]\\
g_{r\varphi}&=&-a \sin^2 \theta, \hspace{.5cm} g_{\theta\theta}=\Sigma, \hspace{.5cm} g_{\varphi\varphi}=\sin^2\theta [\Sigma +a^2 \sin^2\theta (2-\bar f)]\nonumber
\end{eqnarray}
\item In order to get the metric in Boyer-Lindquist type coordinates $\{t,r,\theta, \phi \}$ we perform the coordinate change
$u=t+\int F(r) dr$, $\varphi=\phi+\int H(r) dr$,
where
\begin{equation}\label{transBL}
F(r)=\frac{r^2+a^2}{\bar f(r,\theta) \Sigma +a^2\sin^2 \theta}\ \ \mbox{and}\ \
H(r)=\frac{a}{\bar f(r,\theta) \Sigma+a^2\sin^2 \theta}.
\end{equation}
Thus, $\bar f$ should be chosen in such a way that $F$ and $H$ are functions of $r$ alone.
Note that (\ref{transBL}) implies
\[
\Sigma \bar f(r,\theta)+a^2 \sin^2\theta=D(r),
\]
for some function $D(r)$ of $r$ alone. Substituting $\bar f$ using (\ref{barf}) one immediately sees that this step requires $\bar{M}=\bar{M}(r)$, i.e., $\bar{ M}$ cannot depend on $\theta$. Thus, we arrive at the natural choice $\bar{ M}(r)= M(r)$.
Indeed, in this case $F$ and $H$ are functions of $r$ alone, since
\[
F(r)=\frac{r^2+a^2}{r^2+a^2-2 \mathcal M(r) r} \ \ \mbox{and}\ \
H(r)=\frac{a}{r^2+a^2-2 \mathcal M(r) r}
\]
(see (\ref{uphi})). Therefore, in this way it is possible to write the solution in Boyer-Lindquist type coordinates as
\begin{equation}\label{metRBH}
ds^2=-\frac{\Delta}{\Sigma} (dt-a \sin^2\theta d\phi)^2+\frac{\Sigma}{\Delta} dr^2+
\Sigma d\theta^2+\frac{\sin^2\theta}{\Sigma} (a dt-(r^2+a^2) d\phi)^2.
\end{equation}
where the mass function $M(r)$ (appearing in $\Delta$) should just be relabelled as $\mathcal M(r)$ in order to coincide exactly with (\ref{gIKerr}).
\end{enumerate}
Since we started with a \emph{regular} spherically symmetric seed spacetime, its mass function necessarily satisfied $M(r)= O(r^n)$ with $n\geq 3$ around $r=0$. Thus, its offspring also has a mass function $\mathcal M (r)=M(r)= O(r^n)$ and, therefore, is also devoid of scalar polynomial curvature singularities (according to theorem \ref{teorema}).
The moral of the procedure is that if one wants a regular RBH in B-L-like coordinates, then one can simply take the mass function obtained in a spherically symmetric framework with metric (\ref{seed}) and use it as the mass function in the metric (\ref{gIKerr}).
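This inheritance of regularity can be checked symbolically. The sketch below (again assuming an illustrative Hayward-type seed mass function, not one from the text) verifies that $M(r)=O(r^3)$ near $r=0$, which is the condition the offspring needs according to theorem \ref{teorema}:

```python
import sympy as sp

r, m, l = sp.symbols('r m ell', positive=True)

# Hayward-type seed mass function (illustrative assumption)
M = m * r**3 / (r**3 + l**3)

# Leading behaviour near r = 0
print(sp.series(M, r, 0, 7))      # m*r**3/ell**3 - m*r**6/ell**6 + O(r**7)

# M(r) = O(r^n) with n >= 3  <=>  M/r^3 has a finite limit at r = 0
print(sp.limit(M / r**3, r, 0))   # m/ell**3
```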
Note that, if one considers step 5 (obtaining the line element in B-L-like coordinates) as non-compulsory, then one could analyze the complexification for two different cases \cite{B&M}:
\begin{itemize}
\item[1)] \textit{Type I} in which we impose $\mathcal M=\mathcal M(r)$. This is the case that we have just considered and the usual approach in the literature.
\item[2)] \textit{Type II} in which we allow $\mathcal M=\mathcal M(r,\theta)$. The new rotating metric can be written in Kerr form (with a null coordinate in the style of Eddington-Finkelstein coordinates), but the N-J algorithm cannot be completed since it is not possible to write the rotating metric in the final Boyer-Lindquist form. Specific models of this type have been proposed and explored in \cite{B&M}\cite{E&H}\cite{EH1}.
\end{itemize}
\section{Phenomenology}\label{secPheno}
Recent developments have greatly enhanced our ability to probe theoretical predictions concerning astrophysical objects. These include, on the one hand, the direct observation of gravitational waves emanating from astrophysical sources by LIGO \cite{ligo} and, on the other hand, the images of black holes taken by the Event Horizon Telescope (EHT) \cite{Aki}\cite{Aki2}. Moreover, a considerable enhancement is expected in the near future thanks to the LISA project \cite{lisa} and the new planned ground-based observatories \cite{observ}. In this way, the physics in strong gravitational fields near black holes is becoming an important topic not only in theoretical physics but also in astrophysical phenomenology. There is now a need for compiling the maximum amount of theoretical results about realistic rotating black holes. It is hoped that the phenomenological evidence will help us to choose among the different proposals for rotating black hole models and, as a consequence, among the alternative approaches to gravitational theories.
\subsection{Shadows}
A defining characteristic of a black hole is the event horizon. To a distant observer, the event horizon casts a relatively large ``shadow'' with, according to General Relativity, an apparent diameter of $\sim 10$ gravitational radii that is due to the bending of light by the black hole.
Of course, the specific theoretical characteristics of this shadow depend on the alternative gravitational theory chosen and the properties of the modeled rotating black hole. Currently, there are numerous studies about RBH shadows in different frameworks for alternative theories. Just to mention a few: heuristic approaches can be found in \cite{AAAG}\cite{ASG}\cite{Am&G}\cite{E&H}\cite{LGPV}\cite{L&B}\cite{S&S0}, results from different approaches to Quantum Gravity can be found in \cite{BCY}\cite{HGE}.
Assuming a metric of the G\"{u}rses-G\"{u}rsey type (\ref{gIKerr}), a general formula for the contour of the shadow can be easily obtained \cite{Tsuka}: The photons that would define the shadow of the regular rotating black hole are described by an action $S=S(x^\alpha)$. The momentum of the photons is
\[
p_{\mu}\equiv \frac{\partial S}{\partial x^\mu}
\]
and satisfies
\begin{equation}\label{photon}
g^{\alpha\beta} p_\alpha p_\beta=0.
\end{equation}
The stationarity and axisymmetry of the spacetime described by the metric (\ref{gIKerr}) imply two conserved quantities along the trajectory of the photon: the energy $E\equiv -p_t$ and the angular momentum $L\equiv p_\phi$. If there is a separable solution for $S$, by using the definition of the momentum, we can write it as
\[
S=-E t+L \phi+ S_r+ S_\theta,
\]
where we have introduced the new functions $S_r=S_r(r)$ and $S_\theta=S_\theta(\theta)$. In this way, (\ref{photon}) can now be written as
\begin{equation}\label{SrSth}
-\Delta \left(\frac{d S_r}{dr} \right)^2+ \frac{[(r^2+a^2) E-a L]^2}{\Delta}=\left(\frac{d S_\theta}{d\theta} \right)^2+ \frac{(L-a E \sin^2\theta)^2}{\sin^2\theta}.
\end{equation}
In this equation, the left-hand side depends only on $r$, while the right-hand side depends only on $\theta$. Therefore, each side must equal a separation constant, which we will denote by
\begin{equation}\label{KSth}
K=\left(\frac{d S_\theta}{d\theta} \right)^2+ \frac{(L-a E \sin^2\theta)^2}{\sin^2\theta}.
\end{equation}
From $dx^\mu/d\lambda=p^\mu=g^{\mu\nu} p_\nu$ and using (\ref{SrSth}) and (\ref{KSth}) one gets
\begin{equation}\label{evolR}
\Sigma \frac{dr}{d\lambda}=\pm \sqrt{R(r)},
\end{equation}
where $R(r)\equiv P(r)^2-\Delta [(L-a E)^2+\mathcal Q]$, $P(r)\equiv E (r^2+a^2)-a L$ and $\mathcal Q\equiv K-(L-a E)^2$ is the Carter constant.
Equation (\ref{evolR}) implies that there would be unstable circular orbits at a certain $r=r_0$ whenever $R(r_0)=R'(r_0)=0$ and $R''(r_0)>0$.
To exploit this, note that the definition of $R$ can also be rewritten as
\[
R/E^2=r^4+(a^2-\xi^2-\eta) r^2 + 2 \mathcal M (r) [(\xi-a)^2+\eta] r-a^2 \eta,
\]
where $\eta\equiv \mathcal Q/E^2$ and $\xi\equiv L/E$. The derivative of this expression with respect to $r$ provides
\[
R'/E^2=4 r^3+2 (a^2-\xi^2-\eta) r+ 2 \mathcal M (r) [(\xi-a)^2+\eta] f(r),
\]
where
\[
f(r)\equiv 1+\frac{r \mathcal M' }{\mathcal M}.
\]
Using the conditions for the orbit one gets the quadratic equation with respect to $\xi$:
\begin{align*}
a^2 &(r_0-f_0 \mathcal M_0)\xi^2-2 a \mathcal M_0 [(2-f_0) r_0^2-f_0 a^2] \xi-r_0^5+\\
&+(4-f_0) \mathcal M_0 r_0^4-2 a^2 r_0^3+2 a^2 \mathcal M_0 (2-f_0) r_0^2 -a^4 r_0-a^4 \mathcal M_0 f_0=0,
\end{align*}
where $\mathcal M_0\equiv\mathcal M(r_0)$ and $f_0\equiv f(r_0)$.
In order to describe the black hole shadow, we must choose the solution
\[
\xi_-\equiv\frac{4 \mathcal M_0 r_0^2-(r_0+f_0 \mathcal M_0)(r_0^2+a^2)}{a (r_0-f_0 \mathcal M_0)},
\]
which implies
\[
\eta=\eta_-\equiv \frac{r_0^3 [4 (2-f_0) a^2 \mathcal M_0-r_0 [r_0-(4-f_0)\mathcal M_0]^2]}{a^2 (r_0-f_0 \mathcal M_0)^2}.
\]
We consider an observer at a large distance from the RBH in the asymptotically flat region that observes the RBH with an inclination $\theta_o$. The contour of the shadow of the black hole can be expressed in terms of celestial coordinates $\alpha$ and $\beta$ \cite{Tsuka} as
\[
\alpha=\frac{\xi_-}{\sin\theta_o}\hspace{.5 cm}; \hspace{.5 cm} \beta= \pm \sqrt{\eta_-+(a-\xi_-)^2-\left(a \sin \theta_o-\frac{\xi_-}{\sin\theta_o} \right)^2}.
\]
In this way, we have finally arrived at the expressions that link the parameters describing the RBH with the observer's celestial coordinates. Some applications in particular models can be found in \cite{Tsuka}.
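The chain of formulas above, from the unstable-orbit conditions to $(\alpha,\beta)$, can be sketched numerically. The function below computes a shadow-contour point from $r_0$ for a user-supplied mass function; the expressions for $\xi_-$ and $\eta_-$ as implemented here reduce to the standard Kerr values for a constant mass function ($f_0=1$), which is used as a sanity check:

```python
import math

def shadow_point(r0, mass, dmass, a, theta_o):
    """Celestial coordinates (alpha, beta) of the shadow-contour point
    generated by the unstable photon orbit at r = r0.
    mass(r) is the mass function and dmass(r) its derivative."""
    M0 = mass(r0)
    f0 = 1.0 + r0 * dmass(r0) / M0
    xi = (4.0 * M0 * r0**2 - (r0 + f0 * M0) * (r0**2 + a**2)) \
         / (a * (r0 - f0 * M0))
    eta = r0**3 * (4.0 * (2.0 - f0) * a**2 * M0
                   - r0 * (r0 - (4.0 - f0) * M0)**2) \
          / (a**2 * (r0 - f0 * M0)**2)
    s = math.sin(theta_o)
    alpha = xi / s
    beta2 = eta + (a - xi)**2 - (a * s - xi / s)**2
    return (alpha, math.sqrt(beta2)) if beta2 >= 0 else None

# Kerr sanity check: constant mass function, a = 0.5, edge-on observer.
pt = shadow_point(3.0, lambda r: 1.0, lambda r: 0.0, 0.5, math.pi / 2)
print(pt)   # alpha = -1, beta = sqrt(27)
```

Sweeping $r_0$ over the range where $\eta_-\geq 0$ traces out the full shadow contour for a given mass function and inclination.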
The current observations of black hole shadows by the Event Horizon Telescope are so far consistent with the shadow predicted for Kerr's RBHs. However, this classical solution of Einstein's equations cannot provide a complete understanding of black holes since, for instance, it implies the existence of an inner singularity. An analysis of the regular RBHs studied so far shows that their shadows are usually also compatible with the observed shadows. In fact, the shadows of regular RBHs are indistinguishable from Kerr black hole shadows within the current observational uncertainties \cite{KKG}\cite{LGPV}\cite{L&B}. Future mm/sub-mm VLBI facilities will be able to greatly increase the current observational resolution. Even so, it will be challenging to test these metrics in the near future.
A primary reason for this is (as explained in section \ref{Horizons}) that only slightly more compact event horizons and smaller shadows are usually expected. In fact, if the deviation from General Relativity comes from Quantum Gravity effects and it is Planck's scale which provides us with the scale in which to expect the departure, then it would be practically impossible to observe these effects in the shadows of a massive rotating black hole. A better prospect for future observations would be expected if, on the contrary, the scale could be much bigger, as pointed out by some authors \cite{Dvali}\cite{Mathur}, or the resolution of singularities were not related to Quantum Gravity effects.
\section{CONCLUSIONS}\label{conclu}
Under suitable conditions, the collapse of an astrophysically significant body can generate a black hole. Since one expects the generator of the black hole to be a rotating body, the black hole will rotate. General Relativity provides us with solutions for rotating black holes and predicts their characteristics, which are compatible with current observations. Nevertheless, the existence of inner singularities in the classical solutions for RBHs and the fact that General Relativity is incompatible with Quantum Mechanics lead us to seek better singularity-free models for RBHs, often based on some approach to a Quantum Gravity Theory. There is some hope that a future increase of the current observational resolution could allow us to test these models against the classical ones.
Assuming that a manifold endowed with a corresponding metric is a fairly good approximation for describing a regular RBH, most of the models in the literature are of the Gürses-Gürsey type, whose general properties are rather well known. We have seen that the regularity condition for these models translates into a condition on their mass function. We showed that the requirement of regularity leads to the violation of the energy conditions. Remarkably, regular RBHs do not seem to require an extension through their equatorial plane. As a consequence, causality problems could be avoided simply if their mass function remains non-negative. With regard to the choice of a particular RBH within the Gürses-Gürsey family, it all boils down to the choice of its mass function. In the literature, the mass function has been either chosen heuristically or derived from some gravitational theory. In this regard, the generalized Newman-Janis algorithm provides a justification for simply borrowing the mass function from regular spherically symmetric static black hole models.
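As a concrete sketch of the regularity condition on the mass function, one can verify symbolically that a Hayward-type choice (used here purely as an illustrative example of a regular mass function, with $g$ the regulator length) behaves as $m(r)\sim r^3$ near the origin, so the effective density remains finite at the centre (a de Sitter core):

```python
import sympy as sp

r, M, g = sp.symbols('r M g', positive=True)

# Hayward-type mass function, a common illustrative choice
m = M * r**3 / (r**3 + g**3)

# Regularity requires m(r) ~ r^3 as r -> 0, so the effective
# density m'(r) / (4*pi*r^2) stays finite at the centre.
leading = sp.series(m, r, 0, 4).removeO()
print(leading)        # leading behaviour: M*r**3/g**3

density_at_0 = sp.limit(sp.diff(m, r) / (4 * sp.pi * r**2), r, 0)
print(density_at_0)   # finite central density: 3*M/(4*pi*g**3)
```

The finite limit corresponds to an effective de Sitter core of constant density $3M/(4\pi g^3)$; a mass function that vanished more slowly than $r^3$ would instead give a divergent central density.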
Several open questions concerning regular RBHs remain: the analysis of the possible existence of parallelly propagated curvature singularities, the treatment and resolution of their inner-horizon instabilities, their future evolution under the emission of Hawking radiation (including the possible formation of remnants) and a deeper generalization of the models beyond the Gürses-Gürsey type.
% arXiv:2208.12713, 2022-08-29, https://arxiv.org/abs/2208.12713