\section{GAN-based Abstraction}\label{sec:abstraction} \subsection{Model Abstraction} The underlying idea is the following: given a stochastic process $\{\eta_{t}\}_{t\ge 0}$ with transition probabilities $\mathbb{P}_{s_0}(\eta_{t}=s) = \mathbb{P}(\eta_{t}=s\mid \eta_{t_0}=s_0)$, we aim to find another stochastic process whose trajectories are faster to simulate but similar to the original ones. Time has to be discretized, meaning we fix an initial time $t_0$ and a time step $\Delta t$ that suits our problem. We define $\tilde{\eta}_i := \eta_{t_0+i\cdot\Delta t}$, $\forall i\in\mathbb{N}$. In addition, given a fixed time horizon $H$, we define time-bounded trajectories as $\tilde{\eta}_{[1,H]} = s_1 s_2\cdots s_H\in S^H\subseteq\mathbb{N}^{H\times n}$. Given a state $s_0$ and a set of parameters $\theta$, we can represent a trajectory of length $H$ as a realization of a random variable over the state space $S^H$. The probability distribution of such a random variable is given by the product of the transition probabilities at each time step: ${ \mathbb{P}_{s_0,\theta}(\tilde{\eta}_{[1,H]}=s_1s_2\cdots s_H)=\prod_{i=1}^H\mathbb{P}_{s_{i-1}, \theta}(\tilde{\eta}_{i}=s_{i})}$. The CTMC, $\{\eta_{t}\}_{t\ge 0}$, is now expressed as a time-homogeneous Discrete Time Markov Chain $\{\tilde{\eta}_i\}_i$. An additional approximation has to be made: the abstract model takes values in $S'\subseteq\mathbb{R}_{\geq 0}^n$, a continuous space in which the state space $S\subseteq\mathbb{N}^n$ is embedded. In constructing the approximate probability distribution over trajectories we can decide to restrict our attention to arbitrary aspects of the process, rather than trying to preserve the full behavior. A \emph{projection} $\pi$ from $S^H$ to an arbitrary space $U^H$ serves this purpose, for instance, to monitor the number of molecules belonging to a certain subset of chemical species, i.e., $U\subseteq S$. 
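The discretization step $\tilde{\eta}_i := \eta_{t_0+i\cdot\Delta t}$ and the projection $\pi$ can be sketched in a few lines of code. This is an illustrative sketch only: the representation of an SSA trajectory as arrays of jump times and post-jump states, and the function names, are our assumptions for illustration, not part of the paper's implementation.

```python
import numpy as np

def discretize(times, states, t0, dt, H):
    """Sample a piecewise-constant CTMC trajectory on the fixed grid
    t_i = t0 + i*dt (i = 1..H), yielding the DTMC states eta~_1..eta~_H.
    Assumes times is sorted and times[0] <= t0."""
    grid = t0 + dt * np.arange(1, H + 1)
    # index of the last jump occurring at or before each grid time
    idx = np.searchsorted(times, grid, side="right") - 1
    return states[idx]  # shape (H, n)

def project(grid_states, observed):
    """Projection pi onto a subset U of species, given as column indices."""
    return grid_states[:, observed]
```

For instance, a trajectory with jumps at $t = 0, 0.3, 0.7, 1.5$ sampled on the grid $\Delta t = 0.5$, $H = 3$ keeps the last state reached before each grid point.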
Note that $\pi(\tilde{\eta}_{[1,H]})$ is a random variable over $U^H$. Such flexibility can be extremely helpful in capturing the dynamics of systems in which some species are not observable. \paragraph{Abstraction accuracy.} Another important ingredient is a meaningful quantification of the error introduced by the abstraction procedure, i.e., the reconstruction accuracy. Such quantification must be based on a distance, $d$, between distributions. We choose the Wasserstein distance, together with the absolute and relative differences between the means and variances of the histograms. Given a distribution over initial states $s_0$ and a distribution over parameters $\theta$, we would like to measure the expected error at every time instant $t_i = t_0+i\cdot\Delta t$ with $i\in\{1,\dots,H\}$. Formally, we want to measure $\mathbb{E}_{s_0,\theta}\left[d\big(\pi(\eta_{[1,H]})\big |_i, \pi'(\eta'_{[1,H]})\big |_i\big)\right]$, where $\pi(\eta_{[1,H]})\big |_i$ denotes the $i$-th time component of the projected trajectory $\pi(\eta_{[1,H]})\in U^H$. To estimate such a quantity we use a well-known unbiased estimator, namely the average of the distances computed over a large sample set of initial settings. Computing the distance between the SSA and abstract distributions at each time step quantifies how small the expected error is and, more importantly, how it evolves in time. As a matter of fact, it shows whether the error tends to propagate and how much each species contributes to the abstraction error. In practice, we compute $H \cdot n$ distances between distributions over $\mathbb{N}$, as we want to know how each species contributes to the reconstruction error. \vspace{-0.2cm} \subsection{Dataset Generation} \paragraph{Training set.} Choose a set of $N_{train}$ initial settings and, for each setting, simulate $k_{train}$ SSA trajectories of length $H$. The training set is composed of $N_{train}\cdot k_{train}$ pairs of initial settings and trajectories, i.e. 
pairs $(\theta^i,s_0^i,\eta^{ij}_{[1,H]})$ for $i=1, \ldots,N_{train}$ and $j= 1,\ldots, k_{train}$. \paragraph{Test set.} Choose a set of $N_{test}$ initial settings and, for each setting, simulate a large number, $k_{test}\gg k_{train}$, of SSA trajectories of length $H$. The test set is composed of $N_{test}\cdot k_{test}$ pairs of initial settings and trajectories, i.e., pairs $(\theta^i,s_0^i,\eta^{ij}_{[1,H]})$ for $i=1, \ldots,N_{test}$ and $j= 1,\ldots, k_{test}$. \paragraph{Partial observability.} In case of partial observability, $U\subseteq S$, we fix an initial condition for the species in $U$ and simulate a pool of trajectories, each time sampling the initial values of the species in $S\smallsetminus U$. As a result, we learn an abstract distribution that marginalizes over the unobserved variables. \vspace{-0.2cm} \subsection{cWCGAN-GP architecture} The critic $C_{w_c}$ takes as input a batch of initial states, $s_0^1,\dots , s_0^b$, a batch of parameters, $\theta_1, \dots , \theta_b$, and a batch of subsequent trajectories, $\eta_{[1,H]}^1,\dots , \eta_{[1,H]}^b$. For each $i\in\{ 1,\dots , b\}$ the inputs, $\eta_{[1,H]}^i$, $s_0^i$ and $\theta_i$, are concatenated to form an input with dimension $b\times (H+1)\times (n+m)$. Formally, $C_{w_c}: S^{H+1}\times\Theta\rightarrow \mathbb{R}$. To enforce the Lipschitz property of $C_{w_c}$ we add a gradient penalty term over $\mathbb{P}_{\hat{x}}$. Samples of $\mathbb{P}_{\hat{x}}$ are generated by sampling uniformly along straight lines connecting points coming from a batch of real trajectories and points coming from a batch of generated trajectories. On the other hand, the generator $G_{w_g}$ takes as input a batch of initial states, $s_0^1,\dots , s_0^b$, a batch of parameters, $\theta_1, \dots , \theta_b$, and a batch of random noise, $z^1,\dots , z^b$, with dimension $k$, a user-defined hyper-parameter. 
For each $i\in\{ 1,\dots , b\}$ the two inputs are, once again, concatenated to form an input with dimension $b\times (n+m+k)$. The generator outputs a batch of generated trajectories $\hat{\eta}_{[1,H]}^1,\dots ,\hat{\eta}_{[1,H]}^b$. Formally, $G_{w_g}:S\times\Theta\times Z\rightarrow S^H$, such that $G_{w_g}(s_0,\theta,z) = \hat{\eta}_{[1,H]} = s_1\cdots s_H$. See the pseudocode for the algorithm in Appendix~\ref{sec:algorithm} of [XXX]. \vspace{-0.2cm} \subsection{Model Training} The cWCGAN-GP-based model abstraction framework consists of training two different CNNs. The loss function, introduced in Eq.~\eqref{eq:wassdist_gp}, is a parametric function depending on both the generator weights $w_g$ and the critic weights $w_c$. When training the critic, we keep the generator weights fixed at $\overline{w}_g$ and maximize $\mathcal{L}(w_c, \overline{w}_g)$ w.r.t. $w_c$. Formally, we solve the problem \begin{equation*} \small w_c^* = \underset{w_c}{\mbox{argmax}}\Big\{\mathcal{L}(w_c, \overline{w}_g)\Big\}. \end{equation*} On the other hand, when training the generator, we keep the critic weights fixed at $\overline{w}_c$ and minimize $\mathcal{L}(\overline{w}_c, w_g)$ w.r.t. $w_g$. Formally, we solve the problem \begin{equation*} \small w_g^* = \underset{w_g}{\mbox{argmin}} \Big\{\mathcal{L}(\overline{w}_c,w_g)\Big\} = \underset{w_g}{\mbox{argmin}} \Big\{ -\mathbb{E}_{z,(s_0,\theta)}\Big[C_{\overline{w}_c}\big(G_{w_g}(z,s_0,\theta),s_0,\theta\big)\Big]\Big\}. \end{equation*} As mentioned in Section \ref{sec:background}, the loss function derives from the Wasserstein distance between the real and generated distributions; see \cite{arjovsky2017wasserstein,gulrajani2017improved} for the mathematical details. Intuitively, the generator produces a batch of samples, and these, along with real examples from the dataset, are provided to the critic, which is then updated to get better at estimating the distance between the real and the abstract distribution. 
The generator is then updated based on the scores that the generated samples obtain from the critic. An important additional advantage is that WGANs have a loss function that correlates with the quality of the generated examples. Training the cWCGAN-GP has a cost. Nonetheless, once it has been trained, its evaluation is extremely fast. Details about training and evaluation costs are discussed in Section~\ref{sec:experiments}. \paragraph{Abstract Model Simulation.} Once the training is over, we can discard the critic and focus only on the trained generator $G$. In order to generate an abstract trajectory starting from a state $s_0^*$ with parameters $\theta^*$, we just have to sample a value $z$ from the random noise variable $Z$ and evaluate the generator on the triple $(s_0^*, \theta^*, z)$. The output is a stochastic trajectory of length $H$: $G(s_0^*, \theta^*, z) = \hat{\eta}_{[1,H]}$. The stochasticity is provided by the random noise variable: de facto, the generator acts as a distribution transformer that maps a simple random variable into a complex distribution. In order to generate a pool of $p$ trajectories, we simply sample $p$ different values from the random noise variable: $z_1, \dots,z_p$. Therefore, the generation of a trajectory has a fixed computational cost. \section{Case Studies}\label{sec:casestudies} \begin{itemize} \item \textbf{SIR Model (Absorbing state).} The SIR epidemiological model describes a population divided into three mutually exclusive groups: susceptible (S), infected (I) and recovered (R). The system state at time $t$ is $\eta_t= (S_t, I_t, R_t)$. The possible reactions, given by the interactions of individuals (playing the role of the molecules of a CRN), are the following: \begin{itemize} \item $R_1: S+I\xrightarrow{\theta_1\cdot I_tS_t/(S_t+I_t+R_t)} 2I$ (infection), \item $R_2: I\xrightarrow{\theta_2\cdot I_t} R$ (recovery). 
\end{itemize} The model describes the spread, in a population, of an infectious disease that grants immunity to those who recover from it. As the SIR model is well-known and stable, we use it as a testing ground for our GAN-based abstraction procedure. The ranges for the initial state are $S_0, I_0, R_0 \in [30, 200]$. An important aspect of the SIR model is the presence of absorbing states. In fact, when $I = 0$ or when $R=N$, no more reactions can take place. \item \textbf{Ergodic SIRS Model.} Small perturbations of the SIR model force the system to be ergodic. We call this revised version the ergodic SIRS (eSIRS) model. This model has no absorbing state. In particular, we assume that the population is not perfectly isolated, meaning there is always a chance of getting infected by some external individuals. In addition, we also assume that immunity is only temporary. The possible reactions are now the following: \begin{itemize} \item $R_1: S+I\xrightarrow{\theta_1\cdot I_tS_t/(S_t+I_t+R_t)+\theta_2\cdot S_t} 2I$ (infection), \item $R_2: I\xrightarrow{\theta_3\cdot I_t} R$ (recovery), \item $R_3: R\xrightarrow{\theta_4\cdot R_t} S$ (immunity loss). \end{itemize} Both epidemiological models are essentially unimodal. The ranges for the initial state are $S_0, I_0, R_0 \in [0, N]$ such that $S_0 + I_0 + R_0 = N$. In our experiments $N= 100$. The range for parameter $\theta_1$ is $[0.5,5]$. \item \textbf{Genetic Toggle Switch Model (Bistability).} The toggle switch is a well-known bistable biological circuit. Briefly, this system consists of two genes, $G_1$ and $G_2$, that mutually repress each other. The system displays two stable equilibrium states, in each of which one of the two gene products represses the expression of the other gene. 
The possible reactions are: \begin{itemize} \item $prod_i: G_i^{on}\xrightarrow{kp_i\cdot G_i^{on}} G_i^{on}+P_i$, for $i=1,2$; \item $bind_i: 2P_j+G_i^{on}\xrightarrow{kb_i\cdot G_i^{on}\cdot P_j\cdot(P_j-1)} G_i^{off}$, for $i=1,2$ and $j=2,1$ resp.; \item $unbind_i: G_i^{off}\xrightarrow{ku_i\cdot G_i^{off}} G_i^{on}+2P_j$, for $i=1,2$ and $j=2,1$ resp.; \item $deg_i: P_i\xrightarrow{kd_i\cdot P_i} \emptyset$, for $i = 1,2$. \end{itemize} The ranges for the initial state are $G^{on}_{1,0}, G^{on}_{2,0}\in\{0,1\}$ and $P_{1,0}, P_{2,0} \in [5, 20]$. \item \textbf{Oscillator Model.} The oscillator circuit consists of three species A, B and C and three reactions, in which A converts B to itself, B converts C to itself, and C converts A to itself. The three species regulate each other in a cyclic manner. This circuit exhibits oscillations in the concentrations of the three species. \begin{itemize} \item $R_1: A+B\xrightarrow{{\tiny \theta\cdot\frac{A\cdot B}{A+B+C}}} 2A$ (B transformation), \item $R_2: B+C\xrightarrow{{\tiny \theta\cdot\frac{B\cdot C}{A+B+C}}} 2B$ (C transformation), \item $R_3: C+A\xrightarrow{{\tiny \theta\cdot\frac{C\cdot A}{A+B+C}}} 2C$ (A transformation). \end{itemize} The ranges for the initial state are $A_0, B_0, C_0 \in [20, 100]$. \item \textbf{MAPK Model.} The mitogen-activated protein kinase (MAPK) cascade is a type of signal transduction based on protein phosphorylation whose function is the amplification of a signal. The sensitivity increases with the number of cascade levels, so that a small change in the stimulus results in a large change in the response. A negative feedback from MAPK\_PP to the MKKK activation reaction yields ultra-sensitivity to the input stimulus, governed by parameter $V_1$. 
\begin{itemize} \item $\it R_1: MKKK \xrightarrow{{\tiny V_1\cdot MKKK/( (1+(MAPK\_PP/K_l)^n)\cdot (K_1+MKKK) )}} MKKK\_P$, \item $\it R_2: MKKK\_P \xrightarrow{{\tiny V_2\cdot MKKK\_P/(K_2+MKKK\_P)}} MKKK $, \item $\it R_3: MKK \xrightarrow{{\tiny k_3\cdot MKKK\_P\cdot MKK/(K_3+MKK)}} MKK\_P $, \item $\it R_4: MKK\_P \xrightarrow{{\tiny k_4\cdot MKKK\_P\cdot MKK\_P/(K_4+MKK\_P) }} MKK\_PP$, \item $\it R_5: MKK\_PP \xrightarrow{{\tiny V_5\cdot MKK\_PP/(K_5+MKK\_PP) }} MKK\_P $, \item $\it R_6: MKK\_P \xrightarrow{{\tiny V_6\cdot MKK\_P/(K_6+MKK\_P)}} MKK$, \item $\it R_7: MAPK \xrightarrow{{\tiny k_7\cdot MKK\_PP\cdot MAPK/(K_7+MAPK)}} MAPK\_P$, \item $\it R_8: MAPK\_P \xrightarrow{{\tiny k_8\cdot MKK\_PP\cdot MAPK\_P/(K_8+MAPK\_P)}} MAPK\_PP$, \item $\it R_9: MAPK\_PP \xrightarrow{{\tiny V_9\cdot MAPK\_PP/(K_9+MAPK\_PP)}} MAPK\_P$, \item $\it R_{10}: MAPK\_P \xrightarrow{{\tiny V_{10}\cdot MAPK\_P/(K_{10}+MAPK\_P)}} MAPK$. \end{itemize} \end{itemize} The ranges for the initial state are: $MKKK_0, MKKK_0\_P \in [0, 100]$ such that $MKKK_0 + MKKK_0\_P = 100$; $MKK_0, MKK_0\_P, MKK_0\_PP \in [0, 300]$ such that $MKK_0 + MKK_0\_P+ MKK_0\_PP = 300$; $MAPK_0, MAPK_0\_P, MAPK_0\_PP \in [0, 300]$ such that $MAPK_0 + MAPK_0\_P+ MAPK_0\_PP = 300$. \section{Additional Plots} \label{sec:plots} Here we present the remaining plots showing a qualitative evaluation of the performance of the abstract models. For each model, we present a small batch of trajectories, both real and abstract (plots in the left column). From the plots of such trajectories we can assess whether the abstract trajectories are similar to the real ones and whether they capture the most important macroscopic behaviors. We also show the histograms of the empirical distributions at time $t_H$ for each species (plots in the right column) to quantify the behavior over all the $2$K trajectories present in the test set. In particular, Fig. \ref{fig:sir_trajectories} shows the results for the SIR case study, Fig. 
\ref{fig:esir_trajectories_one_par} shows the results for the e-SIRS model with one varying parameter, Fig. \ref{fig:ts_trajectories} shows the results for the Toggle Switch model and, finally, Fig. \ref{fig:clock_trajectories} shows the results for the Oscillator model. \begin{figure}[ht] \centering \includegraphics[scale=0.25]{imgs/SIR/SIR_Trajectories5.png} \includegraphics[scale=0.25]{imgs/SIR/SIR_hist_comparison_last_timestep_5.png} \caption{SIR model: \textbf{(left)} comparison of trajectories generated with a cWCGAN-GP (orange) and the trajectories generated with the SSA algorithm (blue); \textbf{(right)} comparison of the real and generated histogram at the last timestep. Performance on a randomly chosen test point represented by three trajectories: the top one (species S), the central one (species I) and the bottom one (species R).\vspace{-0.5cm}} \label{fig:sir_trajectories_extra} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.25]{imgs/eSIRS/eSIRS_Rescaled_Trajectories2_tight.png} \includegraphics[scale=0.25]{imgs/eSIRS/eSIRS_rescaled_hist_comparison_-1th_timestep_2_tight.png} \includegraphics[scale=0.25]{imgs/eSIRS/eSIRS_Rescaled_Trajectories16_tight.png} \includegraphics[scale=0.25]{imgs/eSIRS/eSIRS_rescaled_hist_comparison_-1th_timestep_16_tight.png} \caption{e-SIRS model: \textbf{(left)} comparison of trajectories generated with a cWCGAN-GP (orange) and the trajectories generated with the SSA algorithm (blue); \textbf{(right)} comparison of the real and generated histogram at the last timestep. Performance for two randomly chosen test points. 
Each point is represented by a pair of trajectories: the top one (species S) and the bottom one (species I).\vspace{-0.5cm}} \label{fig:esir_trajectories} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.25]{imgs/TS/ToggleSwitch_Rescaled_Trajectories18_tight.png} \includegraphics[scale = 0.25]{imgs/TS/ToggleSwitch_rescaled_hist_comparison_-1th_timestep_18_tight.png} \caption{Toggle Switch model: \textbf{(left)} comparison of trajectories generated with a cWCGAN-GP (orange) and the trajectories generated with the SSA algorithm (blue); \textbf{(right)} comparison of the real and generated histogram at the last timestep. Performance for a randomly chosen test point represented by a pair of trajectories: the top one (species P1) and the bottom one (species P2).\vspace{-0.5cm}} \label{fig:ts_trajectories_extra} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.25]{imgs/Oscillator/Oscillator_Rescaled_Trajectories4.png} \includegraphics[scale=0.25]{imgs/Oscillator/Oscillator_rescaled_hist_comparison_-1th_timestep_4.png} \caption{Oscillator model: \textbf{(left)} comparison of trajectories generated with a cWCGAN-GP (orange) and the trajectories generated with the SSA algorithm (blue); \textbf{(right)} comparison of the real and generated histogram at the last timestep. Performance on a randomly chosen test point represented by three trajectories: the top one (species A), the central one (species B) and the bottom one (species C). 
\vspace{-0.5cm} }\label{fig:clock_trajectories_extra} \end{figure} \begin{figure}[ht] \centering \subfigure[eSIRS]{ \includegraphics[scale=0.23]{imgs/eSIRS/eSIRS_avg_mean_distance_1000epochs_32steps.png}} \subfigure[eSIRS-1P]{ \includegraphics[scale=0.23]{imgs/eSIRS_1P/Scaled_avg_mean_distance_1epochs_32steps.png}} \subfigure[SIR]{ \includegraphics[scale=0.23]{imgs/SIR/Scaled_avg_mean_distance_500epochs_16steps.png}} \subfigure[Oscillator]{ \includegraphics[scale=0.23]{imgs/Oscillator/Oscillator_avg_mean_distance_1000epochs_32steps.png}} \subfigure[Toggle Switch]{ \includegraphics[scale=0.23]{imgs/TS/ToggleSwitch_avg_mean_distance_2000epochs_32steps.png}} \subfigure[MAPK]{ \includegraphics[scale=0.23]{imgs/MAPK/MAPK_avg_mean_distance_1500epochs_32steps_scaling.png}} \caption{Plots of the average difference in the means over time for each model and each species. Errors are computed over the entire test set. Generated trajectories have been kept scaled to the interval $[-1,1]$ so that the scale of the system does not affect the scale of the error measure.}\label{fig:avg_means_errors} \subfigure[eSIRS]{ \includegraphics[scale=0.23]{imgs/eSIRS/eSIRS_avg_mean_relative_distance_1000epochs_32steps.png}} \subfigure[eSIRS-1P]{ \includegraphics[scale=0.23]{imgs/eSIRS_1P/eSIRS_1P_avg_mean_relative_distance_1epochs_32steps.png}} \subfigure[SIR]{ \includegraphics[scale=0.23]{imgs/SIR/SIR_avg_mean_relative_distance_500epochs_16steps.png}} \subfigure[Oscillator]{ \includegraphics[scale=0.23]{imgs/Oscillator/Oscillator_avg_mean_relative_distance_999epochs_32steps.png}} \subfigure[Toggle Switch]{ \includegraphics[scale=0.23]{imgs/TS/ToggleSwitch_avg_mean_relative_distance_2000epochs_32steps.png}} \subfigure[MAPK]{ \includegraphics[scale=0.23]{imgs/MAPK/MAPK_avg_mean_relative_distance_1500epochs_32steps_scaling.png}} \caption{Plots of the average relative difference in the means over time for each model and each species. Errors are computed over the entire test set. 
Generated trajectories have been scaled back to $\mathbb{N}$.}\label{fig:avg_means_rel_errors} \end{figure} \begin{figure}[ht] \centering \subfigure[eSIRS]{ \includegraphics[scale=0.23]{imgs/eSIRS/eSIRS_avg_var_distance_1000epochs_32steps.png}} \subfigure[eSIRS-1P]{ \includegraphics[scale=0.23]{imgs/eSIRS_1P/Scaled_avg_var_distance_1epochs_32steps.png}} \subfigure[SIR]{ \includegraphics[scale=0.23]{imgs/SIR/Scaled_avg_var_distance_500epochs_16steps.png}} \subfigure[Oscillator]{ \includegraphics[scale=0.23]{imgs/Oscillator/Oscillator_avg_var_distance_1000epochs_32steps.png}} \subfigure[Toggle Switch]{ \includegraphics[scale=0.23]{imgs/TS/ToggleSwitch_avg_var_distance_2000epochs_32steps.png}} \subfigure[MAPK]{ \includegraphics[scale=0.23]{imgs/MAPK/MAPK_avg_var_distance_1500epochs_32steps_scaling.png}} \caption{Plots of the average difference in the variances over time for each model and each species. Errors are computed over the entire test set. Generated trajectories have been kept scaled to the interval $[-1,1]$ so that the scale of the system does not affect the scale of the error measure.}\label{fig:avg_vars_errors} \subfigure[eSIRS]{ \includegraphics[scale=0.23]{imgs/eSIRS/eSIRS_avg_var_relative_distance_1000epochs_32steps.png}} \subfigure[eSIRS-1P]{ \includegraphics[scale=0.23]{imgs/eSIRS_1P/eSIRS_1P_avg_var_relative_distance_1epochs_32steps.png}} \subfigure[SIR]{ \includegraphics[scale=0.23]{imgs/SIR/SIR_avg_var_relative_distance_500epochs_16steps.png}} \subfigure[Oscillator]{ \includegraphics[scale=0.23]{imgs/Oscillator/Oscillator_avg_var_relative_distance_999epochs_32steps.png}} \subfigure[Toggle Switch]{ \includegraphics[scale=0.23]{imgs/TS/ToggleSwitch_avg_var_relative_distance_2000epochs_32steps.png}} \subfigure[MAPK]{ \includegraphics[scale=0.23]{imgs/MAPK/MAPK_avg_var_relative_distance_1500epochs_32steps_scaling.png}} \caption{Plots of the average relative difference in the variances over time for each model and each species. 
Errors are computed over the entire test set. Generated trajectories have been scaled back to $\mathbb{N}$.}\label{fig:avg_vars_rel_errors} \end{figure} \begin{figure}[ht] \centering \subfigure[Means abs. err.]{ \includegraphics[scale=0.12]{imgs/eSIRS/eSIRS_mean_distance.png}} \subfigure[Means rel. err.]{ \includegraphics[scale=0.12]{imgs/eSIRS/eSIRS_mean_relative_distance.png}} \subfigure[Variances abs. err.]{ \includegraphics[scale=0.12]{imgs/eSIRS/eSIRS_var_distance.png}} \subfigure[Variances rel. err.]{ \includegraphics[scale=0.12]{imgs/eSIRS/eSIRS_var_relative_distance.png}} \subfigure[Wass. dist.]{ \includegraphics[scale=0.12]{imgs/eSIRS/eSIRS_wass_distance.png}} \caption{Histogram distance landscapes for the two-dimensional \textbf{eSIRS} model.} \label{fig:eSIRS_distance_landscapes} \end{figure} \begin{figure}[ht] \centering \subfigure[Means abs. err.]{ \includegraphics[scale=0.12]{imgs/TS/ToggleSwitch_mean_distance.png}} \subfigure[Means rel. err.]{ \includegraphics[scale=0.12]{imgs/TS/ToggleSwitch_mean_relative_distance.png}} \subfigure[Variances abs. err.]{ \includegraphics[scale=0.12]{imgs/TS/ToggleSwitch_var_distance.png}} \subfigure[Variances rel. err.]{ \includegraphics[scale=0.12]{imgs/TS/ToggleSwitch_var_relative_distance.png}} \subfigure[Wass. dist.]{ \includegraphics[scale=0.12]{imgs/TS/ToggleSwitch_wass_distance.png}} \caption{Histogram distance landscapes for the two-dimensional \textbf{Toggle Switch} model.} \label{fig:TS_distance_landscapes} \end{figure} \begin{figure}[ht] \centering \subfigure[Wass. dist.]{ \includegraphics[scale = 0.24]{imgs/MAPK/MAPK_wass_distance.png}} \subfigure[Means abs. err.]{ \includegraphics[scale = 0.24]{imgs/MAPK/MAPK_mean_distance.png}} \subfigure[Means rel. err.]{ \includegraphics[scale = 0.24]{imgs/MAPK/MAPK_mean_relative_distance.png}} \subfigure[Variances abs. err.]{ \includegraphics[scale = 0.24]{imgs/MAPK/MAPK_var_distance.png}} \subfigure[Variances rel. 
err.]{ \includegraphics[scale = 0.24]{imgs/MAPK/MAPK_var_relative_distance.png}} \caption{Histogram distance landscapes for the \textbf{MAPK} model.} \label{fig:mapk_distance_landscapes} \end{figure} \begin{figure} \centering \includegraphics[scale=0.245]{imgs/eSIRS_1P/analysis_avg_wass_distance_1epochs_32steps.png} \includegraphics[scale=0.245]{imgs/eSIRS_1P/analysis_avg_wass_distance_1epochs_32steps_100in.png} \includegraphics[scale=0.245]{imgs/eSIRS_1P/analysis_avg_wass_distance_1epochs_32steps_100par.png} \caption{Analysis of the generalization capabilities of the abstract model on various test sets: 100 different pairs $(s_0, \theta)$ \textbf{(left)}, a fixed parameter and 100 different initial states \textbf{(middle)} and a fixed initial state with 100 different parameters \textbf{(right)}. For each test set we compute the mean and the standard deviation of the distribution of Wasserstein distances over such sets.} \label{fig:esirs_1p_analysis} \end{figure} \section{Satisfaction probability}\label{sec:satisf} \begin{figure}[ht] \centering \includegraphics[scale=0.17]{imgs/eSIRS/esirs_satisfability.png} \includegraphics[scale=0.17]{imgs/SIR/sir_sanity_check_absorption.png} \includegraphics[scale=0.17]{imgs/TS/ts_satisfability.png} \caption{\textbf{(eSIRS)} Given the property ``eventually the number of infected stays below a threshold of 25 individuals", we check for each test point (x axis) the percentage of SSA (orange) and abstract (blue) trajectories that satisfy such a property. \textbf{(SIR)} For abstract trajectories of the SIR model we check, for each test point, the percentage of valid trajectories, i.e., such that the state $I=0$ is absorbing. \textbf{(Toggle Switch)} Given the property ``eventually the level of protein P2 stays above a threshold of 50", we check for each test point (x axis) the percentage of SSA (orange) and abstract (blue) trajectories that satisfy such a property. 
\vspace{-0.25cm}} \label{fig:satisf} \end{figure} We seek a formal way to quantify whether the abstract model captures and preserves the emergent macroscopic behaviors of the original system. In order to do so, we can resort to formal languages, such as Signal Temporal Logic (STL) \cite{maler2004monitoring}. The first step is to formally express the property that we would like abstract trajectories to preserve. Then we can measure the satisfaction probability of such a property for both real and abstract trajectories and check whether it is similar over a large pool of initial settings. Examples are shown in Fig. \ref{fig:satisf}. For the e-SIRS model we consider the property ``eventually the number of infected remains below a threshold of 25 individuals". For abstract trajectories of the SIR model we check, for each test point, the percentage of valid trajectories, i.e., trajectories such that the state $I=0$ is absorbing. Finally, for the Toggle Switch model we check the property ``eventually the level of protein $P_2$ stays above a threshold of $50$", meaning we check for each test point the percentage of SSA and abstract trajectories that satisfy such a property. It can be written as $\Diamond_{[0,H]}\square (P_2 > 50)$. These comparisons produce a measurable estimate of how well the qualitative behavior is reconstructed. As future work, we intend to use such a measure as a query strategy for an active learning approach, so that the obtained abstract model is driven in the desired direction. 
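As an illustration of how such a satisfaction probability can be estimated on finite traces, the following sketch monitors the eventually-always property $\Diamond\square(I < 25)$ on discretized trajectories. The array-based trace representation and function names are our assumptions for illustration, not the monitoring tool used in the paper.

```python
import numpy as np

def eventually_always_below(trace, threshold=25):
    """Boolean monitor for F G (x < threshold) on a finite trace:
    satisfied iff some suffix of the trace stays strictly below threshold."""
    suffix_max = np.maximum.accumulate(trace[::-1])[::-1]  # max over trace[i:]
    return bool((suffix_max < threshold).any())

def satisfaction_probability(pool, threshold=25):
    """Fraction of trajectories in a pool that satisfy the property."""
    return float(np.mean([eventually_always_below(t, threshold) for t in pool]))
```

Comparing this fraction between the SSA pool and the generated pool at each test point yields bar plots like those in the figure above.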
\section{Statistical tests}\label{sec:stat_test} \begin{figure}[ht] \centering \includegraphics[scale=0.245]{imgs/SIR/SIR_pvalues_no_rescaled_Energy.png} \includegraphics[scale=0.245]{imgs/eSIRS/eSIRS_pvalues_no_rescaled_Energy.png} \includegraphics[scale=0.245]{imgs/TS/ToggleSwitch_pvalues_no_rescaled_Energy.png} \includegraphics[scale=0.245]{imgs/Oscillator/Oscillator_pvalues_no_rescaled_Energy.png} \includegraphics[scale=0.245]{imgs/MAPK/MAPK_pvalues_no_rescaled_Energy_1500epochs.png} \caption{Average over the initial settings of the p-values (with confidence interval) for each species, computed by the two-sample statistical test, w.r.t. the number of samples present in the empirical distributions.} \label{fig:stat_test} \end{figure} In this section we show the results of a two-sample statistical test over all the case studies. In particular, we use a statistical test based on the energy distance among distributions \cite{szekely2013energy}. For each initial setting and for each species we compute the distance statistic and the p-value between the empirical approximation of the SSA distribution and the empirical approximation of the abstract distribution over the trajectory space, i.e., an $H$-dimensional space. In Fig. \ref{fig:stat_test} we report the mean and the standard deviation of the p-values over the initial settings present in the test set. Clearly the p-value decreases as the number of samples used to approximate the distributions increases. Fig. \ref{fig:stat_test} shows how the p-values for each species vary according to the number of samples used. These results come as no surprise, as the abstract model was trained having only $10$ observations for each initial setting. It is interesting to observe how the Energy test is passed by a large percentage of points when the number of samples is around $10$. In order to enhance the resilience of the abstract model to such statistical tests we should increase the number of samples per point in the training set. 
Increasing the number of samples per point, however, comes at the cost of reducing the number of initial settings, so that the resulting training set does not grow too large. In this regard, the active learning technique proposed in Section \ref{sec:satisf} can be extremely beneficial. \section{cWCGAN-GP Algorithm}\label{sec:algorithm} \begin{algorithm}[ht] \KwData{The gradient penalty coefficient $\lambda$, the number of epochs $n_{epochs}$, the number of critic iterations per generator iteration $n_{critic}$, the batch size $m$, Adam hyper-parameters $\alpha, \beta_1, \beta_2$.} \For{$e=1\dots, n_{epochs}$}{ \For{$t=1\dots, n_{critic}$}{ \For{$i=1\dots, m$}{ Sample real data $(y, x)\sim P_r$, latent variable $z \sim p(z)$, a random number $\epsilon\sim U[0, 1]$\; $\tilde{x}\leftarrow G_{w_g}(z, y)$\; $\hat{x}\leftarrow \epsilon x +(1-\epsilon)\tilde{x}$\; $L^{(i)}\leftarrow D_{w_c}(\tilde{x}, y)-D_{w_c}(x, y)+\lambda(\parallel \nabla_{\hat{x}} D_{w_c}(\hat{x},y)\parallel_2-1)^2$\; } { $w_c\leftarrow$ Adam $\left(\nabla_{w_c} \tfrac{1}{m}\sum_{i=1}^m L^{(i)}, w_c, \alpha, \beta_1, \beta_2\right)$\; } } { Sample a batch of latent variables $\{z^{(i)}\}_{i=1}^m\sim p(z)$\ and a batch of random conditions $\{y^{(i)}\}_{i=1}^m\sim p(y)$\; $w_g\leftarrow $ Adam $\left(\nabla_{w_g} \tfrac{1}{m}\sum_{i=1}^m -D_{w_c}(G_{w_g}(z^{(i)}, y^{(i)}), y^{(i)}), w_g, \alpha, \beta_1, \beta_2 \right)$ } {} } \caption{Conditional WGAN with gradient penalty. Default values used for hyper-parameters: $\lambda = 10$, $n_{critic} = 5$, $\alpha = 0.0001$, $\beta_1 = 0.5$, $\beta_2 = 0.9$. Variable $x$ denotes the trajectories of length $H$ ($\eta_{[1,H]}$), whereas variable $y$ denotes the condition, i.e., the initial setting of the system (pairs $(s_0, \theta)$). } \end{algorithm} \section{Background}\label{sec:background} \subsection{Chemical Reaction Networks} Consider a system with $n$ species evolving according to a stochastic model defined as a Chemical Reaction Network.
Under the well-stirred assumption, the time evolution can be modelled as a Continuous Time Markov Chain (CTMC) on a discrete state space. The vector $\eta_t = (\eta_{t,1},\dots,\eta_{t,n})\in S\subseteq\mathbb{N}^n$ denotes the state vector at time $t$, where $\eta_{t,i}$ is the number of individuals in species $i$ at time $t$. The dynamics is encoded by a set of $m$ reactions with parametric propensity functions that depend on the state of the system. Due to the memoryless property of CTMCs, the time evolution of the probability of finding the system in state $s$ at time $t$, given that it was in state $s_0$ at time $t_0$, can be expressed as a system of ODEs known as the Chemical Master Equation (CME). Since in general the CME is a system with countably many differential equations, its analytic or numeric solution is almost always infeasible. An alternative computational approach is to generate trajectories using stochastic simulation algorithms, like the well-known Gillespie's SSA~\cite{gillespie1977exact}, which produces statistically correct trajectories, i.e., trajectories sampled according to the stochastic process described by the CME. \vspace{-0.2cm} \subsection{Generative Adversarial Nets}\label{sec:gan_intro} Every dataset can be considered as a set of observations drawn from an unknown distribution $\mathbb{P}_r$. Generative models aim at learning a model that mimics this unknown distribution as closely as possible, i.e., at learning a distribution $\mathbb{P}_{w_g}$ as similar as possible to $\mathbb{P}_r$, in order to then draw samples from it that are new but look as if they belonged to the original dataset. Generative Adversarial Nets (GANs)~\cite{goodfellow2014generative} are deep learning-based generative models that, given a dataset, are capable of generating new random but plausible examples.
\paragraph{Wasserstein GAN.} In this work we consider the Wasserstein version of GAN (WGAN)~\cite{arjovsky2017wasserstein,gulrajani2017improved}, as it is known to be more stable and less sensitive to the choice of model architecture and hyperparameters than a traditional GAN. WGANs use the Wasserstein distance (also known as Earth-Mover's distance), rather than the Jensen-Shannon divergence, to measure the difference between the model distribution $\mathbb{P}_{w_g}$ and the target distribution $\mathbb{P}_r$. By the Kantorovich-Rubinstein duality \cite{villani2008optimal}, this distance can be computed as the supremum over all the 1-Lipschitz functions $f : S \rightarrow \mathbb{R}$: \begin{equation}\label{eq:wassdist} \small W(\mathbb{P}_r,\mathbb{P}_{w_g}) = \sup_{||f||_L\le 1} \left( \mathbb{E}_{x\sim\mathbb{P}_{r}}[f(x)]-\mathbb{E}_{x\sim\mathbb{P}_{w_g}}[f(x)] \right). \end{equation} We approximate these functions $f$ with a neural net $C_{w_c}$ parametrized by weights $w_c$. To enforce the Lipschitz constraint we follow \cite{gulrajani2017improved} and introduce a penalty over the norm of the gradients, exploiting the fact that a differentiable function is 1-Lipschitz if and only if it has gradients with norm at most 1 everywhere. The objective function, to be maximized w.r.t. $w_c$, becomes: \begin{equation}\label{eq:wassdist_gp} \small \mathcal{L}({w_c}, w_g) := \mathbb{E}_{x\sim\mathbb{P}_{r}}[C_{w_c}(x)]-\mathbb{E}_{x\sim\mathbb{P}_{w_g}}[C_{w_c}(x)]-\lambda\, \mathbb{E}_{\hat{x}\sim\mathbb{P}_{\hat{x}}}\left[ ( \lVert \nabla_{\hat{x}}C_{w_c} (\hat{x}) \rVert_2-1 )^2\right] , \end{equation} where $\lambda$ is the penalty coefficient and $\mathbb{P}_{\hat{x}}$ is defined by sampling uniformly along straight lines between pairs of points sampled from $\mathbb{P}_r$ and $\mathbb{P}_{w_g}$. This is a softer constraint than exact Lipschitz enforcement, but it performs well in practice~\cite{gulrajani2017improved}.
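To make the gradient-penalty term concrete, the framework-agnostic sketch below evaluates it for a toy critic whose gradient is known in closed form; in an actual implementation the gradient would come from automatic differentiation (e.g. PyTorch's \texttt{torch.autograd.grad}). All names and numbers here are illustrative:

```python
import numpy as np

def gradient_penalty(critic_grad, real, fake, lam=10.0, rng=None):
    """Penalty term lam * E[(||grad C(x_hat)||_2 - 1)^2], where x_hat is
    sampled uniformly on straight lines between real and generated points."""
    rng = rng or np.random.default_rng(0)
    eps = rng.uniform(size=(real.shape[0], 1))          # one epsilon per pair
    x_hat = eps * real + (1.0 - eps) * fake             # interpolated samples
    norms = np.linalg.norm(critic_grad(x_hat), axis=1)  # ||grad C(x_hat)||_2
    return lam * np.mean((norms - 1.0) ** 2)

# A linear critic C(x) = w . x has constant gradient w, so the penalty
# reduces to lam * (||w|| - 1)^2, independently of the sampled points.
w = np.array([0.6, 0.8])                                # ||w|| = 1: exactly 1-Lipschitz
penalty = gradient_penalty(lambda x: np.tile(w, (x.shape[0], 1)),
                           real=np.random.randn(32, 2),
                           fake=np.random.randn(32, 2))
# penalty vanishes (up to floating-point error) for this 1-Lipschitz critic
```

The toy example shows why the penalty drives the critic towards unit gradient norm: any deviation of $\lVert w\rVert$ from $1$ is penalized quadratically.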
The $C_{w_c}$ network is referred to as \textit{critic}: it outputs different scores for real and fake samples, and its objective function (Eq.~\eqref{eq:wassdist_gp}) provides an estimate of the Wasserstein distance between the two distributions. On the other hand, the distribution $\mathbb{P}_{w_g}$ is parametrized by $w_g$; we seek the parameters that make it as close as possible to $\mathbb{P}_r$. To achieve this, we consider a random variable $Z$ with a fixed simple distribution $\mathbb{P}_Z$ and pass it through a parametric function, the \textit{generator}, $G_{w_g} : Z \rightarrow S$, that generates samples following the distribution $\mathbb{P}_{w_g}$. Therefore, the WGAN architecture consists of two deep neural nets: a generator that proposes a distribution and a critic that estimates the distance between the proposed and the real (unknown) distribution. Using a WGAN brings several important advantages over a traditional GAN: it avoids the mode-collapse problem, which makes it more suitable for capturing stochastic dynamics; it drastically reduces the problem of vanishing gradients; and its objective function correlates with the quality of the generated samples, making the results easier to interpret. \paragraph{Conditional GAN.} Conditional Generative Adversarial Nets (cGANs)~\cite{mirza2014conditional} are a type of GAN that involves the conditional generation of examples, i.e., the generator produces examples of a required type, e.g. examples that belong to a certain class, and thus introduces control over the desired generated output. In our application, we want the generation of stochastic trajectories to be conditioned on some model parameters and on the initial state of the system. Furthermore, dealing with inputs that are trajectories, i.e. sequences of fixed length, calls for the use of convolutional neural networks (CNNs)~\cite{goodfellow2016deep} for both the generator and the critic.
The architecture used in this work is thus a conditional Wasserstein Convolutional GAN with gradient penalty, referred to in the following as cWCGAN-GP. \section{Experimental Results}\label{sec:experiments} In this section we validate our GAN-based model abstraction procedure on the following case studies. More details are provided in Appendix~\ref{sec:casestudies}. \begin{itemize} \item \textbf{SIR Model (Absorbing state).} The SIR epidemiological model describes the spread, in a population, of an infectious disease that grants immunity to those who recover from it. The population is divided into three mutually exclusive groups: susceptible (S), infected (I) and recovered (R). The possible reactions, given by the interaction of individuals, are infection and recovery. An important feature is the presence of absorbing states. \item \textbf{Ergodic SIRS Model.} An SIR model in which the population is not perfectly isolated, meaning there is always a chance of getting infected by some external individuals, and in which immunity is only temporary. As a consequence, this model has no absorbing state. \item \textbf{Genetic Toggle Switch Model (Bistability).} The toggle switch is a well-known bistable biological circuit consisting of two genes, $G_1$ and $G_2$, that mutually repress each other in the production of proteins $P_1$ and $P_2$ respectively. The system displays two stable equilibria. \item \textbf{Oscillator Model.} The circuit consists of three species A, B and C and three cyclic reactions: A converts B to itself, B converts C to itself, and C converts A to itself. The concentrations of the three species oscillate in time. \item \textbf{MAPK Model.} The mitogen-activated protein kinase cascade models the amplification of an output signal ($\it MAPK\_PP$) through a multi-level cascade with negative feedback which is ultra-sensitive to an input stimulus ($V_1$). The output signal shows either stable or oscillating behaviour, depending on the input signal.
\end{itemize} In order to evaluate the performance of our abstraction procedure we consider two important measures: the accuracy of the abstract model, evaluated for each species at each time step of the time grid, and the computational gain compared to SSA simulation time. \paragraph{Experimental Settings.} The workflow can be divided into four steps: (1) define a CRN model, (2) generate the synthetic datasets via SSA simulation, (3) learn the abstract model by training the cWCGAN-GP and, finally, (4) evaluate such abstraction. All the steps have been implemented in Python. In particular, CRN models are defined in the \texttt{.psc} format, CRN trajectories are simulated using StochPy~\cite{maarleveld2013stochpy} (stochastic modeling in Python) and PyTorch~\cite{paszke2017automatic} is used to craft the desired architecture for the cWCGAN-GP and to evaluate the latter on the test data. All the experiments were performed on an Intel Xeon Gold 6140 with 24 cores and 128GB of RAM. The source code for all the experiments can be found at the following link: \url{https://github.com/francescacairoli/WGAN_ModelAbstraction}. \paragraph{Datasets.} For each case study with fixed parameters, the training set consists of $20$K different SSA trajectories. In particular, $N_{train} =2$K and $k_{train}= 10$. The test set, instead, consists of $25$ new initial settings, and from each of these we simulate $2$K trajectories, so as to obtain an empirical approximation of the distribution targeted by model abstraction. When a parameter is allowed to vary, the training set consists of $50$K SSA trajectories ($N_{train} =1$K and $k_{train}= 50$). We manually choose $H$ and $\Delta t$ so that the system is close to steady state at time $H\cdot \Delta t$, without spending too many steps there. The time interval should be small enough to capture the full transient behavior of the system.
For systems with no steady state, such as the oscillating models, we choose $H$ and $\Delta t$ so as to observe a full period of oscillation. The chosen values are the following: SIR: $\Delta t = 0.5$, $H = 16$; e-SIRS: $\Delta t = 0.1$, $H = 32$; Toggle Switch: $\Delta t = 0.1$, $H = 32$; Oscillator: $\Delta t = 1$, $H = 32$; MAPK: $\Delta t = 60$, $H = 32$. \paragraph{Data Preparation.} Data have been scaled to the interval $[-1,1]$ to enhance the performance of the two CNNs and to avoid sensitivity to different scales in species counts. During the evaluation phase, the trajectories have been scaled back. Hence, results and errors are shown in the original scale. \vspace{-0.2cm} \subsection{cWCGAN-GP architecture} The same architecture and the same set of hyper-parameters work well for all the analyzed case studies, showing the great stability and usability of the proposed solution. The Wasserstein formulation of GANs, with gradient penalty, strongly contributes to such stability. Traditional GANs have been tested as well, but they did not exhibit the same robustness. The details of the architecture follow the best-practice suggestions provided in~\cite{gulrajani2017improved}. The critic network has two hidden one-dimensional convolutional layers, with $n+m$ channels, each containing $64$ filters of size $4$ and stride $2$. We use a leaky-ReLU activation function with slope $0.2$, we apply layer normalization and at each layer we introduce dropout with probability $0.2$. An additional dense layer, with linear activation function, is used to connect the single output node, which contains the critic value. In order to enforce the Lipschitz constraint on the critic we add a gradient penalty term, as described in Section~\ref{sec:gan_intro}. On the other hand, the generator network takes as input the noise and the initial settings and embeds the inputs in a larger space with $N_{ch}$ channels ($512$ in our experiments) through a dense layer.
Four one-dimensional convolutional transpose layers are then inserted, containing respectively $128$, $256$, $256$ and $128$ filters of size $4$ with stride $2$. Here we apply batch normalization and use a leaky-ReLU activation function with slope $0.2$. Finally, a traditional convolutional layer is introduced to reduce the number of output channels to $n$. The Adam algorithm~\cite{bengio2015rmsprop} is used to optimize the loss functions of both the critic and the generator. The learning rate is set to $0.0001$ and $(\beta_1, \beta_2) = (0.5, 0.9)$. The above settings are shared by all the case studies; the only exception is the more complex MAPK model, for which a deeper cWCGAN-GP architecture is selected: a critic with five layers, each containing $256$ filters of size $4$ and stride $2$, and a generator with five layers, containing respectively $128$, $256$, $512$, $256$ and $128$ filters of size $4$ with stride $2$. Training times depend on the dimension of the dataset, on the size of mini-batches, on the number of species, and on the architecture of the cWCGAN-GP; the latter has been kept constant for all the case studies except MAPK. Batches of $256$ samples have been used and the number of epochs varies from $200$ to $500$ depending on the complexity of the model. Moreover, each training iteration of the generator corresponds to $5$ iterations of the critic, to balance the power of the two players. The average time required for each training epoch is around one minute. Therefore, training the cWCGAN-GP model for $500$ epochs takes around $8$ hours leveraging the GPU. \vspace{-0.2cm} \subsection{Results} \paragraph{Computational gain.} The time needed to generate abstract trajectories does not depend on the complexity of the original system. Moreover, as the cWCGAN-GP architecture is shared by all the case studies, the computational time required to generate abstract trajectories is the same for all the case studies.
In particular, considering a noise variable of size $480$, it takes around $1.75$ milliseconds (ms) to simulate a single trajectory. However, when generating batches of at least $200$ trajectories the overhead is amortized and the time to generate a single trajectory stabilizes around $0.8$ ms. The same does not hold for the SSA trajectories, whose computational cost depends on the complexity of the model and on the chosen reaction rates. In the case studies considered, the time required to simulate a single trajectory varies from $0.04$ to $0.22$ seconds, but it easily increases for more complex models or for smaller reaction rates, whereas the cost of abstract simulation stays constant. Details about the computational gain for each model are presented in Table~\ref{table:comp_times}. To ensure a fair comparison, computations are performed exclusively on a single CPU processor. The evaluation of the cWCGAN-GP could be further sped up using GPUs, especially for large batches of trajectories, but this would have introduced a bias in its favour. It is important to stress that GPU parallelization is extremely straightforward in PyTorch and that the time to generate a single trajectory decreases to $1.9 \times 10^{-5}$ seconds when generating a batch of at least $2$K trajectories (see the last line of Table~\ref{table:comp_times}). The training phase introduces a fixed overhead that affects the overall computational gain. For instance, the training phase of the MAPK model takes around $8$ hours, which is equivalent to the time needed to generate $140$K SSA trajectories. It follows that, adding the trajectories needed to generate the training set, the cost of the training procedure is paid off when we simulate at least $200$K trajectories.
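As a sanity check on this amortization argument, the break-even point can be recomputed with back-of-the-envelope arithmetic from the rough figures quoted above (the exact value of course depends on the model and the hardware):

```python
import math

# Rough figures from the text: training the MAPK abstraction takes ~8 h,
# equivalent to ~140K SSA trajectories; the training set itself required
# 50K SSA simulations; batched abstract generation costs ~0.8 ms each.
ssa_per_traj = 8 * 3600 / 140_000                    # ~0.206 s per SSA trajectory
abstract_per_traj = 0.8e-3                           # seconds per abstract trajectory
fixed_overhead = 8 * 3600 + 50_000 * ssa_per_traj    # training time + training set

# Smallest n for which: fixed_overhead + n * abstract_per_traj < n * ssa_per_traj
break_even = math.ceil(fixed_overhead / (ssa_per_traj - abstract_per_traj))
# break_even is on the order of 190K trajectories, consistent with the
# ~200K figure reported above.
```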
In a typical biological multi-scale scenario, in which we seek to simulate the evolution in time of a tissue containing millions of cells, each with additional internal pathways, the number of trajectories needed for the training phase becomes negligible and the training time is soon paid off. \input{table} \paragraph{Measures of performance.} Results are presented as follows. For each model, we present a small batch of trajectories, both real and abstract. From the plots of such trajectories we can assess whether the abstract trajectories are similar to the real ones and whether they capture the most important macroscopic behaviors. We also show the histograms of the empirical distributions at time $t_H$ for each species, to quantify the behavior over all the $2$K trajectories present in the test set (see Fig.~\ref{fig:sir_trajectories}-~\ref{fig:mapk_trajectories}). Additional plots are shown in Appendix~\ref{sec:plots} (Fig.~\ref{fig:sir_trajectories_extra}-~\ref{fig:clock_trajectories_extra}). \begin{figure}[ht] \centering \subfigure[eSIRS]{ \includegraphics[scale=0.235]{imgs/eSIRS/eSIRS_avg_wass_distance_1000epochs_32steps.png}} \hfill \subfigure[eSIRS-1P]{ \includegraphics[scale=0.235]{imgs/eSIRS_1P/Scaled_avg_wass_distance_1epochs_32steps.png}} \hfill \subfigure[SIR]{ \includegraphics[scale=0.235]{imgs/SIR/Scaled_avg_wass_distance_500epochs_16steps.png}} \vspace{-0.25cm} \subfigure[Oscillator]{ \includegraphics[scale=0.235]{imgs/Oscillator/Oscillator_avg_wass_distance_1000epochs_32steps.png}} \hfill \subfigure[Toggle Switch]{ \includegraphics[scale=0.235]{imgs/TS/ToggleSwitch_avg_wass_distance_2000epochs_32steps.png}} \hfill \subfigure[MAPK]{ \includegraphics[scale=0.235]{imgs/MAPK/MAPK_avg_wass_distance_1500epochs_32steps_scaling.png}} \vspace{-1\baselineskip} \caption{Plots of the error over time for each model and each species. Errors are computed using the Wasserstein distance over the entire test set.
Generated trajectories have been kept scaled to the interval $[-1,1]$, so that the scale of the system does not affect the scale of the error measure.\vspace{-0.5cm}}\label{fig:wass_errors} \end{figure} \paragraph{Measuring error propagation.} The reconstruction accuracy of the proposed abstraction procedure is evaluated on test sets consisting of $25$ different initial settings. For each of these points, $2$K SSA trajectories represent the empirical approximation of the true distribution over $S^H$. From each of these initial settings we also simulate $2$K abstract trajectories. Given a species $i\in\{1,\dots n\}$ and a time step $j\in\{1,\dots H\}$, we have the real one-dimensional distribution $\eta_{i,j}$ and the generated abstract distribution $\hat{\eta}_{i,j}$, where $\eta_{i,j}$ denotes the counts of species $i$ at time $t_j$ in a trajectory $\eta_{[1,H]}$. In order to quantify the reconstruction error, we compute five quantities: the Wasserstein distance between the two one-dimensional distributions, the absolute and relative differences between the two means, and the absolute and relative differences between the two variances. By doing so, we can see whether the error propagates in time and whether some species are harder to reconstruct than others. The error plots for the Wasserstein distance are shown in Figure~\ref{fig:wass_errors}. Plots of the distances between means and variances are provided in Appendix~\ref{sec:plots} (Fig.~\ref{fig:avg_means_errors}-~\ref{fig:avg_vars_rel_errors}). In addition, for the two-dimensional models, i.e. eSIRS, Toggle Switch and MAPK, we show the landscapes of these five measures of the reconstruction error at three different time steps: step $t_1$, step $t_{H/2}$ and step $t_H$ (Fig.~\ref{fig:eSIRS_distance_landscapes}-\ref{fig:mapk_distance_landscapes} in Appendix~\ref{sec:plots}). We observe that, in all the models, each species seems to contribute equally to the global error and, in general, the error stays constant w.r.t.
time, i.e., it does not propagate. This was a major concern in previous methods, based on the abstraction of transition kernels: in order to simulate a trajectory of length $H$, the abstract kernel has to be applied iteratively $H$ times, so that the error introduced in the approximation of the transition kernel propagates along the trajectory. \begin{figure}[ht] \centering \includegraphics[scale=0.25]{imgs/SIR/SIR_Trajectories14.png} \includegraphics[scale=0.25]{imgs/SIR/SIR_hist_comparison_last_timestep_14.png} \vspace{-1\baselineskip} \caption{SIR model: \textbf{(left)} comparison of trajectories generated with a cWCGAN-GP (orange) and trajectories generated with the SSA algorithm (blue); \textbf{(right)} comparison of the real and generated histograms at the last timestep. Performance on a randomly chosen test point represented by three trajectories: the top one (species S), the central one (species I) and the bottom one (species R).\vspace{-0.5cm}} \label{fig:sir_trajectories} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.25]{imgs/eSIRS_1P/eSIRS_1P_Trajectories0.png} \includegraphics[scale=0.25]{imgs/eSIRS_1P/eSIRS_1P_hist_comparison_last_timestep_0.png} \includegraphics[scale=0.25]{imgs/eSIRS_1P/eSIRS_1P_Trajectories1.png} \includegraphics[scale=0.25]{imgs/eSIRS_1P/eSIRS_1P_hist_comparison_last_timestep_1.png} \vspace{-1\baselineskip} \caption{e-SIRS model with one varying parameter: \textbf{(left)} comparison of trajectories generated with a cWCGAN-GP (orange) and trajectories generated with the SSA algorithm (blue); \textbf{(right)} comparison of the real and generated histograms at the last timestep.} \label{fig:esir_trajectories_one_par} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.25]{imgs/TS/ToggleSwitch_Rescaled_Trajectories0_tight.png} \includegraphics[scale = 0.25]{imgs/TS/ToggleSwitch_rescaled_hist_comparison_-1th_timestep_0_tight.png} \vspace{-1\baselineskip} \caption{Toggle Switch model: \textbf{(left)} comparison of trajectories generated with a cWCGAN-GP (orange) and trajectories generated with the SSA algorithm (blue); \textbf{(right)} comparison of the real and generated histograms at the last timestep. Performance for a randomly chosen test point represented by a pair of trajectories: the top one (species P1) and the bottom one (species P2).\vspace{-0.5cm}} \label{fig:ts_trajectories} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.25]{imgs/Oscillator/Oscillator_Rescaled_Trajectories2.png} \includegraphics[scale=0.25]{imgs/Oscillator/Oscillator_rescaled_hist_comparison_-1th_timestep_2.png} \vspace{-1\baselineskip} \caption{Oscillator model: \textbf{(left)} comparison of trajectories generated with a cWCGAN-GP (orange) and trajectories generated with the SSA algorithm (blue); \textbf{(right)} comparison of the real and generated histograms at the last timestep. Performance on a randomly chosen test point represented by three trajectories: the top one (species A), the central one (species B) and the bottom one (species C).
\vspace{-0.5cm} }\label{fig:clock_trajectories} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.19]{imgs/MAPK/MAPK_Rescaled_Trajectories8.png} \includegraphics[scale=0.19]{imgs/MAPK/MAPK_Rescaled_Trajectories15.png} \includegraphics[scale=0.19]{imgs/MAPK/MAPK_Rescaled_Trajectories22.png} \includegraphics[scale=0.19]{imgs/MAPK/MAPK_rescaled_hist_comparison_-1th_timestep_8.png} \includegraphics[scale=0.19]{imgs/MAPK/MAPK_rescaled_hist_comparison_-1th_timestep_15.png} \includegraphics[scale=0.19]{imgs/MAPK/MAPK_rescaled_hist_comparison_-1th_timestep_22.png} \vspace{-1\baselineskip} \caption{MAPK model: \textbf{(top)} comparison of trajectories generated with a cWCGAN-GP (orange) and trajectories generated with the SSA algorithm (blue); \textbf{(bottom)} comparison of the real and generated histograms at the last timestep. Performance on three, randomly chosen, test points. Each point is represented by the output species MAPK\_PP. \vspace{-0.5cm} }\label{fig:mapk_trajectories} \end{figure} \textbf{SIR.} The results for the SIR model are presented in Fig.~\ref{fig:sir_trajectories} and Fig.~\ref{fig:sir_trajectories_extra} (Appendix~\ref{sec:plots}), which show the performance on two, randomly chosen, test points. Each point is represented by three trajectories: the top one is for species S, the central one for species I and the bottom one for species R. The population size, given by $S+I+R$, is variable. The abstraction was trained on a dataset with fixed parameters, $\theta = \{3,1\}$. Likewise, in the test set only the initial states are allowed to vary. We observe that our abstraction method is able to capture the absorbing nature of SIR trajectories: it is indeed very important that, once the state $I=0$ or $R=N$ is reached, the system does not escape from it. Abstract trajectories satisfy this property without requiring the imposition of any additional constraint.
The empirical distributions, real and generated, at time $t_H$ are almost indistinguishable. \textbf{e-SIRS.} The e-SIRS model represents our baseline. We train two abstractions: in the first case the model is trained on a dataset with fixed parameters, $\theta=\{2.36, 1.67, 0.9, 0.64\}$, and in the second case we let parameter $\theta_1$ vary as well. Results are very accurate in both scenarios. In the fixed-parameters case, Fig.~\ref{fig:esir_trajectories} (Appendix~\ref{sec:plots}), the results are shown for two, randomly chosen, initial states. In the second case, Fig.~\ref{fig:esir_trajectories_one_par}, the results are shown for two, randomly chosen, pairs $(s_0,\theta_1)$. Each point is represented by a pair of trajectories: the top one is for species S and the bottom one for species I. We performed a further analysis of the generalization capabilities of the abstraction learned on the dataset with one varying parameter, using larger test sets and computing the mean and standard deviation of the distribution of Wasserstein distances over such sets. The mean stays around $0.04$ with a tight standard deviation, ranging from $0.01$ to $0.05$, showing little impact of the chosen conditional setting (see Fig.~\ref{fig:esirs_1p_analysis} in Appendix~\ref{sec:plots}). \textbf{Toggle Switch.} The results for the Toggle Switch model, on two, randomly chosen, test points, are shown in Fig.~\ref{fig:ts_trajectories} and Fig.~\ref{fig:ts_trajectories_extra} (Appendix~\ref{sec:plots}). The abstraction was trained on a dataset with fixed symmetric parameters ($kp_i=1,kb_i=1,ku_i=1, kd_i=0.01$ for $i = 1,2$). Likewise, in the test set only the initial states are allowed to vary. In this model, we abstract only the trajectories of the proteins $P1$ and $P2$, which are typically the observable species, ignoring the state of the genes. By doing so, we reduce the dimensionality of the problem but we also lose some information about the full state of the system.
Nonetheless, the cWCGAN-GP abstraction is capable of capturing the bistable behaviour of such trajectories. In Fig.~\ref{fig:ts_trajectories}, each point is represented by two trajectories: the top one is for species $P1$, whereas the bottom one is for species $P2$. \textbf{Oscillator.} The results for the Oscillator model, on two, randomly chosen, test points, are shown in Fig.~\ref{fig:clock_trajectories} and Fig.~\ref{fig:clock_trajectories_extra} (Appendix~\ref{sec:plots}). The abstraction was trained on a dataset with a fixed parameter ($\theta = 1$). Likewise, in the test set only the initial states are allowed to vary. Each point is represented by three trajectories: the top one is for species $A$, the central one for species $B$ and the bottom one for species $C$. The abstract trajectories accurately capture the oscillating behaviour of the system. \textbf{MAPK.} The results for the MAPK model, on three, randomly chosen, test points, are shown in Fig.~\ref{fig:mapk_trajectories}. The abstraction was trained on a dataset considering only a varying $V_1$ parameter and the dynamics of species $MAPK\_PP$. This case study represents a complex scenario in which the abstract distribution should capture the marginalization over the other seven unobserved variables. Moreover, the emergent behaviour of the only observed variable, $MAPK\_PP$, is strongly influenced by the input parameter $V_1$ and further amplified by the multi-scale nature of the cascade: for some values of $V_1$ the system oscillates, whereas for others it stabilizes around an equilibrium. Results show that our abstraction technique is flexible enough to capture such sensitivity. \vspace{-0.2cm} \subsection{Discussion} Previous approaches to model abstraction (see Related work in Section~\ref{sec:introduction}) focus on approximating the transition kernel, i.e., the distribution of possible next states after a time $\Delta t$, rather than learning the distribution of full trajectories of length $H$.
The main reason for this choice is the limited scalability of the tools used for learning the abstraction. In fact, learning a distribution over $S^H\subseteq\mathbb{N}^{H\times n}$ with a Mixture Density Network is infeasible even for small $H$. Moreover, to learn an approximation of the transition kernel one must split the SSA trajectories of the dataset into pairs of subsequent states; by doing so, a lot of information about the temporal correlation among states is lost. Having a tool strong and stable enough to learn distributions over $S^H$ allows us to preserve this information and makes abstraction possible even for systems with complex dynamics, which the abstraction of the transition kernel fails to capture. For instance, we are now able to abstract the transient behaviour of multi-stable or oscillating systems. When attempting to abstract the transition kernel, either via MDN or via cGAN, for such complex systems, we did not succeed in learning meaningful solutions. A collateral advantage of generating full trajectories, rather than single subsequent states, is that it introduces an additional computational speed-up in the time required to generate a large pool of trajectories of length $H$. For instance, if a cWGAN is used to approximate the transition kernel, it takes around $31$ seconds to simulate the $50$K trajectories of length $32$ present in the test set; our trajectory-based method takes only $3.4$ seconds to generate the same number of trajectories. Furthermore, our cWCGAN-GP was trained with relatively small datasets, which leaves room for further improvements where needed. An additional strength of our method is that one can train the abstract model only on the species that are observable, reducing the complexity of the CRN model while preserving an accurate reconstruction for the species of interest. Once again, this was not possible with transition kernels and it may be extremely useful in real-world applications.
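The contrast between the two simulation schemes can be sketched abstractly as follows, where \texttt{kernel\_model} and \texttt{trajectory\_model} are hypothetical stand-ins for the learned networks:

```python
def simulate_with_kernel(kernel_model, s0, H):
    """Kernel-based abstraction: H sequential calls; the approximation
    error introduced at each step can compound along the trajectory."""
    traj = [s0]
    for _ in range(H):
        traj.append(kernel_model(traj[-1]))  # sample the next state
    return traj

def simulate_with_generator(trajectory_model, s0, H):
    """Trajectory-based abstraction (our setting): a single call emits
    the whole length-H path, so no error accumulates across steps."""
    return [s0] + trajectory_model(s0, H)
```

Beyond avoiding error accumulation, the one-shot scheme replaces $H$ network evaluations per trajectory with a single batched evaluation, which is the source of the speed-up reported above.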
In general, the cWCGAN-GP approximation does not provide any statistical guarantee on the reconstruction error. In addition, the set of observations used to learn the abstraction is rather small, typically $10$ samples for each initial setting. Therefore, it is not surprising that the real and the abstract distributions are not indistinguishable from a statistical point of view, as shown in Appendix~\ref{sec:stat_test}. However, the abstract model is capable of capturing, from the little information provided, the emergent features of the behaviour of the original system, such as multimodality or oscillations. In this regard, formal languages can be used to formalize and check such qualitative properties. In particular, we can check whether the satisfaction probability (of non-rare events) is similar in real and abstract trajectories; examples are shown in Appendix~\ref{sec:satisf}. Furthermore, such quantification of qualitative properties can be used to measure how good the reconstruction is. As future work, we intend to use it as a query strategy for an active learning approach, so that the abstract model is driven in the desired direction. \section{Introduction}\label{sec:introduction} A wide range of complex systems can be modeled as networks of chemical reactions. Stochastic simulation is typically the only analysis approach that scales in a computationally tractable manner with system size, as it avoids the explicit construction of the state space. The well-known Gillespie Stochastic Simulation Algorithm~\cite{gillespie1977exact} is widely used for simulating models, as it samples from the exact distribution over trajectories. This algorithm is effective for systems of moderate complexity, but it does not scale well to systems with many species and reactions, large populations, or internal stiffness. 
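For concreteness, a minimal SSA for a hypothetical single-species birth-death network ($\emptyset \to X$ at constant rate, $X \to \emptyset$ with mass-action propensity; species and rate values here are our own illustrative choices, not one of the paper's case studies) can be written as:

```python
import random

def gillespie_ssa(x0, rates, t_max, rng=random.Random(0)):
    """Minimal Gillespie SSA for a birth-death CRN:
    0 -> X with propensity k_birth, X -> 0 with propensity k_death * x."""
    k_birth, k_death = rates
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while t < t_max:
        a1 = k_birth           # propensity of the birth reaction
        a2 = k_death * x       # propensity of the death reaction
        a0 = a1 + a2           # total exit rate of the current state
        if a0 == 0.0:
            break
        t += rng.expovariate(a0)      # exponential waiting time to next event
        if t > t_max:
            break
        if rng.random() * a0 < a1:    # pick a reaction proportionally to propensity
            x += 1
        else:
            x -= 1
        trajectory.append((t, x))
    return trajectory

traj = gillespie_ssa(x0=10, rates=(1.0, 0.1), t_max=50.0)
```

The cost of one run grows with the number of reaction events, which is why the exact algorithm struggles with large populations and stiff systems.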
In these scenarios, a more effective choice is to rely on approximate simulation algorithms such as tau-leaping~\cite{cao2006efficient} and hybrid simulation~\cite{pahle2009biochemical}. Nonetheless, when the number of simulations required is extremely large and possibly costly, e.g.\ when one needs to simulate a large population of heterogeneous cells in a multi-scale model of a tissue, or many heterogeneous individuals in a population ecology scenario, all these methods become extremely computationally demanding, even for HPC facilities. A viable approach to address this problem is model abstraction, which aims at reducing the underlying complexity of the model and thus its simulation cost. However, building effective model abstractions is difficult, requiring much ingenuity and manpower. Here we advocate learning an abstraction from simulation data. Our strategy is to frame model abstraction as a supervised learning problem and learn an abstract probabilistic model using state-of-the-art deep learning. The probabilistic model should then be able to generate approximate trajectories efficiently and in constant time, i.e., independently of the complexity of the original system, thus substantially reducing the simulation cost. \paragraph{Related work.} The idea of using machine learning as a model abstraction tool to approximate and simplify the dynamics of a Markov Population Process has received some attention in recent years. In~\cite{bortolussi2018deep} the authors use a Mixture Density Network (MDN)~\cite{bishop2006pattern} to approximate the transition kernel of the stochastic process. In~\cite{petrov2020automated} the authors extend this approach by introducing an automated search for the MDN architecture that best fits the data. In~\cite{bortolussi2019bayesian} the authors present a Bayesian model abstraction technique, based on Dirichlet Processes, that allows the quantification of the reconstruction uncertainty. 
In all cases, what is learned is an approximate transition kernel, i.e., the probability distribution of a single simulation step. In this paper we address a more general and more complex problem: instead of learning an approximate transition kernel, we learn the distribution of an entire trajectory of fixed length. This problem is not solvable with any of the previously adopted approaches, and its major goal is to keep the abstraction error under control. In fact, training the abstract model on full trajectories, rather than on pairs of subsequent states, allows it to retain and capture more information about the dynamics of the Markov process. \paragraph{Contributions.} Our approach leverages Generative Adversarial Nets (GANs), one of the strongest and most flexible techniques for learning probabilistic models. The GAN-based model abstraction technique is capable of learning a conditional distribution over the trajectory space, taking into account the correlation, both spatial and temporal, among the different species, and conditioning both on initial states and on model parameters. All the previous approaches focus on learning the distribution of the state of the system after a time $\Delta t$, the so-called \textit{transition kernel}. However, such approaches perform poorly when the time interval is small and the dynamics is transient, showing a clear propagation of the error as the approximate kernel is applied iteratively to form a trajectory. Furthermore, producing a full trajectory further reduces the computational cost of simulating a large pool of trajectories for different initial settings. \paragraph{Paper structure.} The paper is organized as follows: Section~\ref{sec:background} introduces the relevant background notions, Section~\ref{sec:abstraction} describes the abstraction procedure in detail, and Section~\ref{sec:experiments} presents the case studies and the experimental evaluation. 
Conclusions are drawn in Section~\ref{sec:conclusions}. \section{Conclusions}\label{sec:conclusions} In this paper we presented a technique to abstract the simulation of stochastic trajectories for various CRNs. The WGAN-based abstraction considerably improves computational efficiency, which is no longer tied to the complexity of the underlying CRN. This is extremely helpful in all those applications in which a large number of simulations is required, i.e., applications that are infeasible via SSA simulation. It would enable the simulation of multi-scale models for very large populations, it would speed up statistical model checking~\cite{younes2006statistical}, and it can be used in particular cases of parameter estimation, for example when only a few parameters have to be estimated multiple times. In conclusion, the c-WCGAN-based solution to model abstraction performs well in very complex and challenging scenarios, requiring relatively little data and very little fine-tuning. As future work, we plan to study how our abstraction technique works on real data. In this regard, we do not aim at capturing the underlying dynamical system, but rather at reproducing the trajectories observed in real applications. A great strength of our method, compared to state-of-the-art solutions, is that it can generate trajectories only for a subset of the species in the system, ignoring the information that is not observable, even during the training phase. Another interesting extension is to adapt our technique to sample bridging trajectories, where both the initial and the terminal states are fixed. Typically, the simulation of such trajectories requires expensive Monte Carlo simulations, which makes the benefits of model abstraction clear. 
{\footnotesize \noindent\textbf{Acknowledgements} This work has been partially supported by the Italian PRIN project ``SEDUCE'' n.\ 2017TWRCNB.} \bibliographystyle{splncs04}
\section{Introduction} \label{sec:introduction} Twitter is a popular online social media platform which was released in 2006. Individuals can sign up for a Twitter account to view and publish content of their interest. As reported by Statista\footnote{\url{https://www.statista.com/}}, the number of daily active Twitter users in the United States was over 35 million in the second quarter of 2020\footnote{\url{https://www.statista.com/statistics/970911/monetizable-daily-active-twitter-users-in-the-united-states/}}. Twitter has become not only an essential social platform in people's daily life but also an information publishing venue. The open nature and widespread popularity of Twitter have made it an ideal target of exploitation by automated programs, also known as bots. These bot accounts are often operated to achieve malicious goals. Bots have been actively involved in many important events, including the elections in the United States and Europe~\cite{10.1145/3308560.3316486, DBLP:journals/corr/Ferrara17aa}. Bots are also responsible for spreading fake news and propagating extreme ideology~\cite{berger2015isis}. These malicious bots try to hide their automated nature by imitating the behaviors of normal users. Across the whole Twittersphere, it is reported that bots account for 9\% to 15\% of total active users~\cite{yardi2010detecting}. Since bots jeopardize user experience on Twitter and may even induce undesirable social effects, many research efforts have been devoted to Twitter bot detection. The first work to detect automated accounts in social media dates back to 2010~\cite{yardi2010detecting}. Early studies conducted feature engineering and adopted traditional classification algorithms. Three categories of features were considered: (1) user property features~\cite{d2015real}; (2) features derived from tweets~\cite{miller2014twitter}; and (3) features extracted from neighborhood information~\cite{yang2013empirical}. 
Later, researchers began to propose neural network based bot detection frameworks. Wei \textit{et al.}~\cite{wei2019twitter} adopted long short-term memory networks to extract semantic information from tweets. Kudugunta \textit{et al.}~\cite{kudugunta2018deep} proposed a method that combined feature engineering with neural network models. Heuristic methods for bot detection were also put forward recently. Minnich \textit{et al.}~\cite{minnich2017botwalk} proposed a bot detection method based on anomaly detection. Cresci \textit{et al.}~\cite{cresci2016dna} encoded tweets into strings to find out how humans and bots differ in their tweeting behaviors. Despite early successes, the ever-shifting social media landscape brought two new challenges to the task of bot detection: generalization and adaptation. The challenge of generalization demands that bot detectors simultaneously identify bots that attack in many different ways and exploit diversified features on Twitter. Cresci \textit{et al.}~\cite{cresci2017paradigm} point out that Twitter bots attack in different ways, such as retweet fraud, malicious hashtag promotion and URL spamming. They also imitate the tweeting behaviour of different types of genuine users, fill out profile items differently and follow each other to boost their follower counts. Since Twitter bots are indeed becoming more diversified, a robust Twitter bot detector should address the challenge of generalization to have real-world impact. However, previous bot detection methods fail to generalize since they only leverage limited user information and are trained on datasets with few types of bots. Apart from that, the challenge of adaptation demands that bot detectors maintain desirable performance over time and catch up with rapid bot evolution. 
Cresci \textit{et al.}~\cite{10.1145/3409116}'s investigation shows that bots in the past used to be simple and easily identified, possessing too little profile and friend information to appear genuine. However, more recently evolved bots have large numbers of friends and followers, use stolen profile pictures and intersperse malicious tweets with neutral ones. These newly evolved bots often evade existing detection measures; thus a robust bot detector should address the challenge of adaptation to put an end to the arms race between bot evolution and bot detection research. However, previous bot detection measures rely heavily on feature engineering and are not designed to adapt to emerging trends in bot evolution. In light of these two challenges of Twitter bot detection, we propose a novel framework SATAR (\textbf{S}elf-supervised \textbf{A}pproach to \textbf{T}witter \textbf{A}ccount \textbf{R}epresentation learning). SATAR adopts self-supervised learning to obtain user representations and identify bots on social media. Specifically, SATAR jointly encodes tweet, property and neighborhood information of users without feature engineering to promote generalization in bot detection. SATAR follows a pre-training and fine-tuning learning schema to adapt to different generations of bots. Our main contributions are summarized as follows: \begin{itemize} [topsep=4pt, leftmargin=*] \item We propose a novel framework SATAR to conduct generalizable and adaptable Twitter bot detection. SATAR is an end-to-end framework that jointly uses semantic, property and neighborhood information of users without feature engineering. \item To the best of our knowledge, this paper is the first work to introduce self-supervised representation learning to improve the performance of bot detection. \item We conduct extensive experiments on three real-world datasets to evaluate SATAR and competitive baselines. 
SATAR outperforms the baselines on all three datasets and is shown, through further exploration, to generalize and adapt. \end{itemize} \noindent In the following, we first review related work in Section~\ref{sec:relatedwork} and define the task of Twitter bot detection in Section~\ref{sec:problemdefinition}. Next, we present SATAR in Section~\ref{sec:SATAR}, followed by extensive experiments in Section~\ref{sec:experiments}. Finally, we conclude the paper in Section~\ref{sec:conclusion}. \begin{figure*}[h] \centering \includegraphics[width=.95\linewidth]{70.png} \caption{Overview of our proposed self-supervised approach to Twitter account representation learning framework SATAR.} \Description{SATAR architecture in a nutshell} \label{fig:SATAR} \end{figure*} \section{Related Work} \label{sec:relatedwork} In this section, we briefly review the related literature on self-supervised learning and Twitter bot detection. \subsection{Self-Supervised Learning} In order to use unlabeled datasets in a supervised manner, self-supervised learning frames a special learning task: predicting a subset of an entity's information using the rest. As a promising learning paradigm, self-supervised learning has drawn massive attention for its remarkable data efficiency and generalization ability, with many state-of-the-art models following this paradigm~\cite{liu2020self}. Doersch \textit{et al.}~\cite{doersch2017multi} combined several self-supervised tasks to jointly train a network. Zhai \textit{et al.}~\cite{zhai2019s4l} showed that semi-supervised learning can benefit from self-supervised learning. Self-supervised learning has been used in different domains, such as natural language processing~\cite{devlin2018bert, zhang2019hibert}, computer vision~\cite{oord2016conditional, larsson2016learning} and graph analysis~\cite{grover2016node2vec, kipf2016variational}. 
In natural language processing, self-supervised tasks are designed based on upcoming words~\cite{radford2018improving} or whole sentences~\cite{mikolov2013distributed}. Masked language models are also adopted to better attend to the content in general~\cite{devlin2018bert}. In computer vision, adjacent pixels~\cite{oord2016conditional, van2016pixel} and full images~\cite{dinh2014nice, dinh2016density} are similarly used for pretext tasks. In graph analysis, self-supervised tasks are designed based on edge attributes~\cite{dai2018adversarial, tang2015line} or node attributes~\cite{ding2018semi}. \subsection{Twitter Bot Detection} Traditional bot detection methods mainly focused on extracting basic features from user information. Among them, Gao \textit{et al.}~\cite{gao2012towards} used text shingling and incremental clustering to merge spam messages into campaigns for real-time classification. Lee \textit{et al.}~\cite{lee2013warningbird} proposed to use the redirection of URLs in tweets, and Thomas \textit{et al.}~\cite{thomas2011design} focused on the classification of mentioned websites. Other features were also adopted, such as information on the user profile~\cite{lee2011seven}, social networks~\cite{minnich2017botwalk} and the timeline of accounts~\cite{cresci2016dna}. Yang \textit{et al.}~\cite{yang2013empirical} designed several new features to counter the evolution of modern Twitter bots. Cresci \textit{et al.}~\cite{cresci2018reaction} argued that the confrontation between bot detectors and bot operators is a never-ending arms race, and that we should refrain from methods that rely on posterior observations. Neural networks have also been adopted to detect Twitter bots because of their strong learning capability. Wei \textit{et al.}~\cite{wei2019twitter} employed recurrent neural networks to efficiently capture features across tweets. 
Kudugunta \textit{et al.}~\cite{kudugunta2018deep} divided user features into account-level features, such as follower count, and tweet-level features, such as the number of hashtags. Both kinds of features, together with semantic information, are used to set up an LSTM-based bot detection framework. Stanton \textit{et al.}~\cite{stanton2019gans} utilized generative adversarial networks for spam detection to avoid annotation costs and inaccuracies. Alhosseini \textit{et al.}~\cite{ali2019detect} proposed a model based on graph convolutional networks for spam bot detection to leverage both node features and neighborhood information. \section{Problem Definition} \label{sec:problemdefinition} Let $U$ be a Twitter user, consisting of three aspects of user information. Let $T = \{t_i\}_{i=1}^{M}$ be a user's semantic information of $M$ tweets. Each tweet $t_i = \{w_1^i, \cdot \cdot \cdot, w_{Q_i}^i\}$ contains $Q_i$ words. Let $P = \{p_i\}_{i=1}^{R}$ be a user's property information with a total of $R$ properties. Each property $p_i$ may be numerical, such as follower count, or categorical, such as whether the user is verified. Let $N = \{N^f, N^t\}$, where $N^f = \{N_1^f, \cdot \cdot \cdot, N_u^f\}$ are the $u$ followings of the user and $N^t = \{N_1^t, \cdot \cdot \cdot, N_v^t\}$ are its $v$ followers. Similar to previous research~\cite{yang2020scalable, kudugunta2018deep}, we treat Twitter bot detection as a binary classification problem, where each user is either human ($y = 0$) or bot ($y = 1$). Formally, we can define the Twitter bot detection task as follows: \hspace{2pt} \begin{tcolorbox}[ standard jigsaw, opacityback=0, boxrule=0.5pt ] \textbf{Problem: Twitter Bot Detection} Given a Twitter user $U$ and its information $T$, $P$ and $N$, learn a bot detection function $f:f(U(T,P,N)) \rightarrow \hat{y}$, such that $\hat{y}$ approximates ground truth $y$ to maximize prediction accuracy. 
\end{tcolorbox} \section{SATAR Methodology} \label{sec:SATAR} In this section, we present the details of the proposed Twitter user representation learning framework named SATAR (\textbf{S}elf-supervised \textbf{A}pproach to \textbf{T}witter \textbf{A}ccount \textbf{R}epresentation learning). In Section~\ref{subsec:SATARover}, we provide an overview of the proposed framework. In Sections~\ref{subsec:SATARTSSN}--\ref{subsec:SATARCIA}, we formally define the architecture of SATAR and detail its four major components. In Section~\ref{subsec:SATARSSLO}, we describe the self-supervised learning schema and present the overall SATAR training algorithm. \subsection{Overview} \label{subsec:SATARover} Figure~\ref{fig:SATAR} illustrates the proposed framework SATAR. It consists of four major components: (1) a tweet-semantic sub-network, (2) a profile-property sub-network, (3) a following-follower sub-network and (4) a Co-Influence aggregator. Specifically, we use the Twitter API\footnote{\url{https://developer.twitter.com/en/products/twitter-api/early-access}} to obtain relevant data regarding a user's semantic, property and neighborhood information. The tweet-semantic sub-network encodes a Twitter user's textual information into $r_s$ with hierarchical RNNs of different depths accompanied by the attention mechanism. The profile-property sub-network encodes a Twitter user's profile properties into $r_p$ with property data encoding and fully connected layers. The following-follower sub-network encodes a Twitter user's neighborhood relationships into $r_n$ with a neighborhood information extractor and fully connected layers. Finally, a non-linear Co-Influence aggregator takes the correlation between the three aforementioned components into account, generating a representation vector that fully embodies the social status of a specific Twitter user. A softmax layer is then applied for user classification and enables model learning. 
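For illustration, the three aspects of user information that these components consume, as defined in Section~\ref{sec:problemdefinition}, can be sketched as a plain container; the field names below are our own, not part of the Twitter API:

```python
from dataclasses import dataclass

@dataclass
class TwitterUser:
    """One labeled example (T, P, N, y) from the problem definition."""
    tweets: list        # T = {t_i}: each tweet is a list of word tokens
    properties: dict    # P: numerical or categorical profile properties
    followings: list    # N^f: accounts this user follows
    followers: list     # N^t: accounts following this user
    label: int = 0      # y: 0 for human, 1 for bot

u = TwitterUser(
    tweets=[["just", "setting", "up", "my", "twttr"]],
    properties={"verified": True, "favorites_count": 42},
    followings=["u1"],
    followers=["u2", "u3"],
    label=0,
)
```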
\vspace{-5pt} \subsection{Tweet-Semantic Sub-Network} \label{subsec:SATARTSSN} Most previous works on Twitter bot detection have utilized users' tweet content. Initially, hand-picked keywords and feature engineering were pervasive in bot detection endeavors. These approaches extracted information conceived as helpful for bot detection, such as URL count~\cite{AHMED20131120}, hashtag count~\cite{yang2013empirical} and the frequency of spam words~\cite{8508495}. Although perceived as effective to begin with, these approaches were generally abandoned due to the inevitable bias introduced in the feature engineering process. With the advent of deep learning, techniques from natural language processing were adopted to capture the semantic information in a specific user's tweets, showing promising results for bot detection. These efforts treat each tweet as a distinct entity, deemed independent from other tweets when evaluating a Twitter user. However, two characteristics of tweeting behavior weaken such an assumption of independence. Firstly, Twitter currently has a 280-character limit for tweets, which forces longer texts to become a thread, while tweets in a thread often have a coherent meaning. Secondly, a specific user's tweets represent a sequential flow of the user's engagements on social media, but the temporal dependence between different tweets is not considered by existing works. In this paper, we exploit user semantic information at two different levels, tweet-level and word-level, to capture the tweet content of users. Specifically, words in a user's tweets can be fitted into two hierarchical structures. For tweet-level characterization, as defined in Section~\ref{sec:problemdefinition}, $w_i^j$ denotes the $i$-th word in the $j$-th tweet of the user timeline, and $t_j$ represents the $j$-th tweet of a specific user. 
We also concatenate temporally adjacent tweets: $\{w_1, \cdot \cdot \cdot, w_K\} = \{w_1^1, \cdot \cdot \cdot, w_{Q_1}^1, w_1^2, \cdot \cdot \cdot, w^M_{Q_M}\}$, where the total word count $K = \sum_{i=1}^M Q_i$. Thus for word-level characterization, $w_k$ denotes the $k$-th word in the user's tweet history with temporally adjacent tweets concatenated to form a sequence. It is noteworthy that the underlying words are identical between tweet-level and word-level, but their annotations differ according to the user's tweeting behaviors. To jointly leverage user tweet information on these two different levels, we propose tweet-level and word-level encoders of hierarchical RNNs to model tweet text sequences respectively and derive an overall semantic representation for Twitter users. \noindent \textbf{Tweet-Level Encoder.} The tweet-level encoder follows a bottom-up approach. For the $j$-th tweet of a specific user, we first embed words in it with an embedding layer: \vspace{-2pt} \begin{equation} \label{sbegin} x_i^j = emb(w_i^j), 1 \leqslant i \leqslant Q_j, 1 \leqslant j \leqslant M, \end{equation} \vspace{-2pt} \noindent where $Q_j$ is the length of the $j$-th tweet, and we use Word2Vec~\cite{mikolov2013distributed} as the embedding layer $emb(\cdot)$. To encode the tweet, a bidirectional RNN processes the tweet in a forward pass and a backward pass. For the forward pass, a sequence of forward hidden states is generated for the $j$-th tweet: \begin{equation} \overrightarrow{h}^t_j = \bigg[\overrightarrow{h}^t_{j,1}, \overrightarrow{h}^t_{j,2}, \cdot \cdot \cdot, \overrightarrow{h}^t_{j,Q_j}\bigg], \end{equation} \noindent where the hidden representation for each step is generated by \vspace{-2pt} \begin{equation} \overrightarrow{h}^t_{j,i} = RNN\bigg(\overrightarrow{h}^t_{j,i-1}, x_i^j\bigg). \end{equation} \vspace{-2pt} Here we use LSTM~\cite{lstm} as $RNN(\cdot)$, which is widely adopted to model long-term dependencies in a sequence. 
For the backward pass, a sequence of backward hidden states is generated similarly: \begin{equation} \overleftarrow{h}^t_j = \bigg[\overleftarrow{h}^t_{j,1}, \overleftarrow{h}^t_{j,2}, \cdot \cdot \cdot, \overleftarrow{h}^t_{j,Q_j}\bigg]. \end{equation} We concatenate the forward and backward results to form a sequence of word representations in the $j$-th tweet: \begin{equation} h^t_j = \bigg[ h^t_{j,1}, h^t_{j,2},\cdot \cdot \cdot, h^t_{j,Q_j} \bigg], \end{equation} \noindent where $h^t_{j,i}=\bigg[ \overrightarrow{h}^t_{j,i}; \overleftarrow{h}^t_{j,i} \bigg]$. Since words in a tweet vary in their contribution to the tweet's overall semantic meaning, the attention mechanism is adopted to aggregate word hidden representations into a tweet vector. Specifically, \begin{equation} \alpha^t_{j,i} = \frac{exp(u^t_{j,i} \cdot v_l^t)}{\sum_{i'}exp(u^t_{j,i'} \cdot v_l^t)}, \end{equation} \noindent where $u^t_{j,i} = tanh(W_l^t h^t_{j,i}+b_l^t)$ transforms vectors for each word and $v_l^t$, $W_l^t$ and $b_l^t$ are learnable parameters. $\alpha^t_{j,i}$ represents the weight of the $i$-th word in the $j$-th tweet. Finally, the representation of the $j$-th tweet can be obtained as follows: \vspace{-2pt} \begin{equation} v^t_j = \sum_i \alpha^t_{j,i}h^t_{j,i}. \end{equation} After deriving a vector for each tweet, the tweet-level encoder applies RNN similarly to tweet representations $\{v^t_j\}_{j=1}^M$, generating a forward and a backward sequence. We concatenate the forward and backward results to form a sequence of tweet representations: \vspace{-2pt} \begin{equation} h^t = \bigg[ h^t_1, h^t_2, \cdot \cdot \cdot, h^t_M \bigg], \end{equation} \noindent where $h^t_i = \bigg[\overrightarrow{h}^t_i;\overleftarrow{h}^t_i\bigg]$. 
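The attention pooling of word states into a tweet vector described above can be sketched in NumPy as follows; dimensions and the random initializations are illustrative only, standing in for trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
Q, d = 6, 8                      # words in one tweet, hidden size
h = rng.standard_normal((Q, d))  # h_{j,i}: BiRNN word states for this tweet

# Learnable parameters (randomly initialized here for illustration).
W = rng.standard_normal((d, d))
b = rng.standard_normal(d)
v = rng.standard_normal(d)

u = np.tanh(h @ W.T + b)                       # u_{j,i} = tanh(W h_{j,i} + b)
scores = u @ v                                 # u_{j,i} . v
alpha = np.exp(scores) / np.exp(scores).sum()  # softmax attention weights
v_t = alpha @ h                                # tweet vector: weighted sum of word states
```

The same pattern (transform, score against a context vector, softmax, weighted sum) is reused at the tweet level and in the word-level encoder below.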
An attention layer is applied to model the influence each tweet has on the overall semantics of the user: \begin{equation} \alpha^t_i = \frac{exp(u^t_i \cdot v^t_h)}{\sum_{i'}exp(u^t_{i'} \cdot v^t_h)}, \end{equation} \noindent where $u^t_i = tanh(W^t_h h^t_i + b^t_h)$ transforms the vector of each tweet and $v^t_h$, $W^t_h$ and $b^t_h$ are learnable parameters. $\alpha^t_i$ represents the weight of the $i$-th tweet. Finally, the representation of a user's tweet semantics from a tweet-oriented perspective can be obtained as follows: \vspace{-2pt} \begin{equation} r^t_s = \sum_i \alpha^t_i h^t_i. \end{equation} \noindent \textbf{Word-Level Encoder.} The word-level encoder concatenates temporally adjacent tweets into a long sequence of words. For the $i$-th word of the sequence, we first embed it with the embedding layer identical to that of the tweet-level encoder: \begin{equation} x_i = emb(w_i), 1 \leqslant i \leqslant K, \end{equation} \noindent where $K$ is the total word count in the temporally concatenated tweets. A bidirectional RNN with attention is adopted to encode the concatenated sequence. For the forward pass, we have: \begin{equation} \overrightarrow{h}^w = \bigg[ \overrightarrow{h}^w_1,\overrightarrow{h}^w_2,\cdot \cdot \cdot, \overrightarrow{h}^w_K \bigg], \end{equation} \noindent where $\overrightarrow{h}^w_i = RNN(\overrightarrow{h}^w_{i-1}, x_i)$ and LSTM is adopted as $RNN(\cdot)$, given the considerable length of the concatenated sequence. For the backward pass, we have: \begin{equation} \overleftarrow{h}^w = \bigg[ \overleftarrow{h}^w_1,\overleftarrow{h}^w_2,\cdot \cdot \cdot, \overleftarrow{h}^w_K \bigg], \end{equation} \noindent where $\overleftarrow{h}^w_i = RNN(\overleftarrow{h}^w_{i+1}, x_i)$. 
Then we concatenate the forward and backward results to form a sequence of word representations in the user's tweet history: \begin{equation} h^w = \bigg[h^w_1,h^w_2,\cdot \cdot \cdot, h^w_K \bigg], \end{equation} \noindent where $h^w_i = \bigg[\overrightarrow{h}^w_i; \overleftarrow{h}^w_i \bigg]$. Then the attention mechanism is applied: \begin{equation} \alpha^w_i = \frac{exp(u^w_i \cdot v^w)}{\sum_{i'}exp(u^w_{i'} \cdot v^w)}, \end{equation} \noindent where $u^w_i = tanh(W^w h^w_i + b^w)$, $v^w$, $W^w$ and $b^w$ are learnable parameters, $\alpha^w_i$ represents the weight of the $i$-th word in the concatenated sequence. Finally, the representation of a user's tweet semantics from a word-oriented perspective is as follows: \begin{equation} r_s^w = \sum_i \alpha^w_i h^w_i. \end{equation} \vspace{-2pt} \noindent \textbf{Overall Semantic Representation.} The tweet-semantic sub-network produces an overall representation $r_s$ based on the two encoders: \begin{equation} \label{send} r_s = concatenation(r^t_s; r^w_s). 
\end{equation} \begin{algorithm}[!t] \caption{SATAR Learning Algorithm} \label{alg:SATAR} \SetAlgoLined \KwIn{Twitter user dataset $TU$, each user $u \in TU$ has tweets $T$, properties $P$ and neighbors $N$} \KwOut{SATAR-optimized parameters $\theta$} Initialize $\theta$; \\ \For{each user $u \in TU$} { Initialize $r_n(u)$; \\ $u.y \leftarrow$ self-supervised label assignment according to user $u$'s follower count; \\ } \While{$\theta$ has not converged} { \For{each user $u \in TU$} { $r_s(u) \leftarrow$ Equation (\ref{sbegin} - \ref{send}) with $u.T$; \\ $r_p(u) \leftarrow$ Equation (\ref{p}) with $u.P$; \\ $r(u) \leftarrow$ Equation (\ref{cobegin} - \ref{coend}) with $r_s(u)$, $r_p(u)$ and $r_n(u)$; \\ $L_u \leftarrow$ Equation (\ref{lossbegin} - \ref{equ:lossend}) with $r(u)$ and $u.y$; \\ } $\theta \leftarrow$ BackPropagate($L_u$); \\ \For{each user $u \in TU$} { $r_n(u) \leftarrow$ Equation (\ref{nbegin} - \ref{equ:nend}) with $u.N$; } } \end{algorithm} \vspace{-2pt} \subsection{Profile-Property Sub-Network} \label{subsec:SATARPPSN} To avoid the undesirable bias incorporated in feature engineering, the profile-property sub-network utilizes profile properties that could be directly retrieved from the Twitter API. Different encoding strategies are adopted for different types of property data: \begin{itemize}[leftmargin=*] \item There are 15 true-or-false property items in total. We use 1 for true and 0 for false. e.g. “profile uses background image”. \item There are 5 numerical property items in total. We apply z-score normalization to numerical properties over the whole dataset. e.g. “favorites count”. \item There is one special property item: “location”. We divide locations geographically and apply one-hot encoding. \end{itemize} It is noteworthy that the follower count of a specific user would not be included in the property vector, which would be part of the self-supervised learning schema presented in Section ~\ref{subsec:SATARSSLO}. 
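A minimal sketch of the three encoding strategies; the property names, the location vocabulary and the dataset statistics below are hypothetical examples, not the actual item list retrieved from the Twitter API:

```python
import numpy as np

def encode_properties(props, numeric_stats, locations):
    """Encode boolean, numerical (z-score) and location (one-hot) property items."""
    vec = []
    # True-or-false items: 1 for true, 0 for false.
    vec.append(1.0 if props["profile_uses_background_image"] else 0.0)
    # Numerical items: z-score normalized with dataset-wide mean and std.
    mu, sigma = numeric_stats["favorites_count"]
    vec.append((props["favorites_count"] - mu) / sigma)
    # Location: one-hot over a fixed geographic division.
    one_hot = [0.0] * len(locations)
    one_hot[locations.index(props["location"])] = 1.0
    vec.extend(one_hot)
    return np.array(vec)

u_p = encode_properties(
    {"profile_uses_background_image": True, "favorites_count": 120, "location": "EU"},
    numeric_stats={"favorites_count": (100.0, 40.0)},  # (mean, std) over the dataset
    locations=["NA", "EU", "ASIA"],
)
```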
The encoded property items are concatenated to form a raw property vector $u_p$, which is then transformed to produce the Twitter user's property representation $r_p$: \begin{equation} \label{p} r_p = ReLU(FC_p(u_p)), \end{equation} \noindent where $FC_p(\cdot)$ is a fully connected layer and $ReLU(\cdot)$ is the nonlinearity adopted as the activation function. \subsection{Following-Follower Sub-Network} \label{subsec:SATARFFSN} For user followings, according to Twitter's mechanism, their tweets appear in the user's timeline, and following behavior often demonstrates interest in their tweet content. Thus we propose $u_n^{f}$ to model the following relationships: \begin{equation} \label{nbegin} u_n^{f} = \frac{1}{\sum_{u\in N^f}TF(u)}\sum_{u\in N^f} TF(u) r_s(u), \end{equation} \noindent where $N^f$ denotes the following set of a Twitter user, $TF(u)$ denotes the tweet frequency of user $u$ and $r_s(u)$ is the semantic representation of user $u$ generated by the tweet-semantic sub-network. Tweet frequency $TF$ is approximated by a user's total tweet count divided by its account active time, i.e., the time period between the user's registration and its last update. Note that $\frac{TF(u)}{\sum_{u'\in N^f} TF(u')}$ represents the proportion of one's timeline in which user $u$ appears, thus $u_n^f$ serves as a weighted sum of the followings' semantic information according to their relative tweeting frequency. For followers, since the average quality of an account's followers defines its social status and this quality can be evaluated from their properties, we propose to model the follower relationships as follows: \begin{equation} u_n^t = \frac{1}{|N^t|}\sum_{u\in N^t} r_p(u), \end{equation} \noindent where $N^t$ denotes the follower set of a Twitter user, $|\cdot|$ denotes the cardinality of a set and $r_p(u)$ is the property representation of user $u$ generated by the profile-property sub-network. 
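The two aggregations above can be sketched in NumPy; the toy vectors and tweet frequencies are purely illustrative:

```python
import numpy as np

def following_vector(semantic_reps, tweet_freqs):
    """u_n^f: tweet-frequency-weighted mean of followings' semantic vectors r_s(u)."""
    w = np.asarray(tweet_freqs, dtype=float)
    R = np.stack(semantic_reps)   # one r_s(u) row per following
    return (w / w.sum()) @ R      # normalized weights times stacked vectors

def follower_vector(property_reps):
    """u_n^t: unweighted mean of followers' property vectors r_p(u)."""
    return np.stack(property_reps).mean(axis=0)

# Two followings, one tweeting three times as often as the other.
u_nf = following_vector([np.array([1.0, 0.0]), np.array([0.0, 1.0])], [3.0, 1.0])
# Two followers with toy property vectors.
u_nt = follower_vector([np.array([2.0, 0.0]), np.array([0.0, 2.0])])
```

The weighted variant makes a frequently tweeting following dominate $u_n^f$, mirroring its larger share of the user's timeline.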
The following-follower sub-network then produces a raw hidden vector for neighborhood information $u_n = concatenation(u_n^f; u_n^t)$. The intermediate vector is then transformed to produce the Twitter user's neighborhood representation $r_n$: \begin{equation} \label{equ:nend} r_n = ReLU(FC_n(u_n)), \end{equation} \noindent where $FC_n(\cdot)$ is a fully connected layer and $ReLU(\cdot)$ is the adopted activation function. \subsection{Co-Influence Aggregator} \label{subsec:SATARCIA} So far, we have obtained representation vectors for all three aspects of a Twitter user, namely $r_s$, $r_p$ and $r_n$ for tweet semantics, user properties and follow relationships. A good bot detector should be comprehensive and robust to tampering; considering each aspect of user information independently would inevitably jeopardize the robustness of the bot detector. Co-attention has proven successful at modeling the correlation between two sequences, but it is not designed for mutual influence among multiple representation vectors. We therefore propose a Co-Influence aggregator that takes into account the mutual correlation between tweet semantics, user properties and follow relationships. Firstly, the affinity index between each pair of aspects is derived: \begin{equation} \label{cobegin} \begin{aligned} F_{sp} = tanh(r_s^T W_{sp} r_p),\\ F_{pn} = tanh(r_p^T W_{pn} r_n),\\ F_{ns} = tanh(r_n^T W_{ns} r_s), \end{aligned} \end{equation} \noindent where $W_{sp}$, $W_{pn}$ and $W_{ns}$ are learnable parameters of the aggregator. A hidden representation for each aspect, which incorporates relevant information from the other two aspects, is then derived: \begin{equation} \begin{aligned} h^s = tanh(W_sr_s + F_{sp}(W_pr_p) + F_{ns}(W_nr_n)),\\ h^p = tanh(W_pr_p + F_{sp}(W_sr_s) + F_{pn}(W_nr_n)),\\ h^n = tanh(W_nr_n + F_{ns}(W_sr_s) + F_{pn}(W_pr_p)), \end{aligned} \end{equation} \noindent where $W_s$, $W_p$ and $W_n$ are learnable parameters of the aggregator.
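The affinity indices and hidden representations above can be sketched in NumPy; the weight matrices are randomly initialized here purely for illustration, whereas in SATAR they are learned end to end:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                          # representation dimension
r_s, r_p, r_n = (rng.standard_normal(d) for _ in range(3))
W_sp, W_pn, W_ns = (rng.standard_normal((d, d)) for _ in range(3))
W_s, W_p, W_n = (rng.standard_normal((d, d)) for _ in range(3))

# scalar affinity index between each pair of aspects, in (-1, 1)
F_sp = np.tanh(r_s @ W_sp @ r_p)
F_pn = np.tanh(r_p @ W_pn @ r_n)
F_ns = np.tanh(r_n @ W_ns @ r_s)

# hidden representation per aspect, mixing in the other two aspects
h_s = np.tanh(W_s @ r_s + F_sp * (W_p @ r_p) + F_ns * (W_n @ r_n))
h_p = np.tanh(W_p @ r_p + F_sp * (W_s @ r_s) + F_pn * (W_n @ r_n))
h_n = np.tanh(W_n @ r_n + F_ns * (W_s @ r_s) + F_pn * (W_p @ r_p))
```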
Finally, the proposed framework SATAR produces the Twitter user representation $r$ as follows: \begin{equation} \label{coend} r = tanh(W_V\cdot concatenation(h^s;h^p;h^n)), \end{equation} \noindent where $W_V$ is a learnable parameter of the aggregator. \begin{table} \setlength{\tabcolsep}{1pt} \setlength{\abovecaptionskip}{1pt} \caption{Overview of three adopted bot detection datasets.} \label{tab:dataset} \begin{tabular}{c c c c c c c c} \toprule Dataset & \tabincell{c}{User \\ Count} & \tabincell{c}{Human \\ Count} & \tabincell{c}{Bot \\ Count} & \tabincell{c}{S \\ Info} & \tabincell{c}{P \\ Info} & \tabincell{c}{N \\ Info} & \tabincell{c}{Release\\ Year} \\ \midrule \ \ TwiBot-20\ \ & \ \ 229,573 \ \ & \ \ 5,237 \ \ & \ \ 6,589 \ \ & \ \ \checkmark \ \ & \ \ \checkmark \ \ & \ \ \checkmark \ \ & \ \ 2020 \ \ \\ Cresci-17 & 9,813 & 2,764 & 7,049 & \checkmark & \checkmark & & 2017 \\ PAN-19 & 11,378 & 5,765 & 5,613 & \checkmark & & & 2019 \\ \bottomrule \end{tabular} \vspace{-10pt} \end{table} \begin{table*} \caption{Components of Twitter user information used by each bot detection method.} \label{tab:SPN} \begin{tabular}{c c c c c c c c c c c} \toprule & \tabincell{c}{Lee \textit{et}\\ \textit{al.}~\cite{lee2011seven}} & \tabincell{c}{Yang\\ \textit{et al.}~\cite{yang2020scalable}} & \tabincell{c}{Kudugunta\\ \textit{et al.}~\cite{kudugunta2018deep}} & \tabincell{c}{Wei \textit{et}\\ \textit{al.}~\cite{wei2019twitter}} & \tabincell{c}{Miller\\ \textit{et al.}~\cite{miller2014twitter}} & \tabincell{c}{Cresci\\ \textit{et al.}~\cite{cresci2016dna}} & \tabincell{c}{Botometer\\ ~\cite{davis2016botornot}} & \tabincell{c}{Alhosseini\\ \textit{et al.}~\cite{ali2019detect}} & $\rm SATAR_{FC}$ & $\rm SATAR_{FT}$ \\ \midrule $\bf Semantic$ & \checkmark & & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark \\ $\bf Property$ & \checkmark & \checkmark & \checkmark & & \checkmark & & \checkmark & \checkmark & \checkmark & \checkmark \\ 
$\bf Neighbor$ & & & & & & & \checkmark & \checkmark & \checkmark & \checkmark \\ \bottomrule \end{tabular} \end{table*} \begin{table*} \caption{Performance comparison for bot detection methods. “/” denotes insufficient user information to support the baseline.} \label{tab:TwiBotMetric} \begin{tabular}{c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c} \toprule \multicolumn{2}{c}{} & \multicolumn{3}{c}{\tabincell{c}{Lee \textit{et}\\ \textit{al.}~\cite{lee2011seven}}} & \multicolumn{3}{c}{\tabincell{c}{Yang\\ \textit{et al.}~\cite{yang2020scalable}}} & \multicolumn{3}{c}{\tabincell{c}{Kudugunta\\ \textit{et al.}~\cite{kudugunta2018deep}}} & \multicolumn{3}{c}{\tabincell{c}{Wei \textit{et} \\\textit{al.}~\cite{wei2019twitter}}} & \multicolumn{3}{c}{\tabincell{c}{Miller\\ \textit{et al.}~\cite{miller2014twitter}}} & \multicolumn{3}{c}{\tabincell{c}{Cresci\\ \textit{et al.}~\cite{cresci2016dna}}} & \multicolumn{3}{c}{\tabincell{c}{\tabincell{c}{Botometer\\ ~\cite{davis2016botornot}}}} & \multicolumn{3}{c}{\tabincell{c}{Alhosseini\\ \textit{et al.}~\cite{ali2019detect}}} & \multicolumn{3}{c}{$\rm SATAR_{FC}$} & \multicolumn{3}{c}{$\rm SATAR_{FT}$} \\ \midrule \multirow{3}{*}{\textbf{TwiBot-20}} & Acc & \multicolumn{3}{c}{0.7456} & \multicolumn{3}{c}{0.8191} & \multicolumn{3}{c}{0.8174} & \multicolumn{3}{c}{0.7126} & \multicolumn{3}{c}{0.4801} & \multicolumn{3}{c}{0.4793} & \multicolumn{3}{c}{0.5584} & \multicolumn{3}{c}{0.6813} & \multicolumn{3}{c}{0.7838} & \multicolumn{3}{c}{\bf 0.8412} \\ & F1& \multicolumn{3}{c}{0.7823} & \multicolumn{3}{c}{0.8546} & \multicolumn{3}{c}{0.7517} & \multicolumn{3}{c}{0.7533} & \multicolumn{3}{c}{0.6266} & \multicolumn{3}{c}{0.1072} & \multicolumn{3}{c}{0.4892} & \multicolumn{3}{c}{0.7318} & \multicolumn{3}{c}{0.8084} & \multicolumn{3}{c}{\bf 0.8642} \\ & MCC & \multicolumn{3}{c}{0.4879} & \multicolumn{3}{c}{0.6643} & \multicolumn{3}{c}{0.6710} & \multicolumn{3}{c}{0.4193} & \multicolumn{3}{c}{-0.1372} & 
\multicolumn{3}{c}{0.0839} & \multicolumn{3}{c}{0.1558} & \multicolumn{3}{c}{0.3543} & \multicolumn{3}{c}{0.5637} & \multicolumn{3}{c}{\bf 0.6863} \\ \midrule \multirow{3}{*}{\textbf{Cresci-17}} & Acc & \multicolumn{3}{c}{0.9750} & \multicolumn{3}{c}{0.9847} & \multicolumn{3}{c}{0.9799} & \multicolumn{3}{c}{0.9670} & \multicolumn{3}{c}{0.5204} & \multicolumn{3}{c}{0.4029} & \multicolumn{3}{c}{0.9597} & \multicolumn{3}{c}{/} & \multicolumn{3}{c}{0.9622} & \multicolumn{3}{c}{\bf 0.9871} \\ & F1& \multicolumn{3}{c}{0.9826} & \multicolumn{3}{c}{0.9893} & \multicolumn{3}{c}{0.9641} & \multicolumn{3}{c}{0.9768} & \multicolumn{3}{c}{0.4737} & \multicolumn{3}{c}{0.2923} & \multicolumn{3}{c}{0.9731} & \multicolumn{3}{c}{/} & \multicolumn{3}{c}{0.9737} & \multicolumn{3}{c}{\bf 0.9910} \\ & MCC & \multicolumn{3}{c}{0.9387} & \multicolumn{3}{c}{0.9625} & \multicolumn{3}{c}{0.9501} & \multicolumn{3}{c}{0.9200} & \multicolumn{3}{c}{0.1573} & \multicolumn{3}{c}{0.2255} & \multicolumn{3}{c}{0.8926} & \multicolumn{3}{c}{/} & \multicolumn{3}{c}{0.9069} & \multicolumn{3}{c}{\bf 0.9685} \\ \midrule \multirow{3}{*}{\textbf{PAN-19}} & Acc & \multicolumn{3}{c}{/} & \multicolumn{3}{c}{/} & \multicolumn{3}{c}{/} & \multicolumn{3}{c}{0.9464} & \multicolumn{3}{c}{/} & \multicolumn{3}{c}{0.8797} & \multicolumn{3}{c}{/} & \multicolumn{3}{c}{/} & \multicolumn{3}{c}{0.8728} & \multicolumn{3}{c}{\bf 0.9509} \\ & F1& \multicolumn{3}{c}{/} & \multicolumn{3}{c}{/} & \multicolumn{3}{c}{/} & \multicolumn{3}{c}{0.9448} & \multicolumn{3}{c}{/} & \multicolumn{3}{c}{0.8701} & \multicolumn{3}{c}{/} & \multicolumn{3}{c}{/} & \multicolumn{3}{c}{0.8729} & \multicolumn{3}{c}{\bf 0.9510} \\ & MCC & \multicolumn{3}{c}{/} & \multicolumn{3}{c}{/} & \multicolumn{3}{c}{/} & \multicolumn{3}{c}{0.8948} & \multicolumn{3}{c}{/} & \multicolumn{3}{c}{0.7685} & \multicolumn{3}{c}{/} & \multicolumn{3}{c}{/} & \multicolumn{3}{c}{0.7456} & \multicolumn{3}{c}{\bf 0.9018} \\ \bottomrule \end{tabular} \end{table*} 
\subsection{Self-Supervised Learning and Optimization} \label{subsec:SATARSSLO} Twitter user representation learning attempts to model a specific user with a distributed representation. We adopt \textbf{follower count} as the self-supervised signal for SATAR training. Specifically, a user's follower count is separated into several categories based on its numerical scale and the overall follower count distribution. We train the representation learning framework SATAR to classify each user into these categories, obtaining user representations in the process. We believe that \textbf{follower count} is an ideal self-supervised training signal for the following reasons: \begin{itemize}[leftmargin=*] \item Self-supervised training with follower count is task-agnostic. Whether the downstream task is bot detection, content recommendation or online campaign modeling, follower count relates to all tasks on social media without being specific to any of them. \item Follower count is highly representative of a Twitter user. Few signals describe a Twitter user as efficiently and accurately, especially since follower count also reflects the evaluation of other users. \item Follower count is robust to large-scale tampering. Although it is possible to purchase fake followers, according to the investigation of Cresci \textit{et al.}~\cite{Cresci2015FameFS}, an increase of 1,000 followers often costs between 13 and 19 U.S. dollars. As a result, it is costly to significantly alter the magnitude of a user's follower count, let alone launch a campaign with many active bots.
\end{itemize} Specifically, assuming that a user can be categorized into $D$ classes based on its follower count, a softmax layer is applied to the representation of the user $r$: \vspace{-2pt} \begin{equation} \label{lossbegin} \hat{y} = softmax(W_fr + b_f), \end{equation} \noindent where $\hat{y} = [\hat{y_1}, \hat{y_2}, \cdots, \hat{y_D}]$ is the predicted probability vector for each class, and $W_f$ and $b_f$ are learnable parameters. $y = [y_1, y_2, \cdots, y_D]$ denotes the one-hot self-supervised ground truth for this classification. We minimize the cross-entropy loss function as follows: \begin{equation} \label{equ:lossend} L(\theta) = -\sum_{1 \leqslant i \leqslant D} y_i \log(\hat{y_i}), \end{equation} \noindent where $\theta$ denotes the parameters of the proposed framework SATAR. Algorithm \ref{alg:SATAR} presents the overall training schema of our proposed Twitter account representation learning framework SATAR. \section{Experiments} \label{sec:experiments} In this section, we conduct extensive experiments with in-depth analysis on three real-world bot detection datasets. \subsection{Experiment Settings} \label{subsec:expsetting} In this section, we provide information about the datasets, bot detection baselines and evaluation metrics adopted in the experiments. \noindent \textbf{Datasets.} We make use of three datasets, {\verb|TwiBot-20|}, {\verb|cresci-17|} and {\verb|PAN-19|}. As Twitter bots serve different purposes and evolve rapidly, these high-quality datasets are adopted to provide a comprehensive evaluation and verify the generalizability and adaptability of the baselines and our proposed method. \begin{itemize} [leftmargin=*] \item We release a bot detection dataset {\verb|TwiBot-20|}\footnote{\url{https://github.com/GabrielHam/TwiBot-20}}. It is a comprehensive sample of the current Twittersphere for evaluating whether bot detection methods generalize in real-world scenarios.
Users in {\verb|TwiBot-20|} can be generally split into four interest domains: politics, business, entertainment and sports. In terms of user information, {\verb|TwiBot-20|} contains semantic, property and neighborhood information of Twitter users. \item {\verb|cresci-17|}~\cite{cresci2017paradigm} is a public dataset with 4 components: genuine accounts, social spambots, traditional spambots and fake followers. We merge the four parts and utilize {\verb|cresci-17|} as a whole. {\verb|cresci-17|} contains semantic and property information. \item {\verb|PAN-19|}\footnote{\url{https://zenodo.org/record/3692340}} is the dataset of the Bots and Gender Profiling shared task at the PAN workshop at CLEF 2019. It is used for bot and gender profiling and only contains user semantic information. \end{itemize} \vspace{-3pt} A summary of the three datasets is presented in Table~\ref{tab:dataset}. We randomly partition each dataset into training, validation and test sets with a 7:2:1 ratio. This partition is shared across all experiments in Section~\ref{subsec:expBDP}, Section~\ref{subsec:GeneralizeStudy} and Section~\ref{subsec:AdaptStudy}. We choose these three benchmarks out of numerous bot detection datasets due to their larger size, collection time span and superior annotation quality.
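A minimal sketch of the 7:2:1 random split; the helper name and fixed seed are our own illustration, not part of the paper:

```python
import numpy as np

def split_dataset(n_users, seed=0):
    """Randomly partition user indices into 70% train, 20% validation
    and 10% test, so the split can be shared across all experiments."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_users)
    n_train, n_val = round(0.7 * n_users), round(0.2 * n_users)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```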
\noindent \textbf{Baseline Methods.} We compare SATAR with the following bot detection methods as baselines: \begin{figure}[!t] \centering \setlength{\belowcaptionskip}{-0.4cm} \subfigure[train on politics domain]{\label{fig:subfig:a} \includegraphics[width=0.47\linewidth]{domain_1.png}} \hspace{0.01\linewidth} \subfigure[train on business domain]{\label{fig:subfig:b} \includegraphics[width=0.47\linewidth]{domain_2.png}} \vfill \subfigure[train on entertainment domain]{\label{fig:subfig:c} \includegraphics[width=0.47\linewidth]{domain_3.png}} \hspace{0.01\linewidth} \subfigure[train on sports domain]{\label{fig:subfig:d} \includegraphics[width=0.47\linewidth]{domain_4.png}} \caption{SATAR and two competitive baselines trained on one domain of TwiBot-20 and tested on the other three domains.} \label{fig:Domain} \end{figure} \vspace{-2pt} \begin{itemize} [leftmargin=*] \item Lee \textit{et al.}~\cite{lee2011seven}: Lee \textit{et al.} use a random forest classifier with several Twitter user features, e.g., the longevity of the account. \item Yang \textit{et al.}~\cite{yang2020scalable}: Yang \textit{et al.} use random forest with minimal account metadata and 12 derived features. \item Kudugunta \textit{et al.}~\cite{kudugunta2018deep}: Kudugunta \textit{et al.} propose an architecture that uses both tweet content and metadata. \item Wei \textit{et al.}~\cite{wei2019twitter}: Wei \textit{et al.} use word embeddings and a three-layer BiLSTM to encode tweets. A fully connected softmax layer is adopted for binary classification. \item Miller \textit{et al.}~\cite{miller2014twitter}: Miller \textit{et al.} extract 107 features from a user's tweets and property information. Bot users are treated as abnormal outliers, and a modified stream clustering algorithm is adopted to identify Twitter bots. \item Cresci \textit{et al.}~\cite{cresci2016dna}: Cresci \textit{et al.} utilize strings to represent the sequence of a user's online actions.
Each action type is encoded with a character. By identifying the group of accounts that share the longest common substring, a set of bot accounts is obtained. \item Botometer~\cite{davis2016botornot}: Botometer is a publicly available service that leverages more than one thousand features to classify an account. \item Alhosseini \textit{et al.}~\cite{ali2019detect}: Alhosseini \textit{et al.} utilize a graph convolutional network to detect Twitter bots, using following information and user features to learn representations and classify Twitter users. \end{itemize} \begin{figure}[!t] \centering \includegraphics[width = \linewidth]{ablation.png} \caption{Ablation study that removes the semantic, property and neighborhood sub-networks from SATAR respectively.} \label{fig:ablation} \end{figure} For the following SATAR-based bot detection methods, the self-supervised representation learning step adopts the Pareto Principle\footnote{\url{https://en.wikipedia.org/wiki/Pareto\_principle}} as a self-supervised classification task, where the framework learns to predict whether a Twitter user's follower count is among the top $20\%$ or the bottom $80\%$. It is an instance of the self-supervised representation learning strategy in Section~\ref{subsec:SATARSSLO}. \begin{itemize}[leftmargin=*] \item $\rm SATAR_{FC}$: The proposed representation learning framework SATAR is first trained on the self-supervised user classification task based on follower count; the final softmax layer is then reinitialized and trained on the task of bot detection. \item $\rm SATAR_{FT}$: The proposed representation learning framework SATAR is first trained on the same self-supervised task; the final softmax layer is then reinitialized and the whole framework is fine-tuned on the training set of bot detection.
\end{itemize} \noindent \textbf{Evaluation Metrics.} We adopt Accuracy, F1-score and MCC~\cite{matthews1975comparison} as evaluation metrics for different bot detection methods. Accuracy is a straightforward indicator of classifier correctness, while F1-score and MCC are more balanced evaluation metrics. \begin{figure*} \centering \setlength{\belowcaptionskip}{-0.3cm} \includegraphics[width = .95\linewidth]{time5.png} \caption{SATAR's prediction of specific users in TwiBot-20. Scattered points demonstrate SATAR's prediction for specific users and the line indicates SATAR's overall accuracy for bots registered within each 3-month time span.} \label{fig:time} \end{figure*} \begin{figure} \centering \setlength{\belowcaptionskip}{-0.5cm} \includegraphics[width = \linewidth]{super.png} \caption{Ablation study removing the self-supervised pre-training step from SATAR and training on the three datasets.} \label{fig:super} \end{figure} \subsection{Bot Detection Performance} \label{subsec:expBDP} Table \ref{tab:SPN} identifies the user information that each compared method uses. Table~\ref{tab:TwiBotMetric} reports the bot detection performance of different methods on the three datasets. Table~\ref{tab:TwiBotMetric} demonstrates that: \begin{itemize} [leftmargin=*] \item $\rm SATAR$-based methods achieve competitive performance compared with other baselines, which demonstrates that SATAR is generally effective in Twitter bot detection. $\rm SATAR_{FT}$ outperforms $\rm SATAR_{FC}$, which demonstrates the efficacy of the pre-training and fine-tuning approach. \item $\rm SATAR_{FT}$ generalizes to real-world scenarios, as it outperforms state-of-the-art methods on the comprehensive and representative dataset {\verb|TwiBot-20|}, which imitates the real-world Twittersphere. Meanwhile, $\rm SATAR_{FT}$ adapts to evolving generations of bots, as it achieves the best performance on all three datasets with collection times varying from 2017 to 2020.
Section~\ref{subsec:GeneralizeStudy} and Section~\ref{subsec:AdaptStudy} provide further analysis demonstrating that SATAR successfully addresses the challenges of generalization and adaptation, and that the critical components and design choices of SATAR are the reasons behind its success. \item Among methods mainly based on LSTMs, Kudugunta \textit{et al.}~\cite{kudugunta2018deep} outperforms Wei \textit{et al.}~\cite{wei2019twitter}, indicating that Kudugunta \textit{et al.}~\cite{kudugunta2018deep} better captures bots by incorporating property items. $\rm SATAR_{FT}$ leverages even more user information than Kudugunta \textit{et al.}~\cite{kudugunta2018deep} and achieves better performance, which suggests that bot detection methods should incorporate more aspects of user information. \item Feature-engineering based methods, such as Yang \textit{et al.}~\cite{yang2020scalable}, perform well on {\verb|cresci-17|} but are inferior to $\rm SATAR_{FT}$ on {\verb|TwiBot-20|}. This shows that traditional bot detection methods that emphasize feature engineering fail to adapt to new generations of bots. \item Both Alhosseini \textit{et al.}~\cite{ali2019detect} and $\rm SATAR$ use neighborhood information. $\rm SATAR$-based methods outperform Alhosseini \textit{et al.}~\cite{ali2019detect}, which shows that $\rm SATAR$ better utilizes user neighbors that put Twitter users into their social context. \end{itemize} \subsection{SATAR Generalization Study} \label{subsec:GeneralizeStudy} The challenge of generalization in social media bot detection requires bot detectors to simultaneously identify bots that attack in many different ways and exploit diversified user information. To show that SATAR generalizes, we examine the performance of SATAR and competitive baselines on {\verb|TwiBot-20|}. As demonstrated in Table~\ref{tab:TwiBotMetric}, SATAR outperforms all baselines on {\verb|TwiBot-20|}.
Given that {\verb|TwiBot-20|} contains diversified bots and humans, imitating the real-world Twittersphere, this shows that SATAR best generalizes to real-world scenarios. To further establish SATAR's generalizability, we train SATAR and two competitive baselines, Alhosseini \textit{et al.}~\cite{ali2019detect} and Yang \textit{et al.}~\cite{yang2020scalable}, on one of the four user domains of {\verb|TwiBot-20|} and test on the others. The results are presented in Figure~\ref{fig:Domain}. They illustrate that SATAR better captures other types of bots even when not explicitly trained on them, which further establishes the claim that SATAR successfully generalizes to the diversified bots that co-exist on social media. SATAR is designed to generalize by jointly leveraging all three aspects of user information, namely semantic, property and neighborhood information. To determine whether our proposal of using as much user information as possible has led to SATAR's generalizability, we conduct an ablation study that removes one aspect of user information at a time. The results, shown in Figure~\ref{fig:ablation}, demonstrate that removing any aspect of information from SATAR results in a considerable loss in bot detection performance, limiting SATAR's ability to generalize to the different types of bots in {\verb|TwiBot-20|}. This indicates that SATAR's strategy of leveraging more aspects of user data is crucial in addressing the challenge of generalization.
\subsection{SATAR Adaptation Study} \label{subsec:AdaptStudy} \begin{figure*}[h] \centering \includegraphics[width = 0.95\textwidth]{representation.png} \caption{2D t-SNE plot of the user representation vectors of SATAR, Alhosseini \textit{et al.}~\cite{ali2019detect} and Yang \textit{et al.}~\cite{yang2020scalable}.} \label{fig:representation} \end{figure*} The challenge of adaptation in bot detection requires bot detectors to maintain desirable performance at different times and keep up with rapid bot evolution. To show that SATAR adapts, we examine the performance of SATAR and competitive baselines on the three datasets, since they were released in 2017, 2019 and 2020 respectively and thus characterize the bot evolution well. Results in Table~\ref{tab:TwiBotMetric} demonstrate that SATAR reaches state-of-the-art performance on all three datasets, indicating that SATAR is more successful at adapting to bot evolution than existing baselines. To further establish SATAR's ability to adapt, we examine SATAR's predictions for users in {\verb|TwiBot-20|}'s validation and test sets. We present SATAR's predictions for specific users and SATAR's accuracy within every 3-month span of user registration time in Figure~\ref{fig:time}. The figure illustrates that SATAR maintains a steady detection accuracy for users created from 2007 to 2020, which further establishes the claim that SATAR successfully adapts to the everlasting bot evolution. SATAR is designed to adapt by pre-training on massive self-supervised users and fine-tuning on specific bot detection scenarios. To determine whether this pre-training and fine-tuning schema has enabled SATAR to adapt to newly evolved bots, we conduct an ablation study that removes the self-supervised pre-training step. SATAR's performance on the different datasets is illustrated in Figure~\ref{fig:super}.
Figure~\ref{fig:super} shows that SATAR's performance increases with the adoption of the self-supervised pre-training step, and this trend is especially salient on PAN-19, the dataset with the least user information. This indicates that SATAR's ability to adapt indeed comes from the strategy of using follower count as a self-supervised signal for user representation pre-training. \vspace{-5pt} \subsection{Representation Learning Study} \label{subsec:expRLS} SATAR improves representation learning for Twitter users. Extrinsic evaluation has shown that SATAR representations are of desirable quality. We further conduct intrinsic evaluation by comparing SATAR representations with those of Alhosseini \textit{et al.}~\cite{ali2019detect} and Yang \textit{et al.}~\cite{yang2020scalable}, which also provide user representations. We cluster representations using $k$-means with $k = 2$ and calculate the homogeneity score, i.e. the extent to which clusters contain a single class. A higher homogeneity score indicates that users with the same label are more likely to be close to each other. Figure~\ref{fig:representation} visualizes representations of users in a subgraph of {\verb|TwiBot-20|}. Figure~\ref{fig:representation}(a) is the t-SNE plot of SATAR representations, which shows moderate collocation for groups of bot and human users, while Figure~\ref{fig:representation}(b) and (c) show little collocation within either group. Quantitatively, SATAR achieves the highest homogeneity score, which indicates that SATAR produces user representations of higher quality. \subsection{Case Study} \vspace{-2pt} To further understand how SATAR identifies bots, we study a specific case involving several bots. We use the affinity index values in Equation (\ref{cobegin}) to quantitatively analyze SATAR's decision making.
Figure~\ref{fig:case} shows the detailed information of the sampled users: \begin{itemize} [leftmargin=*] \item SATAR identifies users B and E through their repeated or similar tweets that signal automation. For example, user B has affinity values of $F_{sp} = -0.9989$, $F_{pn} = 0.0017$ and $F_{ns} = 0.6376$. The absolute values of $F_{sp}$ and $F_{ns}$ are significantly greater than that of $F_{pn}$, which demonstrates that semantic information is the dominant factor in SATAR's decision in this case. \item SATAR identifies users C and D through their properties. Abnormal characteristics such as too many followings and a default background image are detected by SATAR. User D has larger absolute values for $F_{sp}$ and $F_{pn}$ than for $F_{ns}$, which shows that property information is critical in SATAR's judgement. \item SATAR captures the anomaly that user A has bot users B, C, D and E as neighbors, which is unlikely for genuine users. User A has larger absolute values for $F_{ns}$ and $F_{pn}$ than for $F_{sp}$, which also bears out the claim that user A's abnormal neighborhood has led to SATAR's decision. \end{itemize} The case study in Figure~\ref{fig:case} demonstrates that SATAR identifies bot users by jointly evaluating their semantic, property and neighborhood information. The affinity values of our proposed Co-Influence aggregator provide explanations for SATAR's decisions. \begin{figure} \centering \includegraphics[width=0.46\textwidth]{case_study5.png} \setlength{\belowcaptionskip}{-0.3cm} \caption{A sample bot cluster to explain SATAR's decisions.} \label{fig:case} \end{figure} \section{Conclusion and Future Work} \label{sec:conclusion} Social media bot detection is attracting growing attention. We proposed SATAR, a self-supervised approach to Twitter account representation learning, and applied it to the task of bot detection. SATAR aims to tackle the challenges of generalizing in real-world scenarios and adapting to bot evolution, where previous efforts have failed.
We conducted extensive experiments to demonstrate the efficacy of SATAR-based bot detection in comparison to competitive baselines. Further exploration showed that SATAR also succeeds in generalizing to the real Twittersphere and adapting to different generations of Twitter bots. In the future, we plan to apply the SATAR representation learning framework to other tasks in the social media domain, such as fake news detection and content recommendation. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Quantum spin liquids (QSLs) are novel states of magnetic systems characterized by the absence of long-range spin order down to zero temperature\cite{Balents2010,RevModPhys.89.025003,Savary_2016}. Among various QSLs, the Kitaev QSL on the honeycomb lattice is of special importance. Unlike spin liquids arising from geometrically frustrated spin arrangements, here the bond-dependent Kitaev spin interactions frustrate the spin configuration. The Kitaev model hosts an exactly solvable QSL ground state and fractionalized excitations described by Majorana fermions\cite{KITAEV20062}. Experimentally, the Kitaev model is expected to be realized in honeycomb Mott insulators with spin-orbit coupling\cite{PhysRevLett.102.017205}. Much effort has been made to experimentally explore materials dominated by the bond-dependent Kitaev interaction, first in transition metal 5d-electron iridates and then in the 4d-electron compound RuCl$_{3}$. However, real materials host significant non-Kitaev interactions, such as Heisenberg-type exchange and off-diagonal exchange interactions. These non-Kitaev interactions hinder the formation of a pure Kitaev quantum spin liquid and drive the system into an ordered state at low temperature\cite{PhysRevLett.112.077204,PhysRevB.94.064435}. To approach the Kitaev QSL, one promising route is to suppress the long-range magnetic order by applying a magnetic field. Field-induced quantum spin liquid candidate states have been widely studied in these Kitaev materials\cite{Ruiz2017,PhysRevLett.110.097204,PhysRevLett.117.277202,Banerjee2016,Banerjee1055}.
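For reference, the bond-dependent interaction referred to above is captured by the standard Kitaev honeycomb model (written here in its generic form, not fitted to any material discussed in this work):
\begin{equation}
H_{K} = -\sum_{\langle i,j\rangle_{x}} K_{x}\, S_{i}^{x} S_{j}^{x}
        -\sum_{\langle i,j\rangle_{y}} K_{y}\, S_{i}^{y} S_{j}^{y}
        -\sum_{\langle i,j\rangle_{z}} K_{z}\, S_{i}^{z} S_{j}^{z},
\end{equation}
where $\langle i,j\rangle_{\gamma}$ runs over nearest-neighbor bonds of type $\gamma \in \{x,y,z\}$ on the honeycomb lattice. Each bond couples only the spin component matching its bond type, so no spin configuration can simultaneously satisfy all three bonds meeting at a site; this is the bond-dependent frustration mentioned above.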
Recently, $3d^{7}$ Co-based honeycomb materials with the high-spin state S=3/2 and effective orbital angular momentum L=1 have been proposed as new Kitaev QSL candidates and have attracted wide attention \cite{PhysRevB.97.014407,PhysRevB.97.014408,PhysRevLett.125.047201,PhysRevB.97.134409,REGNAULT1977660,10.1016/j.heliyon.2018.e00507,REGNAULT1979194,VICIU20071060,PhysRevMaterials.3.074405,PhysRevB.102.224429,PhysRevB.101.085120,PhysRevB.102.054414,PhysRevB.102.224411,PhysRevB.94.214416,PhysRevB.103.L180404,doi:10.1063/1.5029090,2012.00940v2,Motome_2020,2106.11982v1}. In transition metal 4d- and 5d-electron systems, the spatially extended d-electron wave functions lead to non-negligible longer-range coupling, which is detrimental to the Kitaev QSL. Compared with 4d- and 5d-electron systems, 3d-electrons have more localized wave functions and thus weaker longer-range coupling. Besides, the Heisenberg interactions in 3d systems are easier to minimize by tuning external parameters. Accordingly, $3d^{7}$ systems may be more suitable for realizing the Kitaev QSL. BaCo$_{2}$(AsO$_{4}$)$_{2}$ is among the suggested candidates with a $3d^{7}$ Co honeycomb lattice. It has properties similar to those of the well-established Kitaev QSL candidate RuCl$_{3}$. The magnetic susceptibility shows strong anisotropy, indicating anisotropic exchange interactions in the spin Hamiltonian. In particular, a small magnetic field applied in the honeycomb plane can significantly change the magnetic system and completely suppress the long-range order. The critical field of $\sim$0.5 T is much weaker compared to the 7 T in RuCl$_{3}$, indicating a very small Heisenberg interaction\cite{Zhongeaay6953}. These studies suggest that this system is an excellent candidate for realizing a field-induced QSL state.
To characterize the magnetic excitations at low energies, we perform terahertz (THz) time-domain spectroscopy measurements on BaCo$_{2}$(AsO$_{4}$)$_{2}$ crystals under external magnetic field. We monitor the evolution of the spin wave under magnetic field and characterize the emergent magnetic excitations in the field-induced paramagnetic state. \section{Sample and experiments} Single crystals of BaCo$_{2}$(AsO$_{4}$)$_{2}$ with a typical size of $3\times3\times0.3\ \mathrm{mm}^{3}$ were grown by the flux method \cite{Zhongeaay6953}. The compound crystallizes in the trigonal centrosymmetric space group R-3. The honeycomb structure is made of edge-sharing CoO$_6$ octahedra and is stacked along the c-axis with an ABC periodicity. Below 5.4 K, the system transforms to an antiferromagnetically ordered state. The magnetic order is rather complex, with spiral spin chains \cite{10.1007/978-94-009-1860-3}. Under an in-plane magnetic field, the system exhibits two phase transitions near 0.2 T and 0.5 T, respectively. The antiferromagnetic long-range order is suppressed at the second transition, while the first transition is more complicated \cite{Zhongeaay6953}. Time-domain THz transmission spectra were measured using a home-built spectroscopy system equipped with a helium cryostat and an Oxford spectromagnet \cite{PhysRevB.98.094414}. As shown in Figure~\ref{Fig:1} (a), the wave vector of the incident THz beam is perpendicular to the crystallographic ab-plane. The polarization of the magnetic field component of the THz wave can be tuned from the a-axis to the b-axis. By rotating the superconducting magnet, the external magnetic field can be applied either parallel or perpendicular to the ab-plane. The time-domain signals of the sample and reference (empty aperture) were detected via free-space electro-optic sampling in a ZnTe crystal.
Fourier transformation of the time-domain spectra provides the frequency-dependent complex transmission spectra, containing both magnitude and phase information, from which the real and imaginary parts of the optical constants can be extracted (see Supplementary Material) \cite{sup}. \section{Evolution of the magnon mode with temperature and magnetic field in the antiferromagnetic order region} \begin{figure}[htbp] \centering \includegraphics[width=8cm]{graph1.eps}\\ \caption{(a) Schematic of the terahertz transmission spectra measurement. (b) Real part of the optical conductivity at selected temperatures without external field. }\label{Fig:1} \end{figure} Figure~\ref{Fig:1} (b) shows the temperature dependence of the real part of the optical conductivity $\sigma$ at low temperatures. Below $T_N$, a narrow peak emerges abruptly on the flat background at 0.35 THz. Its intensity then gradually increases and reaches a maximum at $\sim$3 K. We label this peak as mode A. The frequency of the mode is consistent with previous neutron scattering experiments, identifying it as the magnon at the $\Gamma$ point of the magnetically ordered state \cite{10.1007/978-94-009-1860-3,10.1016/j.heliyon.2018.e00507}. In time-domain THz measurements, ordered spins are excited by the THz wave through the Zeeman torque $dM/dt = \gamma M \times H_{THz}$, where $\gamma$ denotes the gyromagnetic ratio, $M$ the magnetic moment, and $H_{THz}$ the magnetic field component of the THz wave. To effectively drive the spin precession, \emph{i.e.}, excite a magnon, $H_{THz}$ should be perpendicular to the magnetic moment. Therefore, measuring the polarization dependence of the magnon can help identify the spin orientation. When we rotate the polarization of the magnetic field component of the THz wave from the a- to the b-axis without applying an external magnetic field, the magnon mode is visible in all directions. 
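The Zeeman-torque picture above can be sketched numerically: integrating $dM/dt = \gamma M \times H$ shows the transverse moment precessing at the Larmor frequency, while a field component parallel to $M$ exerts no torque at all. A minimal sketch in arbitrary units ($\gamma = |H| = 1$ are illustrative choices):

```python
import numpy as np

# Integrate dM/dt = gamma * M x H for a moment slightly tipped from the field.
gamma = 1.0
H = np.array([0.0, 0.0, 1.0])               # field along z
M = np.array([0.1, 0.0, np.sqrt(0.99)])     # unit moment, tipped by ~5.7 deg

dt, nsteps = 1e-3, 20000
for _ in range(nsteps):
    k1 = gamma * np.cross(M, H)             # midpoint (RK2) integration step
    k2 = gamma * np.cross(M + 0.5 * dt * k1, H)
    M = M + dt * k2

# The transverse component precesses at omega = gamma*|H| while |M| and the
# longitudinal component M_z are conserved.  For H parallel to M the torque
# M x H vanishes identically -- the polarization selection rule used below.
print(np.linalg.norm(M), M[2])
```

This is the classical counterpart of the statement that only the THz field component perpendicular to the ordered moment couples to the magnon.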
The result is consistent with the antiferromagnetic order with the presence of spiral spin chains \cite{10.1007/978-94-009-1860-3}. \begin{figure}[htbp] \centering \includegraphics[width=9cm]{graph2.eps}\\ \caption{ Evolution of the optical conductivity under magnetic field at 2 K. (a) Spectra for H along the a-axis and $H_{THz}$ along the b-axis. (b) Spectra for H along the a-axis and $H_{THz}$ along the a-axis. (c) Temperature dependence of the spectra for H (0.4 T) along the a-axis and $H_{THz} \parallel H$. (d) Spectra for H along the c-axis. }\label{Fig:2} \end{figure} We now present the evolution of the THz spectra under an applied external magnetic field. Figure~\ref{Fig:2} (a) shows the frequency-dependent conductivity measured at 2 K with the external magnetic field $H$ applied along the a-axis and the magnetic field of the THz wave perpendicular to $H$. Upon increasing the magnetic field, the intensity of magnon mode A initially decreases slightly, while the mode frequency remains unchanged. At around $H_{c1}$ = 0.2 T, the excitation mode suddenly shifts to 0.39 THz and its intensity is strongly enhanced, indicating a magnetic-field-induced phase transition above 0.2 T. We denote this sharp peak as mode B for fields above 0.2 T. As the magnetic field increases to 0.5 T, the narrow peak excitations are fully suppressed. These results are consistent with previous reports that a magnetic-field-induced transition from antiferromagnetic phase I to phase II occurs near 0.2 T and that the antiferromagnetic order is completely suppressed near 0.5 T \cite{10.1007/978-94-009-1860-3,Zhongeaay6953}. The spectra with $H_{THz}\parallel H$ are quite different. As shown in Fig.~\ref{Fig:2} (b), in the antiferromagnetic phase II above 0.2 T, the excitation mode at 0.39 THz is completely absent. Instead, a slight enhancement of the conductivity spectra above 0.7-0.8 THz appears to be visible. 
Figure~\ref{Fig:2} (c) shows the temperature-dependent spectra measured at 0.4 T in the antiferromagnetic phase II region. We do not observe any excitation peak emerging below T$_N$. The low-energy spectral weight drops below T$_N$, and the reduced spectral weight is shifted to higher energies above 0.7-0.8 THz. Based on these results, we identify the first phase transition near 0.2 T as a spin reorientation transition, in which the moments in the spiral spin chains are tuned by the magnetic field. The antiferromagnetic phase II is consistent with a collinear antiferromagnetic state, in agreement with a previous report \cite{Zhongeaay6953}. The disappearance of the magnon mode in Fig.~\ref{Fig:2} (c) can be attributed to the polarization selection rule, with $H_{THz}$ parallel to the magnetic moments. For comparison, the magnetic state is stable when the external magnetic field is applied along the c-axis, as shown in Fig.~\ref{Fig:2} (d). These results are consistent with static measurements \cite{10.1007/978-94-009-1860-3,Zhongeaay6953}. \begin{figure}[htbp] \centering \includegraphics[width=9cm]{graph3.eps}\\ \caption{ (a) Real part of the optical conductivity at different magnetic fields for H along the a-axis and $H_{THz}$ along the b-axis. (b) Temperature evolution of the excitation at 2 T for $H_{THz} \perp H$. (c)(d) Contour plots of $\sigma$ as a function of field and frequency for $H_{THz} \perp H$ and $H_{THz} \parallel H$, respectively. (e) Temperature dependence of the spectra at 2 T for $H_{THz} \parallel H$. The spectrum at 15 K is subtracted as a reference. (f) Field dependence of the excitation modes. }\label{Fig:3} \end{figure} \section{Magnetic excitations in the field induced paramagnetic state} Across the second critical field $H_{c2}$ at around 0.5 T, the sample transforms to a paramagnetic phase. Identifying the nature of the excitations in this phase is essential to judge whether or not it is a quantum spin liquid state. The measured spectra of this phase are shown in Fig.~\ref{Fig:3}. 
The field evolution of the spectra with $H_{THz}\perp H$ is shown in Fig.~\ref{Fig:3} (a). All the well-defined modes are fully suppressed at 0.5 T. Upon further increasing the in-plane magnetic field slightly, a new narrow peak emerges immediately at $\sim$0.6 T, denoted as mode C. This peak has the largest intensity in our measurement, and its frequency increases with the magnetic field. Mode C is evidently the magnon of the field-polarized paramagnetic phase, which is essentially equivalent to a ferromagnetically ordered state. Surprisingly, we find an anomalous behavior for this mode, as shown in Fig.~\ref{Fig:3} (a): at 2 K, the intensity of the mode drops dramatically when the magnetic field exceeds 1 T. To track the characteristics of the mode, we measured the temperature dependence of the spectra at each field. The typical behavior of the mode at a relatively high field, \emph{e.g.}, 2 T, is shown in Fig.~\ref{Fig:3} (b). The temperature evolution of the mode intensity shows an unusually strong resonance character. Below 1 T, the resonance temperature is around 2 K, and the mode intensity decreases monotonically as the temperature increases. With increasing magnetic field, the resonance temperature also increases. At 2 T, the mode intensity reaches its maximum at around 10 K, while the intensity at 2 K becomes much smaller. It is reasonable to attribute the resonance behavior of the mode to the competition among the exchange interaction, the magnetic field, and thermal excitations. Figure~\ref{Fig:3} (c) displays the intensity plot of the mode frequency as a function of the external magnetic field for $H_{THz}\perp H$ at 2 K, in which we can clearly identify three magnon modes upon increasing the magnetic field. Modes A and B are magnons in antiferromagnetic phases I and II, respectively, while mode C is the magnon in the field-polarized paramagnetic phase. Similarly, we plot the intensity map of the conductivity spectra for $H_{THz} \parallel H$ in Fig.~\ref{Fig:3} (d). 
Note that the intensity of the excitation spectrum is much weaker than that observed for $H_{THz} \perp H$. In order to clearly identify the magnetic excitations, we subtract the spectrum at 15 K as a reference. A typical spectrum at a representative field, \emph{e.g.}, 2 T, is shown in Fig.~\ref{Fig:3} (e). We emphasize that the mode C observed for $H_{THz} \perp H$ is completely absent here, indicating that this mode also follows the polarization selection rules. Instead, another broad mode feature at higher energy, labelled as mode C', is observed, with a dip on the lower-energy side and a broad peak on the higher-energy side. The temperature evolution of mode C' is monotonic at all applied magnetic fields above 0.6 T. These characteristic behaviors imply the opening of an energy gap below mode C' for $H_{THz} \parallel H$. To further identify the relation between the magnetic excitations for $H_{THz} \perp H$ and $H_{THz} \parallel H$, we plot the peak positions of modes C and C' in Fig.~\ref{Fig:3} (f). We notice that both have a sublinear field dependence, and that mode C' has approximately twice the energy of mode C. The doubled frequency of mode C is also plotted as circles in Fig.~\ref{Fig:3} (f), showing a good match with mode C'. In the field-polarized paramagnetic phase, the magnetic moment in each spin chain is along the external field direction. With $H_{THz} \perp H$, the THz wave coherently drives the spin precession and excites the single magnon. For $H_{THz} \parallel H$, however, the THz wave cannot excite the single magnon; instead it excites two-magnon states, which show an energy gap below the mode and a continuum structure, as we shall explain below. To understand the physics underlying the novel spin excitation spectrum of BaCo$_{2}$(AsO$_{4}$)$_{2}$, we consider a simplified model on the honeycomb lattice and show that its excitation spectrum qualitatively reproduces the observed modes C and C' in the high-field phase. 
The model Hamiltonian is \begin{eqnarray} \hat{H}=-J\sum_{<ij>} \boldsymbol{S}_i\cdot \boldsymbol{S}_j-g_z\mu_B H\sum_{i}S_i^{z}, \end{eqnarray} where $J$ is the nearest-neighbor ferromagnetic Heisenberg exchange coupling, $g_z$ is the g-factor, $\mu_B$ is the Bohr magneton, and $H$ is the applied external magnetic field. Without loss of generality, we define the $z$ direction along the applied field $H$ and the $x$ direction along the $H_{THz}$ direction of the THz wave that is perpendicular to the applied field $H$. The measured THz spectra are then essentially the dynamical spin structure factors $\chi^{xx}(\boldsymbol{q},\omega)$ and $\chi^{zz}(\boldsymbol{q},\omega)$ for the $H_{THz}\perp H$ and $H_{THz}\parallel H$ cases, respectively. The wave vector ${\boldsymbol{q}}$ of the THz wave is much smaller than the Brillouin zone size and is treated as ${\boldsymbol{q}}=0$ hereafter. We compute the magnon dispersion and dynamical spin structure factors for this model by standard linear spin wave theory; the calculation details can be found in the Supplementary Materials \cite{sup}. This model has a polarized ground state and two branches (because of the two-site unit cell of the honeycomb lattice) of gapped magnons with dispersion \begin{eqnarray} \varepsilon_{\boldsymbol{q}}^{\pm}=g_z\mu_B H+\frac{3J}{2}\left ( 1\pm \left\vert \frac{1}{3}\sum_{i=1,2,3}e^{i {\boldsymbol{q}}\cdot {\boldsymbol{\delta}}_i} \right\vert \right ), \end{eqnarray} where $\boldsymbol{\delta}_{1,2,3}$ are the three bond vectors connecting nearest-neighbor sites on the honeycomb lattice. The magnon dispersions $\varepsilon_{\boldsymbol{q}}^{\pm}$ depend linearly on the external field $H$. The calculated $\chi^{xx}({\boldsymbol{q}}=0,\omega)$ is \begin{eqnarray} \chi^{xx}({\boldsymbol{q}}=0,\omega)\propto \delta(\omega -\varepsilon_{\boldsymbol{q}=0}^{-}), \end{eqnarray} and shows a sharp peak at the single magnon energy $\varepsilon_{\boldsymbol{q}=0}^{-}=g_z\mu_B H$. 
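The dispersion above is easy to evaluate numerically. The following NumPy sketch mirrors the formula term by term (with unit bond length and illustrative values $J=1$, $g_z\mu_B H=0.5$, not fitted parameters), confirming that the lower branch at ${\boldsymbol{q}}=0$ sits exactly at the Zeeman energy $g_z\mu_B H$ and therefore shifts linearly with field, as observed for mode C.

```python
import numpy as np

# Linear spin-wave magnon branches of the ferromagnetic honeycomb Heisenberg
# model in a field, following the dispersion formula in the text.
J = 1.0          # nearest-neighbor exchange (illustrative units)
h = 0.5          # Zeeman energy g_z * mu_B * H (same units)

# three nearest-neighbor bond vectors of the honeycomb lattice (bond length 1)
delta = np.array([[0.0, 1.0],
                  [np.sqrt(3) / 2, -0.5],
                  [-np.sqrt(3) / 2, -0.5]])

def magnon(q):
    """Return (eps_minus, eps_plus) at 2D wave vector q."""
    f = np.mean(np.exp(1j * delta @ q))        # (1/3) sum_i exp(i q . delta_i)
    return h + 1.5 * J * (1 - abs(f)), h + 1.5 * J * (1 + abs(f))

# At q = 0: eps_minus = h (the single-magnon peak probed by THz at q ~ 0)
# and eps_plus = h + 3J (the optical branch).
print(magnon(np.array([0.0, 0.0])))
```

Since the THz wave vector is negligible on the scale of the Brillouin zone, only the ${\boldsymbol{q}}=0$ values matter for the $\chi^{xx}$ response.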
The calculated $\chi^{zz}({\boldsymbol{q}}=0,\omega)$ is \begin{eqnarray} \chi^{zz}({\boldsymbol{q}}=0,\omega)\propto \sum_{\boldsymbol{k}} [\delta(\omega -\varepsilon_{\boldsymbol{k}}^{+}-\varepsilon_{-{\boldsymbol{k}}}^{+})+\delta(\omega -\varepsilon_{\boldsymbol{k}}^{-}-\varepsilon_{-{\boldsymbol{k}}}^{-})], \end{eqnarray} and shows a broad two-magnon continuum. Fig.~\ref{Fig:4} shows an example of the calculated dynamical spin structure factors. The low-energy edge of the continuum in $\chi^{zz}({\boldsymbol{q}}=0,\omega)$ is twice the minimal single magnon energy, $\omega=2\varepsilon_{{\boldsymbol{k}}=0}^-=2g_z\mu_B H$. The spin model for BaCo$_{2}$(AsO$_{4}$)$_{2}$ is certainly much more complicated than the Heisenberg model considered here \cite{10.1007/978-94-009-1860-3}. However, its dynamical spin structure factors in the high-field polarized phase will have the same qualitative behaviors, namely that $\chi^{xx}$ shows a single magnon peak and $\chi^{zz}$ shows a two-magnon continuum whose low-energy edge is twice the minimal single magnon energy. \begin{figure}[htbp] \centering \includegraphics[width=7.5cm]{graph4.eps}\\ \caption{The calculated dynamical spin structure factors $\chi^{xx}({\boldsymbol{q}}=0,\omega)$ (orange line, for the $H_{THz}\perp H$ case) showing a single magnon peak, and $\chi^{zz}({\boldsymbol{q}}=0,\omega)$ (blue line, for the $H_{THz}\parallel H$ case) showing a two-magnon continuum whose low-energy edge is twice the single magnon energy. Details about this calculation can be found in the Supplementary Materials \cite{sup}.}\label{Fig:4} \end{figure} Based on the above discussions, we summarize the assignment of the magnetic excitations displayed in Fig. \ref{Fig:3} (c) and (d). Mode A, at 0.35 THz below 0.2 T, is the magnon excitation in antiferromagnetic phase I. It is observed for both $H_{THz} \perp H$ and $H_{THz} \parallel H$ due to the presence of spiral spin chains. 
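The two-magnon continuum and its edge can be checked numerically by sampling the two-magnon energies $\varepsilon_{\boldsymbol{k}}^{\pm}+\varepsilon_{-\boldsymbol{k}}^{\pm}=2\varepsilon_{\boldsymbol{k}}^{\pm}$ (the dispersion is even in ${\boldsymbol{k}}$) over the zone. A rough sketch with the same illustrative parameters as before ($J=1$, $g_z\mu_B H=0.5$; a uniform square patch of k-space stands in for the hexagonal Brillouin zone sum):

```python
import numpy as np

J, h = 1.0, 0.5                                # illustrative units, as above
delta = np.array([[0.0, 1.0],
                  [np.sqrt(3) / 2, -0.5],
                  [-np.sqrt(3) / 2, -0.5]])    # nearest-neighbor bond vectors

# Sample k points; a uniform square patch approximates the BZ sum for a sketch.
rng = np.random.default_rng(0)
k = rng.uniform(-np.pi, np.pi, size=(100000, 2))
absf = np.abs(np.mean(np.exp(1j * k @ delta.T), axis=1))
e_minus = h + 1.5 * J * (1 - absf)             # acoustic magnon branch
e_plus = h + 1.5 * J * (1 + absf)              # optical magnon branch

# Two-magnon energies contributing to chi_zz(q=0, w): eps(k) + eps(-k) = 2 eps(k)
omega = np.concatenate([2 * e_minus, 2 * e_plus])

# The continuum's low-energy edge sits at twice the minimal single-magnon
# energy, 2*g_z*mu_B*H -- the factor-of-two relation between modes C' and C.
print(omega.min())                             # close to 2*h = 1.0
```

Binning `omega` into a histogram produces the broad continuum plotted in Fig.~\ref{Fig:4}; since $\varepsilon^{-}_{\boldsymbol{k}=0}$ grows linearly with $H$, the edge tracks $2g_z\mu_B H$ as the field increases.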
Mode B, at 0.39 THz between 0.2 T and 0.5 T, is the magnon excitation in antiferromagnetic phase II. It is observed only for $H_{THz} \perp H$. There is a very weak feature near 0.8 THz for $H_{THz} \parallel H$, denoted as B', which likely originates from two-magnon excitations in the field-induced collinear antiferromagnetic phase II. Modes C and C', observed for $H_{THz} \perp H$ and $H_{THz} \parallel H$ respectively, are the single magnon and two-magnon excitations in the field-polarized paramagnetic phase. The reason the single magnon excitation is absent for $H_{THz} \parallel H$ in antiferromagnetic phase II and in the field-polarized paramagnetic state is that the magnetic component of the THz wave is parallel to the moment orientation in both phases. \section{Further discussions on the field induced paramagnetic state} Our study indicates that BaCo$_{2}$(AsO$_{4}$)$_{2}$ offers an ideal system for investigating the magnetic excitations of $3d^{7}$ Co-based honeycomb lattice systems. The external magnetic field required to suppress the antiferromagnetic order is much smaller than that in any other known magnetic honeycomb compound, enabling a full and careful characterization of the magnetic excitations by time-domain THz spectroscopy. It is also interesting to note that the present measurements on BaCo$_{2}$(AsO$_{4}$)$_{2}$ share similarities with the well-studied RuCl$_3$ compound under high magnetic field. As mentioned above, both mode C and mode C' in the field-induced paramagnetic state have a sublinear field dependence, with slopes that gradually decrease as the field increases. These modes appear similar to the excitations observed in the field-induced disordered state of RuCl$_3$ \cite{PhysRevLett.119.227202,PhysRevB.96.241107,PhysRevLett.125.037202,PhysRevB.101.140410,Wulferding2020}. 
Although BaCo$_{2}$(AsO$_{4}$)$_{2}$ was suggested to be a possible new Kitaev QSL candidate when the magnetic order is suppressed by a magnetic field of 0.5 T \cite{Zhongeaay6953}, our study reveals sharp single magnon and two-magnon excitations even when the in-plane magnetic field is as low as 0.6 T, which argues against the formation of a Kitaev QSL state. The sharp magnon excitation is absent only in a narrow region where the magnetic field is extremely close to 0.5 T. \section{Conclusion} In conclusion, we have presented a THz spectroscopy study of the Co-based honeycomb compound BaCo$_{2}$(AsO$_{4}$)$_{2}$ under in-plane magnetic fields up to 4 T. The fact that an extremely small magnetic field can suppress the long-range antiferromagnetic phase makes it an ideal system for investigating the magnetic excitations of the field-induced states by THz spectroscopy. Our field- and polarization-dependent measurements reveal two first-order transitions. The first transition, at 0.2 T, is from the spiral order to a collinear antiferromagnetic order. The second transition, at 0.5 T, is the suppression of the antiferromagnetic order. We observed different magnon excitations in different regions of the applied magnetic field. In particular, once the long-range magnetic order is suppressed by the weak field $H_{c2}$, the system is driven immediately to a field-polarized paramagnetic phase similar to a ferromagnetic state. The spectra beyond $H_{c2}$ are dominated by single magnon and two-magnon excitations, and no signature of a quantum spin liquid state is observed. We also compared the excitation spectra of BaCo$_{2}$(AsO$_{4}$)$_{2}$ with those of the widely studied 4d-electron Kitaev candidate RuCl$_{3}$ and addressed their similarities and differences in magnetic excitations. \begin{center} \small{\textbf{ACKNOWLEDGMENTS}} \end{center} This work was supported by the National Natural Science Foundation of China (No. 
11888101), the National Key Research and Development Program of China (No. 2017YFA0302904). The crystals were grown in the laboratory of R.J. Cava at Princeton University. \bibliographystyle{apsrev4-1}