diff --git "a/title_31K_G/test_title_long_2405.02710v1.json" "b/title_31K_G/test_title_long_2405.02710v1.json" new file mode 100644--- /dev/null +++ "b/title_31K_G/test_title_long_2405.02710v1.json" @@ -0,0 +1,122 @@ +{ + "url": "http://arxiv.org/abs/2405.02710v1", + "title": "Enhancing News Summarization with ELearnFit through Efficient In-Context Learning and Efficient Fine-Tuning", + "abstract": "With the deluge of information delivered by the daily news cycle, there is a\ngrowing need to effectively and efficiently summarize news feeds for quick\nconsumption. We leverage large language models (LLMs), with their advanced\nlearning and generative abilities as compared to conventional language models,\nto generate concise and coherent summaries for news articles from the XSum\ndataset. Our paper focuses on two key aspects of LLMs: Efficient In-Context\nLearning (ELearn) and Parameter Efficient Fine-tuning (EFit). Under ELearn, we\nfind that increasing the number of shots in prompts and utilizing simple\ntemplates generally improve the quality of summaries. We also find that\nutilizing relevant examples in few-shot learning for ELearn does not improve\nmodel performance. In addition, we study EFit using different methods and\ndemonstrate that fine-tuning the first layer of LLMs produces better outcomes\nas compared to fine-tuning other layers or utilizing LoRA. We also find that\nleveraging more relevant training samples using selective samples does not\nresult in better performance. By combining ELearn and EFit, we create a new\nmodel (ELearnFit) that leverages the benefits of both few-shot learning and\nfine-tuning and produces superior performance to either model alone. We also\nuse ELearnFit to highlight the trade-offs between prompting and fine-tuning,\nespecially for situations where only a limited number of annotated samples are\navailable.
Ultimately, our research provides practical techniques to optimize\nnews summarization during the prompting and fine-tuning stages and enhances the\nsynthesis of news articles.", + "authors": "Che Guan, Andrew Chin, Puya Vahabi", + "published": "2024-05-04", + "updated": "2024-05-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "Parameter AND Efficient AND Fine AND Tuning", + "gt": "Enhancing News Summarization with ELearnFit through Efficient In-Context Learning and Efficient Fine-Tuning", + "main_content": "Introduction There has been an overload of information with each passing day \u2013 data is more voluminous, comes in more varieties and arrives at higher velocity. The news cycle is a good example of this trend, making it more difficult to read and synthesize the vast amount of information coming our way. The advent of large language models (LLMs) has led to a substantial improvement in the effectiveness and comprehensibility of news summarization. LLMs present two ways to address downstream tasks \u2013 through prompt engineering and fine-tuning. In our research, we explore various techniques to improve model performance through better prompts and fine-tuning methods. First, we study efficient in-context learning, which we call ELearn to denote the process of the model learning through prompts. We examine the impact of LLM size, the number of shots, and various templates during in-context learning. We also select relevant samples in prompting in an attempt to improve performance. We then explore efficient methods to fine-tune LLMs. Calling this technique EFit, we test the performance of selective layer fine-tuning and LoRA in news summarization. We also utilize selective samples to improve the training set for the fine-tuning process. Finally, we combine ELearn and EFit to create ELearnFit and find that this model achieves superior performance versus either model alone.
We make several contributions to existing research on news summarization.1 Through ELearn, we find that using larger models, increasing the number of shots during prompting, and leveraging simple templates can all enhance model performance. We also show that utilizing selective relevant examples during prompting does not meaningfully impact performance. Through EFit, we find that fine-tuning the first layer of LLMs produces better outcomes as compared to fine-tuning other layers or utilizing LoRA, and that leveraging more relevant training samples using selective samples does not result in better performance. The combined model, ELearnFit, leverages the best of both worlds and suggests practical implementations for practitioners, especially when using a limited number of annotated samples. 1The codes used in this study were derived from the class \"Deep Multi-Task and Meta Learning\" offered by the Stanford School of Engineering. We implemented and adapted these foundational project codes to meet the specific requirements of our study. 2 Related Work The evolution of news summarization techniques has been driven by advancements in NLP and the increasing availability of large-scale datasets. Early news summarization techniques relied on statistical methods, such as frequency analysis and clustering, to extract important information from news articles. These methods were limited in their ability to capture the semantics and context of the news content. With the advent of deep learning, news summarization techniques have undergone a significant transformation. Deep learning models, particularly transformer-based architectures such as BERT and GPT-3 [3, 5, 17], have demonstrated remarkable performance in various NLP tasks, including news summarization [7]. These models are able to learn complex representations of news articles and generate summaries that are both informative and coherent. Recent research in news summarization has focused on developing techniques that can handle diverse types of news articles, including long and complex articles, and generate summaries that are tailored to specific user needs and preferences. Additionally, there has been growing interest in explainable news summarization [8, 13], which aims to provide users with insights into how summaries are generated and the rationale behind the selection of specific sentences or phrases. Fine-tuning LLMs and in-context learning are two powerful techniques that have been successfully applied to summarization [2, 6, 19]. Fine-tuning LLMs [15] involves adapting the pre-trained LLM to the specific task of news summarization by fine-tuning its parameters on a smaller, task-specific dataset. This allows the LLM to leverage its learned knowledge and adapt it to the task of generating informative and coherent summaries. In-context learning is a technique where a pre-trained LM utilizes text input to define a task. By providing the model with an instruction and/or a few task demonstrations, it gains the ability to predict subsequent steps and complete additional instances of the task [3]. Furthermore, in-context learning can be viewed as a form of implicit Bayesian inference [18]. The model learns to infer a latent concept from the context and uses it to generate a response. The pretraining distribution can be seen as a mixture of hidden Markov models (HMMs), where each HMM represents a different concept. When prompted with a specific context, the model implicitly infers the latent concept that is most relevant to the context and generates a response based on that concept. To facilitate the training and evaluation of news summarization models, large-scale datasets such as CNN/Daily Mail and XSum [4, 14] have proven invaluable.
These datasets provide a diverse collection of news articles and human-generated summaries, enabling researchers to benchmark different summarization techniques and track progress in the field. 3 Approach In this study, we utilize the XSum dataset, a large-scale collection of news articles with annotated summaries, as analyzed in Subsection 3.1, to explore various methods for enhancing prompting (ELearn) and fine-tuning (EFit), which will be explained in Subsections 3.2 and 3.3, respectively, in a more efficient manner. Furthermore, we investigate the advantages of combining these techniques through our proposed ELearnFit approach, which will be described in Subsection 3.4. To run each model, the input consists of the testing article, which may or may not be accompanied by support article-summary pair samples in the prompt. The output is the generated summary. To generate the summary, we sample from a pre-trained language model using greedy decoding, producing tokens one by one until a stop token is encountered or the maximum token limit of 100 is reached. To evaluate the performance of the model, the ROUGE-1 F1 score is employed. This metric measures the overlap between the generated summary and the reference summary. Although the main emphasis of this paper is on fine-tuning LLaMa2 models, it is worth highlighting that the strategies and techniques discussed in the following sections can be adapted to optimize the performance of other transformer-based models. 3.1 Analysis of Data and Performance of Existing Models on Leaderboard We conduct our research using the XSum dataset, which consists of a training set comprising 204,045 article-summary samples meticulously curated by the original researchers. In Figure 1, the distributions of article and summary lengths in the training set are displayed.
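The ROUGE-1 F1 score used for evaluation measures unigram overlap between the generated and reference summaries. The following is a simplified sketch of that computation (real ROUGE implementations add stemming and more careful tokenization; this is not the paper's evaluation code):

```python
from collections import Counter

def rouge1_f1(generated: str, reference: str) -> float:
    """Unigram-overlap F1 between a generated and a reference summary.

    Simplified sketch: tokens are lowercased whitespace-split words,
    and overlap counts are clipped per token, as in ROUGE-1.
    """
    gen = Counter(generated.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((gen & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(gen.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge1_f1("the cat sat", "the cat ran")` has two overlapping unigrams out of three on each side, giving precision = recall = 2/3 and an F1 of about 0.667.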
Due to limited resources, we face constraints (refer to Section 4.6 for more details) in using powerful GPU machines to fine-tune models using the complete training dataset. Additionally, the size and input token limits for several representative open-source GPT models, as indicated in Table 1, could pose restrictions on testing few-shot learning scenarios. Consequently, we create a smaller dataset consisting of 17,806 samples from the training set. This subset is obtained by filtering out rows from the training dataset where the combined word count of the article and summary exceeded 100. The length distributions of the filtered articles and summaries are displayed in Figure 2. Numerical testing showed that even the filtered dataset is still too large to adequately explore optimal parameters in experiments. To ensure a fair comparison across all experiments, we further reduce the dataset by selecting the initial 256 article-summary pairs as the fine-tuning set. The remaining 125 pairs are reserved for testing purposes. It\u2019s important to note that while the testing set consists of only 125 pairs, the entire filtered dataset (excluding the testing pairs) is utilized to assist the model in selecting relevant support samples for prompting and fine-tuning, as explained in Subsections 4.2 and 4.4, respectively. According to the leaderboard ranking [1], the top-performing papers in news summarization achieve impressive results by assigning probability mass to candidate summaries [20] or by aligning model-generated sequences with reference sequences [12]. These approaches consistently yield Rouge-1 scores close to 0.5 across the entire testing dataset. However, in our work, we simply sample from a pre-trained LLM using greedy decoding, generating tokens iteratively until either a stop token is encountered or the maximum token limit of 100 is reached.
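The greedy decoding loop just described can be sketched as follows. Here `next_token_logits` is a hypothetical stand-in for a real LLM forward pass, so the sketch illustrates the control flow (argmax selection, stop token, 100-token cap) rather than an actual model:

```python
def greedy_decode(next_token_logits, prompt_ids, stop_id, max_new_tokens=100):
    """Greedy decoding: repeatedly append the highest-probability token
    until the stop token appears or the token budget is exhausted.

    `next_token_logits(ids) -> list[float]` is a hypothetical stand-in
    for an LLM forward pass over the current token sequence.
    Returns only the newly generated token ids.
    """
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        logits = next_token_logits(ids)
        next_id = max(range(len(logits)), key=logits.__getitem__)  # argmax
        if next_id == stop_id:
            break
        ids.append(next_id)
    return ids[len(prompt_ids):]
```

A real implementation would batch this loop and cache attention states, but the stopping conditions are exactly the two described above.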
It is worth noting that our primary focus is on optimizing efficient techniques for in-context learning and fine-tuning in news summarization, with the specific choice of dataset and token adjustment not being crucial to the outcome of our work. Figure 1: Length Distribution of Articles and Summaries in Training Set Figure 2: Length Distribution of Filtered Articles and Summaries (Combined Word Count \u2264 100) Table 1: Number of Parameters and Input Token Limits for GPT Models (Approximately 1.5 Tokens per Word) Models Parameters Input Tokens GPT2-Medium 345 million 1,024 Eleuther-Neo 2.7 billion 2,048 LLaMa2-7B 7 billion 2,048 LLaMa2-13B 13 billion 4,096 3.2 ELearn Efficient In-Context Learning We use two simple templates to investigate the impact of templates on few-shot learning. Figure 3 illustrates the case for one-shot learning. The first template, called \"NONE,\" utilizes a single space to separate the support article, support summary, and the test article. The second template, known as \"TL;DR\" (Too Long; Didn\u2019t Read), utilizes \" TL;DR: \" to differentiate between the article and summary (note that a space intentionally appears before and after \"TL;DR:\" in all but the last occurrence, which has a space before \"TL;DR:\" but none after the colon; this formatting choice facilitates word generation by the language model). Additionally, a single space is used to separate the support sample from the test sample. For clarity, these separators are highlighted in green in the figure. Figure 3: Templates for One-Shot Learning: \"none\" vs \"TL;DR\" Figure 3 presents an example of a one-shot learning template. An interesting aspect to explore is the impact of different numbers of support examples in the prompt. When selecting examples, one approach is to randomly choose article-summary pairs from the training set, which generally provides diversified support examples.
Another approach is to retrieve pairs similar to a given testing article for inclusion in the prompt, which may result in examples concentrated around specific content or topics. Furthermore, it is important to consider the size of language models, as it directly relates to memory usage and can potentially influence in-context learning. 3.3 EFit Efficient Fine-Tuning In LLaMa2 [16], the transformer block plays a crucial role in the transformer architecture. It comprises two main sub-layers: a self-attention layer and a feed-forward network. To construct a Transformer model, multiple Transformer blocks are repeated (32 for LLaMa2-7b and 40 for LLaMa2-13b) and stacked together. Each block processes the output of the previous block, allowing the LLaMa2 model to capture both local and global dependencies in the input sequence. However, due to the large size of the model and limited GPU resources, one approach for parameter-efficient fine-tuning is to selectively choose a specific transformer block layer, such as the first layer, to fine-tune the pre-trained weight matrix W_0^\u2113 \u2208 R^{d1\u00d7d2} to a new arbitrary weight matrix W_ft^\u2113 while freezing the remaining block layers. Another approach for parameter-efficient fine-tuning is to employ LoRA (Low-Rank Adaptation). This technique freezes the pre-trained model weights and introduces trainable rank decomposition matrices into each layer of the Transformer architecture. By doing so, the number of trainable parameters for downstream tasks is significantly reduced [9]. Mathematically, LoRA imposes constraints on the fine-tuned parameter space: W_ft^\u2113 = W_0^\u2113 + AB^\u22a4, where A \u2208 R^{d1\u00d7p} and B \u2208 R^{d2\u00d7p} are low-rank matrices and p << d1, d2.
With LoRA, the number of parameters being fine-tuned for a single layer is (d1 + d2) \u00d7 p. The original number of parameters for the single layer is d1 \u00d7 d2. Therefore, the ratio of parameters fine-tuned by LoRA to the original parameters is: (d1 + d2) \u00d7 p / (d1 \u00d7 d2) = (1/d1 + 1/d2) \u00d7 p. Let\u2019s take the query projection matrix (q_proj) of the self-attention layer of LLaMa2-7b as an example. The matrix has dimensions of 4,096 x 4,096, with d1 = 4,096 and d2 = 4,096. By applying LoRA with a rank parameter of p = 16 (where p << d1 and p << d2), we achieve a reduction ratio of 0.0078, indicating significant parameter reduction. LoRA proves to be most effective in saving parameters when p is much smaller than both d1 and d2. Furthermore, inspired by the idea of retrieval augmented generation (RAG) [10], we incorporate the selection of relevant support examples during the prompting and fine-tuning stages. This is accomplished by performing semantic search to retrieve top pairs that are similar to each individual testing article. By adopting this approach, the fine-tuning examples can be more targeted and aligned with the specific content or topics covered in the testing articles. Another alternative is to randomly select article-summary pairs, which introduces a broader range of examples for the fine-tuning process. This random selection provides diverse instances, enhancing the fine-tuned model\u2019s robustness and adaptability.
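The parameter-reduction ratio derived above is easy to verify numerically. A small sketch using the q_proj dimensions quoted for LLaMa2-7b:

```python
def lora_param_ratio(d1: int, d2: int, p: int) -> float:
    """Ratio of LoRA-trainable parameters to original parameters for
    one d1 x d2 weight matrix: A is d1 x p and B is d2 x p, so the
    ratio is (d1 + d2) * p / (d1 * d2) = (1/d1 + 1/d2) * p."""
    return (d1 + d2) * p / (d1 * d2)

# q_proj of LLaMa2-7b: a 4096 x 4096 matrix with LoRA rank p = 16
ratio = lora_param_ratio(4096, 4096, 16)  # 0.0078125, i.e. ~0.78% of the original parameters
```

This matches the reduction ratio of 0.0078 reported in the text.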
ELearn is preferable when there are few annotated examples available, whereas EFit may be more suitable when numerous examples are accessible. In practice, annotations are costly and often limited to a small number of examples per task. Moreover, training models with a large amount of data necessitates substantial GPU resources and time. To address these issues, we propose an approach called ELearnFit, which combines ELearn and EFit by first fine-tuning and then prompting the model. Since both ELearn and EFit have multiple parameters to optimize independently, we employ a heuristic approach. This involves selecting optimal parameters from the ELearn optimization process and then incorporating the optimal parameters from the EFit optimization process. By doing this, we effectively manage computational resources and time constraints while striving for the best parameter settings. For these experiments, prompting is conducted by randomly sampling the support examples that are included in the prompt fed to the pre-trained model. On the other hand, fine-tuning is performed over ten iterations, with data randomly sampled from the training set without replacement in each iteration. Both the randomly sampled examples in the prompt for ELearn and the fine-tuning process for EFit introduce variability into the results. To comprehensively evaluate and analyze the robustness and performance of ELearn, EFit, and ELearnFit, we investigate which component contributes more to the variation. This investigation is crucial for understanding the stability and reliability of each approach across different trials and conditions. Ultimately, it will help us identify the most robust strategy that consistently delivers strong performance in the presence of variability. 4 Experiments All experiments are run on the Azure ML platform, harnessing the computational capabilities of A100 GPUs equipped with 80 gigabytes of high-bandwidth memory.
This hardware provides ample capacity for our experiments. In order to ensure a systematic exploration and refinement of the parameters, we conduct all experiments in a sequential manner. By employing a heuristic sequential approach, we efficiently manage computational resources and time constraints while striving for optimal parameter settings. Subsection 4.1 focuses on ELearn, and compares the results of varying LLM model size, prompt templates, and few-shot learning paradigms. Subsection 4.2 delves into the impact of selective samples for prompting on ELearn. Subsection 4.3 shifts to EFit, and explores the effectiveness of parameter-efficient fine-tuning through two distinct approaches: selective layer fine-tuning and LoRA algorithms. Subsection 4.4 sheds light on the insights gleaned from selective training samples for EFit. Subsection 4.5 analyzes the impact of combining the capabilities of ELearn and EFit, resulting in ELearnFit, and highlights the potential for synergy between these techniques. Lastly, Subsection 4.6 compares the robustness of the various models. 4.1 Investigate ELearn by Analyzing the Influence of Model Size, Templates, and Few-shot Learning In this experiment, we compare four representative open-source GPT models: Eleuther-Neo, GPT2-medium, LLaMa2-7b, and LLaMa2-13b, explore the influence of two prompt templates (none and TL;DR), and vary the number of examples in the prompt. The results are illustrated in Figure 4, where the x-axis represents the number of examples in the prompt and the y-axis represents the Rouge-1 score. Our findings suggest that increasing the number of examples in the prompt leads to improved model performance. Notably, in the case of GPT-2 models, the zero-shot performance exceeds the one-shot performance.
These findings align with previous studies conducted by [3, 18] on datasets such as LAMBADA, HellaSwag, PhysicalQA, and RACE-m, which reported similar observations in relation to GPT-3. Additionally, we observe that utilizing a straightforward prompt structure, specifically \"TL;DR\" (depicted in red), facilitates the model\u2019s learning process. This simplified format enables faster pattern recognition in comparison to the none template (depicted in black). Furthermore, focusing on the four models and examining their performance with the \"TL;DR\" template (depicted in red) in Figure 4, it becomes evident that LLaMa2-7b and LLaMa2-13b outperform GPT2-medium and Eleuther-Neo. This finding suggests that the larger models, LLaMa2-7b and LLaMa2-13b, possess superior capabilities in handling the summarization task, signifying their suitability for this specific application. Figure 4: Comparison of Four Language Models with Few-shot Learning using Two Templates 4.2 Enhance ELearn via Selective Samples during Prompting To further improve the performance of ELearn, inspired by the idea of RAG for prompting, we utilize semantic search to retrieve support article-summary samples that are contextually relevant to each testing article, which enables ELearn to learn from these samples in prompts and potentially generate more accurate responses. In this experiment, we broaden the range of support samples used in the prompt by including the entire filtered dataset, excluding the samples designated for testing. The outcomes obtained using this expanded scope align closely with those achieved using the original support samples from the training set, so we solely showcase the results obtained from the former (the complete filtered dataset, excluding the 125 testing samples) in this paper. Note that the ordering of examples in the prompt may lead to different performance results as compared to random prompt ordering.
Research conducted by [11] demonstrates that in the QA problem, the location of relevant information within the language model\u2019s input context follows a U-shaped performance curve. Moreover, the 7B Llama-2 models are biased towards recent information, performing best when it is located at the end of the input context. However, exploring the impact of prompt order for news summarization is beyond the scope of this research paper. Figure 5 shows that the use of selective samples during few-shot learning does not significantly affect the performance of the model. One potential explanation for this outcome could be that our straightforward implementation is incapable of capturing the extensive range of topics encompassed in news articles. As a result, the support samples may not adequately represent the diverse range of subjects covered by the articles in the test dataset. Figure 5: An Evaluation of In-Context Learning Methods: Comparing Random Samples vs. Selective Samples in Prompts 4.3 Investigate EFit We explore the effectiveness of parameter-efficient fine-tuning using two approaches: LoRA (LoRA4, LoRA16, and LoRA32 algorithms) and selective layers. Figure 6 shows the results of the various fine-tuned models with LoRA as well as the models fine-tuned on specific layers (while freezing the remaining layers). The results suggest that increasing the number of training examples for fine-tuning generally leads to improved performance. When there is only one support example, all algorithms perform similarly. However, with a larger number of support examples (e.g., 8 and 64), fine-tuning the first layer and fine-tuning with LoRA16 results in significantly better performance. Furthermore, when the number of support examples is limited (e.g., 8), fine-tuning the first layer of LLaMa2-7b often yields weaker results compared to fine-tuning with LoRA16.
This is because LoRA16 makes slight modifications to each layer of LLaMa2-7b, allowing it to adapt more effectively to a small number of examples. However, as the number of support examples increases (e.g., 64), fine-tuning the first layer of LLaMa2-7b shows improved performance compared to fine-tuning with LoRA16. This is because fine-tuning the first layer allows LLaMa2-7b to learn task-specific patterns and relationships more directly, leveraging the increased amount of training data. Additionally, fine-tuning with LoRA16 outperforms both LoRA4 and LoRA32. This suggests that the decomposed weight matrix with a rank of 16 is better suited for representing features learned from news articles compared to ranks 4 and 32. Finally, the model where only the last layer is fine-tuned performs the worst, suggesting that the pre-trained and fine-tuned data sets do not fully overlap. As a result, fine-tuning the lower-level, granular features proves more effective in improving performance as compared to focusing on high-level features, given an adequate number of support examples. These findings suggest that fine-tuning the first layer of LLMs has the most impact. Figure 6: Parameter-Efficient Fine-tuning using LoRA and Selective Layer Approaches (Please note that the x-axis is logarithmically scaled for values of the number of support examples greater than 4). In practice, annotated examples may not be readily available, so we investigate the impact of sample size on model performance. In Figure 7, we observe that the Rouge-1 score reaches a local maximum around 64 training examples. Beyond that point, the performance exhibits fluctuations as the number of examples continues to increase. This finding suggests that 64 training examples could potentially represent a \"sweet spot\" for fine-tuning.
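Selective-layer fine-tuning as used in this section amounts to marking one transformer block's parameters trainable and freezing everything else. A minimal, framework-agnostic sketch (the parameter names below are hypothetical, standing in for a real model's named parameters):

```python
def select_trainable(param_names, block_to_tune=0):
    """Selective-layer fine-tuning: keep only the parameters of one
    transformer block trainable and freeze the rest.

    `param_names` is a hypothetical list of parameter names such as
    "blocks.0.attn.q_proj", standing in for a real model's named
    parameters (e.g. 32 blocks in LLaMa2-7b).
    Returns (trainable, frozen) name lists.
    """
    prefix = f"blocks.{block_to_tune}."
    trainable = [n for n in param_names if n.startswith(prefix)]
    frozen = [n for n in param_names if not n.startswith(prefix)]
    return trainable, frozen
```

In a deep-learning framework the same split would be realized by setting the gradient flag on each parameter group (e.g. disabling gradients for every name in the frozen list) before constructing the optimizer.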
4.4 Enhance EFit via Selective Training Samples during Fine-tuning To enhance the performance of EFit, we draw inspiration from the concepts of selecting relevant samples in the prompting phase. In this experiment, for each testing sample, we select the top 1 or top 2 most similar training samples from the entire filtered dataset, excluding the 125 testing samples. We then fine-tune the model using these selected samples. Table 2 shows the results. When fine-tuning the first layer of LLaMa2-7b, using the more similar samples during fine-tuning did not impact model performance. On the other hand, when the model is fine-tuned with LoRA16, using the more similar samples led to slightly improved performance. Interestingly, the improved results under LoRA16 are comparable to the results under the model with the fine-tuned first layer. This suggests that the LoRA16 model may benefit from having more relevant samples during fine-tuning. Figure 7: Impact of Number of Training Examples on Fine-tuning the First Layer of LLaMa2-7b (note that the x-axis has been logarithmically scaled). Table 2: Comparison of Rouge-1 (%) between EFit with Random Samples and Selective Samples EFit Sampling LLaMa2-7b, Fine-tuned First Layer LLaMa2-7b, LoRA16 Random Sample 36.32 32.43 Top 1 Selective Sample 35.36 36.16 Top 2 Selective Samples 36.62 34.38 4.5 ELearnFit Optimize LLM by Combining ELearn and EFit We now look to combine the ELearn and EFit approaches to gain the benefits of both better prompting and fine-tuning. In this experiment, we focus on the TL;DR template in the prompt and two fine-tuned models (fine-tune the first layer of LLaMa2-7b or LLaMa2-7b with LoRA16). During each testing phase, we first fine-tune LLaMa2-7b and then apply few-shot in-context learning using different numbers (referred to as shots) of support examples (e.g., 0, 1, 2, 4, and 8 shots). The examples for in-context learning were randomly selected from the training set and incorporated into the prompts.
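The top-1/top-2 selection by semantic similarity used in Subsection 4.4 can be sketched with plain cosine similarity. In practice the embeddings would come from a sentence-embedding model, which this sketch simply assumes as given:

```python
import math

def top_k_similar(query_vec, corpus_vecs, k=2):
    """Return the indices of the k corpus embeddings most similar to
    the query embedding, ranked by cosine similarity.

    Embeddings are plain lists of floats here; producing them with a
    real sentence-embedding model is an assumption of this sketch.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    scored = [(cosine(query_vec, v), i) for i, v in enumerate(corpus_vecs)]
    scored.sort(reverse=True)  # highest similarity first
    return [i for _, i in scored[:k]]
```

The selected indices would then point at the article-summary pairs used as support samples for prompting or as the fine-tuning set.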
The results, as depicted in Figure 8 and Figure 9, indicate that when there are limited annotations available for fine-tuning LLaMa2-7b, 4-shot learning leads to superior performance when compared to the results using fewer shots. Interestingly, both 4-shot and 8-shot learning exhibit similar performance levels. However, this performance gap disappears when there are enough examples for fine-tuning, and the results with different numbers of shots converge. This suggests that few-shot learning has a lesser impact when a model is effectively fine-tuned with an adequate number of examples. Said another way, having more examples in the prompt can compensate for smaller sample sizes during the fine-tuning process. Similar to our investigation of selecting relevant samples in few-shot learning for ELearn, we now test whether this approach would benefit ELearnFit during its few-shot learning phase. Figure 8: Fine-tuning LLaMa2-7b with LoRA16 and Applying Few-shot In-context Learning Figure 9: Fine-tuning the First Layer of LLaMa2-7b and Applying Few-shot In-context Learning Figures 10 and 11 show the results of applying ELearnFit after fine-tuning the first layer of LLaMa2-7b and fine-tuning with LoRA16, respectively. It is worth mentioning that when the number of training examples for fine-tuning is zero, it signifies pure in-context learning. Consistent with our findings in Section 3.2, these results suggest that randomly sampled examples offer a wider range of styles for the LLMs to effectively learn the summarization task. On the other hand, selective sampling faces challenges in capturing the desired diversity. Furthermore, it is worth noting that when the model undergoes fine-tuning with LoRA16 and has an adequate number of examples (e.g., 64 examples), selective sampling demonstrates a slight improvement in overall model performance. We now use semantic search to identify the most similar training samples to fine-tune the model.
Figure 12 shows that fine-tuning the first layer, using selective samples in training and 4-shot learning during in-context learning, exhibits slightly inferior performance as compared to the proposed combined approach, which involves 64 examples for fine-tuning and 4 examples in prompting. However, it outperforms the ELearnFit approach with 1- or 2-shot learning. Figure 10: Comparing In-Context Learning Approaches: Random Sampling vs. Selective Sampling during Prompting, Following Fine-tuning the First Layer of LLaMa2-7b Figure 11: Comparing In-Context Learning Approaches: Random Sampling vs. Selective Sampling during Prompting, Following Fine-tuning LLaMa2-7b with LoRA16 Additionally, as depicted in Figure 13, when fine-tuning with LoRA16, the combination of selective samples in training and few-shot learning with four selective samples during in-context learning yields the overall best result. One possible explanation is that using selective samples for fine-tuning and prompting together could potentially enhance the effectiveness of fine-tuning LLaMa2-7b with LoRA16. This proposition finds support in the comparison between Figure 8 and Figure 9. Specifically, when evaluating the performance of the 4-shot learning scenarios, an increase in the number of examples for fine-tuning from 8 to 64 results in a degradation in performance for the former, as depicted in Figure 8. In contrast, the latter exhibits a stable performance, as illustrated in Figure 9. Figure 12: Comparing Fine-tuning the First Layer of LLaMa2-7b: Random Sampling vs. Selective Sampling in Training Set, and Random Sampling vs. Selective Sampling during Prompting Figure 13: Comparing Fine-tuning LLaMa2-7b with LoRA16: Random Sampling vs. Selective Sampling in Training Set, and Random Sampling vs. Selective Sampling during Prompting 4.6 Robustness Checks In our experiment, fine-tuning is performed over ten iterations.
In each iteration, data are randomly sampled from the training set without replacement, introducing variability in the fine-tuning process. We now assess the robustness of the three approaches: ELearn, EFit, and ELearnFit. The descriptions for each model are detailed in Table 3. Figure 14 presents the results obtained from five repeated trials for each approach. The x-axis represents the nth trial, while the y-axis displays the Rouge-1 score. While we were limited to five trials due to computational constraints, additional trials could be conducted to further assess the robustness of these approaches. This experimental setup allowed us to gain insights into the performance of each approach under varying conditions and to compare their effectiveness in different scenarios.
Table 3: Model Description for Robustness Comparison
Model            | In-context Learning | Fine-tuning
ELearn           | 4 Shots             | -
EFit_first       | -                   | First Layer w/ 64 Examples
EFit_LoRA16      | -                   | LoRA16 w/ 64 Examples
ELearnFit_first  | 4 Shots             | First Layer w/ 64 Examples
ELearnFit_LoRA16 | 4 Shots             | LoRA16 w/ 64 Examples
Table 4: Performance Details for Robustness Comparison
Model            | Mean   | Standard Deviation
ELearn           | 0.2962 | 0.0303
EFit_first       | 0.3465 | 0.0039
EFit_LoRA16      | 0.3274 | 0.0029
ELearnFit_first  | 0.3441 | 0.0086
ELearnFit_LoRA16 | 0.3273 | 0.0053
Table 4 reveals that in-context learning exhibits greater variability across trials compared to the other two approaches. This is evident from the higher standard deviation observed in the ELearn results. In contrast, both EFit_first and ELearnFit_first demonstrate similar performance, although ELearnFit_first had roughly twice the standard deviation of EFit_first. A similar observation can be made for ELearnFit_LoRA16 and EFit_LoRA16. These findings further suggest that fine-tuning offers more stable performance than in-context learning.
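The trial-level comparison in Table 4 reduces to a mean and sample standard deviation of per-trial Rouge-1 scores; a minimal sketch (the trial scores below are made-up placeholders, not the paper's measurements):

```python
from statistics import mean, stdev

def summarize_trials(scores):
    """Mean and sample standard deviation of per-trial Rouge-1 scores."""
    return round(mean(scores), 4), round(stdev(scores), 4)

# Hypothetical five-trial Rouge-1 scores for two approaches.
elearn_trials = [0.26, 0.33, 0.30, 0.27, 0.32]
efit_trials = [0.345, 0.348, 0.343, 0.347, 0.344]

m1, s1 = summarize_trials(elearn_trials)
m2, s2 = summarize_trials(efit_trials)
# An approach is judged more stable when its standard deviation is lower.
print(m1, s1, m2, s2)
```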
Additionally, when the number of samples for fine-tuning is limited, the combined approach ELearnFit yields consistent and reliable performance across different trials, highlighting its potential for enhancing robustness.
Figure 14: Robustness Comparison of ELearn, EFit and ELearnFit
Limitations
In this paper, we primarily directed our attention to the LLaMa2-7b model, a formidable language model consisting of 7 billion parameters. Assuming that each parameter occupies a modest 4 bytes of memory, the estimated total memory requirement for this model is approximately 26.08 gigabytes, calculated as follows:
Total Memory Size = 7 × 10^9 × 4 bytes / 1024^3 ≈ 26.08 gigabytes (1)
where: 1 kilobyte (KB) = 1024 bytes, 1 megabyte (MB) = 1024 kilobytes, and 1 gigabyte (GB) = 1024 megabytes.
Similarly, the total memory requirements for the LLaMa2-13b and LLaMa2-70b models are approximately 48 gigabytes and 261 gigabytes, respectively. Due to limited resources on A100 GPUs, which offer up to 80 gigabytes of high-bandwidth memory, and the substantial computation time required for each experiment, we primarily focus on optimizing ELearn and EFit with the LLaMa2-7b model in this paper. However, we believe that the insights gained from this research can be readily extended to larger language models such as LLaMa2-70b, especially when coupled with more powerful GPU resources.
Conclusion
News summarization has become increasingly important as the volume of information has exploded. In our research, we explore different techniques to enhance news summaries. Under prompting (ELearn), we demonstrate that using larger models, adding more shots to prompts, and utilizing simple templates improve performance. We also show that fine-tuning (EFit) enhances performance, especially when the first layer of models is fine-tuned. Surprisingly, for both prompt engineering and fine-tuning, leveraging more relevant samples does not improve performance.
This is likely due to the fact that news articles are very diverse, and retrieving highly relevant samples during prompting or fine-tuning may result in over-learning, resulting in the model\u2019s failure to adequately capture the wide range of topics covered in the test dataset. Finally, we show that our combined model (ELearnFit) produces the best performance, particularly for situations where there are few annotated samples. In practice, our research suggests that a fine-tuned model (especially on the first layer) coupled with diverse examples during prompting, yields optimal performance for news summarization.", + "additional_graph_info": { + "graph": [ + [ + "Che Guan", + "Mengyu Huang" + ], + [ + "Mengyu Huang", + "Yuxing Zhong" + ], + [ + "Mengyu Huang", + "Huiwen Yang" + ] + ], + "node_feat": { + "Che Guan": [ + { + "url": "http://arxiv.org/abs/2405.02710v1", + "title": "Enhancing News Summarization with ELearnFit through Efficient In-Context Learning and Efficient Fine-Tuning", + "abstract": "With the deluge of information delivered by the daily news cycle, there is a\ngrowing need to effectively and efficiently summarize news feeds for quick\nconsumption. We leverage large language models (LLMs), with their advanced\nlearning and generative abilities as compared to conventional language models,\nto generate concise and coherent summaries for news articles from the XSum\ndataset. Our paper focuses on two key aspects of LLMs: Efficient in-context\nLearning (ELearn) and Parameter Efficient Fine-tuning (EFit). Under ELearn, we\nfind that increasing the number of shots in prompts and utilizing simple\ntemplates generally improve the quality of summaries. We also find that\nutilizing relevant examples in few-shot learning for ELearn does not improve\nmodel performance. 
In addition, we studied EFit using different methods and\ndemonstrate that fine-tuning the first layer of LLMs produces better outcomes\nas compared to fine-tuning other layers or utilizing LoRA. We also find that\nleveraging more relevant training samples using selective layers does not\nresult in better performance. By combining ELearn and EFit, we create a new\nmodel (ELearnFit) that leverages the benefits of both few-shot learning and\nfine-tuning and produces superior performance to either model alone. We also\nuse ELearnFit to highlight the trade-offs between prompting and fine-tuning,\nespecially for situations where only a limited number of annotated samples are\navailable. Ultimately, our research provides practical techniques to optimize\nnews summarization during the prompting and fine-tuning stages and enhances the\nsynthesis of news articles.", + "authors": "Che Guan, Andrew Chin, Puya Vahabi", + "published": "2024-05-04", + "updated": "2024-05-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction There has been an overload of information with each passing day \u2013 data is more voluminous, comes in more varieties and arrives at higher velocity. The news cycle is a good example of this trend, making it more difficult to read and synthesize the vast amount of information coming our way. The advent of large language models (LLMs) has led to a substantial improvement in the effectiveness and comprehensibility of news summarization. LLMs present two ways to address downstream tasks \u2013 through prompt engineering and fine-tuning. In our research, we explore various techniques to improve model performance through better prompts and finetuning methods. First, we study efficient in-context learning, which we call ELearn to denote the process of the model learning through prompts. We examine the impact of LLM size, the number of shots, and various templates during the in-context learning . 
We also select relevant samples in prompting in an attempt to improve performance. We then explore efficient methods to fine-tune LLMs. Calling this technique EFit, we test the performance of selective layer fine-tuning and LoRA in news summarization. We also utilize selective samples to improve the training set for the fine-tuning process. Finally, we combine ELearn and EFit to create ELearnFit and find that this model achieves superior performance versus either model alone. We make various contributions to existing research on news summarization.[1] Through ELearn, we find that using larger models, increasing the number of shots during prompting, and leveraging simple templates can all enhance model performance. We also show that utilizing selective relevant examples during prompting does not meaningfully impact performance. Through EFit, we find that fine-tuning the first layer of LLMs produces better outcomes as compared to fine-tuning other layers or utilizing LoRA, and leveraging more relevant training samples using selective samples does not result in better performance. The combined model, ELearnFit, leverages the best of both worlds and suggests practical implementations for practitioners, especially when using a limited number of annotated samples.
2 Related Work
The evolution of news summarization techniques has been driven by advancements in NLP and the increasing availability of large-scale datasets. Early news summarization techniques relied on statistical methods, such as frequency analysis and clustering, to extract important information from news articles. These methods were limited in their ability to capture the semantics and context of the news content. With the advent of deep learning, news summarization techniques have undergone a significant transformation.
Deep learning models, particularly transformer-based architectures such as BERT and GPT-3 [3, 5, 17], have demonstrated remarkable performance in various NLP tasks, including news summarization [7]. These models are able to learn complex representations of news articles and generate summaries that are both informative and coherent. Recent research in news summarization has focused on developing techniques that can handle diverse types of news articles, including long and complex articles, and generate summaries that are tailored to specific user needs and preferences. Additionally, there has been growing interest in explainable news summarization [8, 13], which aims to provide users with insights into how summaries are generated and the rationale behind the selection of specific sentences or phrases.
[1] The codes used in this study were derived from the class \"Deep Multi-Task and Meta Learning\" offered by Stanford School of Engineering. We implemented and adapted these foundational project codes to meet the specific requirements of our study.
arXiv:2405.02710v1 [cs.CL] 4 May 2024
Fine-tuning LLMs and in-context learning are two powerful techniques that have been successfully applied to summarization [2, 6, 19]. Fine-tuning LLMs [15] involves adapting the pre-trained LLM to the specific task of news summarization by fine-tuning its parameters on a smaller, task-specific dataset. This allows the LLM to leverage its learned knowledge and adapt it to the task of generating informative and coherent summaries. In-context learning is a technique where a pre-trained LM utilizes text input to define a task. By providing the model with an instruction and/or a few task demonstrations, it gains the ability to predict subsequent steps and complete additional instances of the task [3]. Furthermore, in-context learning can be viewed as a form of implicit Bayesian inference [18]. The model learns to infer a latent concept from the context and uses it to generate a response.
The pretraining distribution can be seen as a mixture of hidden Markov models (HMMs), where each HMM represents a different concept. When prompted with a specific context, the model implicitly infers the latent concept that is most relevant to the context and generates a response based on that concept. To facilitate the training and evaluation of news summarization models, large-scale datasets such as CNN/Daily Mail and XSum [4, 14] have proven invaluable. These datasets provide a diverse collection of news articles and human-generated summaries, enabling researchers to benchmark different summarization techniques and track progress in the field.
3 Approach
In this study, we utilize the XSum dataset, a large-scale collection of news articles with annotated summarizations, as analyzed in Subsection 3.1, to explore various methods for enhancing prompting (ELearn) and fine-tuning (EFit) in a more efficient manner, which will be explained in Subsections 3.2 and 3.3, respectively. Furthermore, we investigate the advantages of combining these techniques through our proposed ELearnFit approach, which will be described in Subsection 3.4. To run each model, the input consists of the testing article, which may or may not be accompanied by support article-summary pair samples in the prompt. The output is the generated summary. To generate the summary, we sample from a pre-trained language model using greedy decoding, producing tokens one by one until a stop token is encountered or the maximum token limit of 100 is reached. To evaluate the performance of the model, the ROUGE-1 F1 score is employed. This metric measures the overlap between the generated summary and the reference summary. Although the main emphasis of this paper is on fine-tuning LLaMa2 models, it is worth highlighting that the strategies and techniques discussed in the following sections can be adapted to optimize the performance of other transformer-based models.
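The evaluation metric can be approximated with a plain unigram-overlap computation; real ROUGE implementations add stemming and other options, so this whitespace-tokenized version is only a sketch:

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """Unigram-overlap F1 between a generated and a reference summary."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat on the mat", "the cat lay on the mat"))
```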
3.1 Analysis of Data and Performance of Existing Models on the Leaderboard
We conduct our research using the XSum dataset, which consists of a training set comprising 204,045 article-summary samples meticulously curated by the original researchers. In Figure 1, the distributions of article and summary lengths in the training set are displayed. Due to limited resources, we face constraints (refer to Section 4.6 for more details) in using powerful GPU machines to fine-tune models on the complete training dataset. Additionally, the size and input token limits of several representative open-source GPT models, as indicated in Table 1, could pose restrictions on testing few-shot learning scenarios. Consequently, we create a smaller dataset consisting of 17,806 samples from the training set. This subset is obtained by filtering out rows from the training dataset where the combined word count of the article and summary exceeded 100. The length distributions of the filtered articles and summaries are displayed in Figure 2. Based on numerical testing observations, it has been determined that even the filtered dataset is still too large to adequately explore optimal parameters in experiments. To ensure a fair comparison across all experiments, we further reduce the dataset by selecting the initial 256 article-summary pairs as the fine-tuning set. The remaining 125 pairs are reserved for testing purposes. It is important to note that while the testing set consists of only 125 pairs, the entire filtered dataset (excluding the testing pairs) is utilized to assist the model in selecting relevant support samples for prompting and fine-tuning, as explained in Subsections 4.2 and 4.4, respectively. According to the leaderboard ranking [1], the top-performing papers in news summarization achieve impressive results by assigning probability mass to candidate summaries [20] or by aligning model-generated sequences with reference sequences [12].
These approaches consistently yield Rouge-1 scores close to 0.5 across the entire testing dataset. However, in our work, we simply sample from a pre-trained LLM using greedy decoding, generating tokens iteratively until either a stop token is encountered or the maximum token limit of 100 is reached. It is worth noting that our primary focus is on optimizing efficient techniques for in-context learning and fine-tuning in news summarization, with the specific choice of dataset and token adjustment not being crucial to the outcome of our work.
Figure 1: Length Distribution of Articles and Summaries in Training Set
Figure 2: Length Distribution of Filtered Articles and Summaries (Combined Word Count ≤ 100)
Table 1: Number of Parameters and Input Token Limits for GPT Models (Approximately 1.5 Tokens per Word)
Models       | Parameters  | Input Tokens
GPT2-Medium  | 345 million | 1,024
Eleuther-Neo | 2.7 billion | 2,048
LLaMa2-7B    | 7 billion   | 2,048
LLaMa2-13B   | 13 billion  | 4,096
3.2 ELearn: Efficient In-Context Learning
We use two simple templates to investigate the impact of templates on few-shot learning. Figure 3 illustrates the case for one-shot learning. The first template, called \"NONE,\" utilizes a single space to separate the support article, support summary, and the test article. The second template, known as \"TL;DR\" (Too Long; Didn't Read), utilizes \" TL;DR: \" to differentiate between the article and summary. (Note that there is intentionally a space before and after \"TL;DR:\" in most occurrences, while in the last occurrence there is only a space before \"TL;DR:\" and no space after the colon; this formatting choice facilitates word generation with the language model.) Additionally, a single space is used to separate the support sample from the test sample. For clarity, these separators are highlighted in green in the figure.
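The \"TL;DR\" template can be assembled programmatically; the helper name and exact spacing below follow our reading of the description above:

```python
def build_tldr_prompt(support_pairs, test_article):
    """k-shot prompt: each support pair becomes 'article TL;DR: summary',
    pairs and the test article are separated by single spaces, and the
    prompt ends with ' TL;DR:' so the model continues with the summary."""
    parts = [f"{article} TL;DR: {summary}" for article, summary in support_pairs]
    parts.append(f"{test_article} TL;DR:")
    return " ".join(parts)

prompt = build_tldr_prompt([("Article one.", "Summary one.")], "Test article.")
print(prompt)
```

The \"NONE\" template would be the same construction with a bare space in place of each \" TL;DR: \" separator.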
Figure 3: Templates for One-Shot Learning: \"none\" vs \"TL;DR\"
Figure 3 presents an example of a one-shot learning template. An interesting aspect to explore is the impact of different numbers of support examples in the prompt. When selecting examples, one approach is to randomly choose article-summary pairs from the training set, which generally provides diversified support examples. Another approach is to retrieve pairs similar to a given testing article for the prompt, which may result in examples concentrated around specific content or topics. Furthermore, it is important to consider the size of language models, as it directly relates to memory usage and can potentially influence in-context learning.
3.3 EFit: Efficient Fine-Tuning
In LLaMa2 [16], the transformer block plays a crucial role in the transformer architecture. It comprises two main sub-layers: a self-attention layer and a feed-forward network. To construct a Transformer model, multiple Transformer blocks are stacked together (32 for LLaMa2-7b and 40 for LLaMa2-13b). Each block processes the output of the previous block, allowing the LLaMa2 model to capture both local and global dependencies in the input sequence. However, due to the large size of the model and limited GPU resources, one approach for parameter-efficient fine-tuning is to selectively choose a specific transformer block layer, such as the first layer, and fine-tune its pre-trained weight matrix W_0^ℓ ∈ R^{d_1×d_2} to a new arbitrary weight matrix W_ft^ℓ while freezing the remaining block layers. Another approach for parameter-efficient fine-tuning is to employ LoRA (Low-Rank Adaptation). This technique freezes the pre-trained model weights and introduces trainable rank decomposition matrices into each layer of the Transformer architecture. By doing so, the number of trainable parameters for downstream tasks is significantly reduced [9].
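The selective-layer approach above amounts to marking one transformer block's parameters as trainable and freezing the rest; a framework-agnostic sketch with hypothetical "layers.<i>." parameter names (in PyTorch the same idea would toggle each parameter's requires_grad flag):

```python
def select_trainable(param_names, block_to_tune=0):
    """Map each parameter name to True (fine-tuned) or False (frozen).
    Parameters are assumed to be named 'layers.<i>.<...>' per block."""
    flags = {}
    for name in param_names:
        if name.startswith("layers."):
            block = int(name.split(".")[1])
            flags[name] = (block == block_to_tune)
        else:
            flags[name] = False  # embeddings, output head, etc. stay frozen
    return flags

names = ["embed.weight", "layers.0.q_proj", "layers.0.mlp", "layers.1.q_proj"]
print(select_trainable(names, block_to_tune=0))
```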
Mathematically, LoRA imposes constraints on the fine-tuned parameter space: W_ft^ℓ = W_0^ℓ + AB^⊤, where A ∈ R^{d_1×p} and B ∈ R^{d_2×p} are low-rank matrices and p ≪ d_1, d_2. With LoRA, the number of parameters being fine-tuned for a single layer is (d_1 + d_2) × p, while the original number of parameters for the single layer is d_1 × d_2. Therefore, the ratio of parameters fine-tuned by LoRA to the original parameters is:
(d_1 + d_2) / (d_1 × d_2) × p = (1/d_1 + 1/d_2) × p
Let's take the query projection matrix (q_proj) of the self-attention layer of LLaMa2-7b as an example. The matrix has dimensions of 4,096 x 4,096, with d_1 = 4,096 and d_2 = 4,096. By applying LoRA with a rank parameter of p = 16 (where p ≪ d_1 and p ≪ d_2), we achieve a reduction ratio of 0.0078, indicating significant parameter reduction. LoRA proves to be most effective in saving parameters when p is much smaller than both d_1 and d_2. Furthermore, inspired by the idea of retrieval-augmented generation (RAG) [10], we incorporate the selection of relevant support examples during the prompting and fine-tuning stages. This is accomplished by performing semantic search to retrieve top pairs that are similar to each individual testing article. By adopting this approach, the fine-tuning examples can be more targeted and aligned with the specific content or topics covered in the testing articles. Another alternative is to randomly select article-summary pairs, which introduces a broader range of examples for the fine-tuning process.
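The LoRA parameter-reduction ratio derived above can be checked numerically:

```python
def lora_param_ratio(d1, d2, p):
    """Ratio of LoRA-trained parameters (d1 + d2) * p to the original d1 * d2."""
    return (d1 + d2) * p / (d1 * d2)

# q_proj of the LLaMa2-7b self-attention layer: 4096 x 4096, rank p = 16.
print(round(lora_param_ratio(4096, 4096, 16), 4))
```

As the formula (1/d_1 + 1/d_2) × p shows, the savings grow as p shrinks relative to the smaller matrix dimension.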
This random selection provides diverse instances, enhancing the fine-tuned model's robustness and adaptability.
3.4 ELearnFit: Combining ELearn and EFit
Both the ELearn and EFit approaches, discussed earlier, have the potential to independently improve model performance. ELearn is preferable when there are few annotated examples available, whereas EFit may be more suitable when numerous examples are accessible. In practice, annotations are costly and often limited to a small number of examples per task. Moreover, training models with a large amount of data necessitates substantial GPU resources and time. To address these issues, we propose an approach called ELearnFit, which combines ELearn and EFit by first fine-tuning and then prompting the model. Since both ELearn and EFit have multiple parameters to optimize independently, we employ a heuristic approach. This involves selecting optimal parameters from the ELearn optimization process and then incorporating the optimal parameters from the EFit optimization process. By doing this, we effectively manage computational resources and time constraints while striving for the best parameter settings. For these experiments, prompting is conducted via random sampling of support examples in the prompt to be fed to the pre-trained model. On the other hand, fine-tuning is performed over ten iterations, with data randomly sampled from the training set without replacement in each iteration. Both the randomly sampled examples in the prompt for ELearn and the fine-tuning process for EFit introduce variability. To comprehensively evaluate and analyze the robustness and performance of ELearn, EFit, and ELearnFit, we investigate which component contributes more to the variation. This investigation is crucial for understanding the stability and reliability of each approach across different trials and conditions.
Ultimately, it will help us identify the most robust strategy that consistently delivers strong performance in the presence of variability.
4 Experiments
All experiments are run on the Azure ML platform, harnessing the computational capabilities of A100 GPUs equipped with 80 gigabytes of high-bandwidth memory. This technological foundation provides the ideal setting for our series of investigations. In order to ensure a systematic exploration and refinement of the parameters, we conduct all experiments in a sequential manner. By employing a heuristic sequential approach, we efficiently manage computational resources and time constraints while striving for optimal parameter settings. Subsection 4.1 focuses on ELearn and compares the results of varying LLM model size, prompt templates, and few-shot learning paradigms. Subsection 4.2 delves into the impact of selective samples for prompting on ELearn. Subsection 4.3 shifts to EFit and explores the effectiveness of parameter-efficient fine-tuning through two distinct approaches: selective layer fine-tuning and LoRA algorithms. Subsection 4.4 sheds light on the insights gleaned from selective training samples for EFit. Subsection 4.5 analyzes the impact of combining the capabilities of ELearn and EFit, resulting in ELearnFit, and highlights the potential for synergy between these techniques. Lastly, Subsection 4.6 compares the robustness of the various models.
4.1 Investigate ELearn by Analyzing the Influence of Model Size, Templates, and Few-shot Learning
In this experiment, we compare four representative open-source GPT models: Eleuther-Neo, GPT2-medium, LLaMa2-7b, and LLaMa2-13b, explore the influence of two prompt templates (none and TL;DR), and vary the number of examples in the prompt. The results are illustrated in Figure 4, where the x-axis represents the number of examples in the prompt and the y-axis represents the Rouge-1 score.
Our findings suggest that increasing the number of examples in the prompt leads to improved model performance. Notably, in the case of GPT-2 models, the zero-shot performance exceeds the one-shot performance. These findings align with previous studies conducted by [3, 18] on datasets such as LAMBADA, HellaSwag, PhysicalQA, and RACE-m, which reported similar observations in relation to GPT-3. Additionally, we observe that utilizing a straightforward prompt structure, specifically \"TL;DR\" (depicted in red), facilitates the model's learning process. This simplified format enables faster pattern recognition in comparison to the none template (depicted in black). Furthermore, focusing on the four models and examining their performance with the \"TL;DR\" template (depicted in red) in Figure 4, it becomes evident that LLaMa2-7b and LLaMa2-13b outperform GPT2-medium and Eleuther-Neo. This finding suggests that the larger models, LLaMa2-7b and LLaMa2-13b, possess superior capabilities in handling the summarization task, signifying their suitability for this specific application.
Figure 4: Comparison of Four Language Models with Few-shot Learning using Two Templates
4.2 Enhance ELearn via Selective Samples during Prompting
To further improve the performance of ELearn, inspired by the idea of RAG for prompting, we utilize semantic search to retrieve support article-summary samples that are contextually relevant to each testing article, which enables ELearn to learn from these samples in prompts and potentially generate more accurate responses. In this experiment, we broaden the range of support samples used in the prompt by including the entire filtered dataset, excluding the samples designated for testing.
The outcomes obtained using this expanded scope align closely with those achieved using the original support samples from the training set, so we solely showcase the results obtained from the latter (the complete filtered dataset, excluding the 125 testing samples) in this paper. Note that the order of prompting may potentially lead to different performance results as compared to random prompt ordering. Research conducted by [11] demonstrates that in the QA problem, the location of relevant information within the language model's input context follows a U-shaped performance curve. Moreover, the 7B Llama-2 models are biased towards recent information, performing best when it is located at the end of the input context. However, exploring the impact of prompt order for news summarization is beyond the scope of this research paper. Figure 5 depicts that the utilization of selective samples during few-shot learning does not significantly affect the performance of the model. One potential explanation for this outcome could be that our straightforward implementation is incapable of capturing the extensive range of topics encompassed in news articles. As a result, the support samples may not adequately represent the diverse range of subjects covered by the articles in the test dataset.
Figure 5: An Evaluation of In-Context Learning Methods: Comparing Random Samples vs. Selective Samples in Prompts
4.3 Investigate EFit
We explore the effectiveness of parameter-efficient fine-tuning using two approaches: LoRA (LoRA4, LoRA16, and LoRA32 algorithms) and selective layers. Figure 6 shows the results of the various fine-tuned models with LoRA as well as the models fine-tuned on specific layers (while freezing the remaining layers). The results suggest that increasing the number of training examples for fine-tuning generally leads to improved performance. When there is only one support example, all algorithms perform similarly.
However, with a larger number of support examples (e.g., 8 and 64), fine-tuning the first layer and fine-tuning with LoRA16 result in significantly better performance. Furthermore, when the number of support examples is limited (e.g., 8), fine-tuning the first layer of LLaMa2-7b often yields weaker results compared to fine-tuning with LoRA16. This is because LoRA16 makes slight modifications to each layer of LLaMa2-7b, allowing it to adapt more effectively to a small number of examples. However, as the number of support examples increases (e.g., 64), fine-tuning the first layer of LLaMa2-7b shows improved performance compared to fine-tuning with LoRA16. This is because fine-tuning the first layer allows LLaMa2-7b to learn task-specific patterns and relationships more directly, leveraging the increased amount of training data. Additionally, fine-tuning with LoRA16 outperforms both LoRA4 and LoRA32. This suggests that the decomposed weight matrix with a rank of 16 is better suited for representing features learned from news articles compared to ranks 4 and 32. Finally, the model where only the last layer is fine-tuned performs the worst, suggesting that the pre-trained and fine-tuned data sets do not fully overlap. As a result, fine-tuning the lower-level, granular features proves more effective in improving performance than focusing on high-level features, given an adequate number of support examples. These findings suggest that fine-tuning the first layer of LLMs has the most impact.
Figure 6: Parameter-Efficient Fine-tuning using LoRA and Selective Layer Approaches (note that the x-axis is logarithmically scaled for numbers of support examples greater than 4)
In practice, annotated examples may not be readily available, so we investigate the impact of sample size on model performance. In Figure 7, we observe that the Rouge-1 score reaches a local maximum around 64 training examples.
Beyond that point, the performance exhibits fluctuations as the number of examples continues to increase. This finding suggests that 64 training examples could potentially represent a \"sweet spot\" for fine-tuning.
Figure 7: Impact of Number of Training Examples on Fine-tuning the First Layer of LLaMa2-7b (note that the x-axis has been logarithmically scaled)
4.4 Enhance EFit via Selective Training Samples during Fine-tuning
To enhance the performance of EFit, we draw inspiration from the concept of selecting relevant samples in the prompting phase. In this experiment, for each testing sample, we select the top 1 or top 2 most similar training samples from the entire filtered dataset, excluding the 125 testing samples. We then fine-tune the model using these selected samples. Table 2 shows the results. When fine-tuning the first layer of LLaMa2-7b, using the more similar samples during fine-tuning did not impact model performance. On the other hand, when the model is fine-tuned with LoRA16, using the more similar samples led to slightly improved performance. Interestingly, the improved results under LoRA16 are comparable to the results under the model with the fine-tuned first layer. This suggests that the LoRA16 model may benefit from having more relevant samples during fine-tuning.
Table 2: Comparison of Rouge-1 (%) between EFit with Random Samples and Selective Samples
EFit Sampling           | LLaMa2-7b, Fine-tuned First Layer | LLaMa2-7b, LoRA16
Random Sample           | 36.32                             | 32.43
Top 1 Selective Sample  | 35.36                             | 36.16
Top 2 Selective Samples | 36.62                             | 34.38
4.5 ELearnFit: Optimize LLM by Combining ELearn and EFit
We now look to combine the ELearn and EFit approaches to gain the benefits of both better prompting and fine-tuning. In this experiment, we focus on the TL;DR template in the prompt and two fine-tuned models (fine-tune the first layer of LLaMa2-7b or LLaMa2-7b with LoRA16).
During each testing phase, we first fine-tune LLaMa2-7b and then apply few-shot in-context learning using different numbers of support examples (referred to as shots, e.g., 0, 1, 2, 4, and 8). The examples for in-context learning were randomly selected from the training set and incorporated into the prompts. The results, depicted in Figure 8 and Figure 9, indicate that when there are limited annotations available for fine-tuning LLaMa2-7b, 4-shot learning leads to superior performance compared to the results using fewer shots. Interestingly, both 4-shot and 8-shot learning exhibit similar performance levels. However, this performance gap disappears when there are enough examples for fine-tuning, and the results with different shot counts converge. This suggests that few-shot learning has a lesser impact when a model is effectively fine-tuned with an adequate number of examples. Said another way, having more examples in the prompt can compensate for smaller sample sizes during the fine-tuning process. Figure 8: Fine-tuning LLaMa2-7b with LoRA16 and Applying Few-shot In-context Learning. Figure 9: Fine-tuning the First Layer of LLaMa2-7b and Applying Few-shot In-context Learning. Similar to our investigation of selecting relevant samples in few-shot learning for ELearn, we now test whether this approach would benefit ELearnFit during its few-shot learning phase. Figures 10 and 11 show the results of applying ELearnFit after fine-tuning the first layer of LLaMa2-7b and after fine-tuning with LoRA16, respectively. It is worth mentioning that when the number of training examples for fine-tuning is zero, this signifies pure in-context learning. Consistent with our findings in Section 3.2, these results suggest that randomly sampled examples offer a wider range of styles for the LLMs to effectively learn the summarization task. On the other hand, selective sampling faces challenges in capturing the desired diversity.
Furthermore, it is worth noting that when the model undergoes fine-tuning with LoRA16 and has an adequate number of examples (e.g., 64 examples), selective sampling demonstrates a slight improvement in overall model performance. We now use semantic search to identify the most similar training samples to fine-tune the model. Figure 10: Comparing In-Context Learning Approaches: Random Sampling vs. Selective Sampling during Prompting, Following Fine-tuning the First Layer of LLaMa2-7b. Figure 11: Comparing In-Context Learning Approaches: Random Sampling vs. Selective Sampling during Prompting, Following Fine-tuning LLaMa2-7b with LoRA16. Figure 12 shows that fine-tuning the first layer, using selective samples in training and 4-shot learning during in-context learning, exhibits slightly inferior performance compared to the proposed combined approach, which involves 64 examples for fine-tuning and 4 examples in prompting. However, it outperforms the ELearnFit approach with 1- or 2-shot learning. Additionally, as depicted in Figure 13, when fine-tuning with LoRA16, the combination of selective samples in training and few-shot learning with four selective samples during in-context learning yields the overall best result. One possible explanation is that using selective samples for fine-tuning and for prompting together could enhance the effectiveness of fine-tuning LLaMa2-7b with LoRA16. This proposition finds support in the comparison between Figure 8 and Figure 9. Specifically, when evaluating the performance of the 4-shot learning scenarios, an increase in the number of examples for fine-tuning from 8 to 64 results in a degradation in performance for the former, as depicted in Figure 8, while the latter exhibits stable performance, as illustrated in Figure 9. Figure 12: Comparing Fine-tuning the First Layer of LLaMa2-7b: Random Sampling vs. Selective Sampling in Training Set, and Random Sampling vs.
Selective Sampling during Prompting. Figure 13: Comparing Fine-tuning LLaMa2-7b with LoRA16: Random Sampling vs. Selective Sampling in Training Set, and Random Sampling vs. Selective Sampling during Prompting. 4.6 Robustness Checks In our experiment, fine-tuning is performed over ten iterations. In each iteration, data are randomly sampled from the training set without replacement, introducing variability in the fine-tuning process. We now assess the robustness of three approaches: ELearn, EFit, and ELearnFit. The descriptions for each model are detailed in Table 3. Figure 14 presents the results obtained from five repeated trials for each approach. The x-axis represents the nth trial, while the y-axis displays the Rouge-1 score. While we were limited to five trials due to computational constraints, additional trials could be conducted to further assess the robustness of these approaches. This experimental setup allowed us to gain insights into the performance of each approach under varying conditions and to compare their effectiveness in different scenarios.

Table 3: Model Description for Robustness Comparison
Model            | In-context Learning | Fine-tuning
ELearn           | 4 Shots             | -
EFit_first       | -                   | First Layer w/ 64 Examples
EFit_LoRA16      | -                   | LoRA16 w/ 64 Examples
ELearnFit_first  | 4 Shots             | First Layer w/ 64 Examples
ELearnFit_LoRA16 | 4 Shots             | LoRA16 w/ 64 Examples

Table 4: Performance Details for Robustness Comparison
Model            | Mean   | Standard Deviation
ELearn           | 0.2962 | 0.0303
EFit_first       | 0.3465 | 0.0039
EFit_LoRA16      | 0.3274 | 0.0029
ELearnFit_first  | 0.3441 | 0.0086
ELearnFit_LoRA16 | 0.3273 | 0.0053

Table 4 reveals that in-context learning exhibits greater variability across trials than the other two approaches. This is evident from the higher standard deviation observed in the ELearn results. In contrast, EFit_first and ELearnFit_first demonstrated similar performance, although ELearnFit_first had twice the standard deviation of EFit_first.
A similar observation can be made for ELearnFit_LoRA16 and EFit_LoRA16. These findings further suggest that fine-tuning offers more stable performance than in-context learning. Additionally, when the number of samples for fine-tuning is limited, the combined approach ELearnFit yields consistent and reliable performance across different trials, highlighting its potential for enhancing robustness. Figure 14: Robustness Comparison of ELearn, EFit and ELearnFit. Limitations In this paper, we primarily directed our attention to the LLaMa2-7b model, a formidable language model consisting of 7 billion parameters. Assuming that each parameter occupies a modest 4 bytes of memory, the estimated total memory requirement for this model is approximately 27.34 gigabytes, calculated as follows: Total Memory Size = 7 \u00d7 10^9 \u00d7 4 bytes / 1024^3 \u2248 27.34 gigabytes (1), where 1 kilobyte (KB) = 1024 bytes and 1 megabyte (MB) = 1024 kilobytes. Similarly, the total memory requirements for the LLaMa2-13b and LLaMa2-70b models are approximately 51 gigabytes and 274 gigabytes, respectively. Due to limited resources on A100 GPUs, which offer up to 80 gigabytes of high-bandwidth memory, and the substantial computation time required for each experiment, we primarily focus on optimizing ELearn and EFit with the LLaMa2-7b model in this paper. However, we believe that the insights gained from this research can be readily extended to larger language models such as LLaMa2-70b, especially when coupled with more powerful GPU resources. Conclusion News summarization has become increasingly important as the volume of information has exploded. In our research, we explore different techniques to enhance news summaries. Under prompting (ELearn), we demonstrate that using larger models, adding more shots to prompts, and utilizing simple templates improve performance. We also show that fine-tuning (EFit) enhances performance, especially when the first layer of models is fine-tuned.
Surprisingly, for both prompt engineering and fine-tuning, leveraging more relevant samples does not improve performance. This is likely due to the fact that news articles are very diverse, and retrieving highly relevant samples during prompting or fine-tuning may result in over-learning, resulting in the model\u2019s failure to adequately capture the wide range of topics covered in the test dataset. Finally, we show that our combined model (ELearnFit) produces the best performance, particularly for situations where there are few annotated samples. In practice, our research suggests that a fine-tuned model (especially on the first layer) coupled with diverse examples during prompting, yields optimal performance for news summarization." + } + ], + "Mengyu Huang": [ + { + "url": "http://arxiv.org/abs/2111.03376v1", + "title": "Simplex Initialization: A Survey of Techniques and Trends", + "abstract": "The simplex method is one of the most fundamental technologies for solving\nlinear programming (LP) problems and has been widely applied to different\npractical applications. In the past literature, how to improve and accelerate\nthe simplex method has attracted plenty of research. One important way to\nachieve this goal is to find a better initialization method for the simplex. In\nthis survey, we aim to provide an overview about the initialization methods in\nthe primal and dual simplex, respectively. We also propose several potential\nfuture directions about how to improve the existing initialization methods with\nthe help of advanced learning technologies.", + "authors": "Mengyu Huang, Yuxing Zhong, Huiwen Yang, Jiazheng Wang, Fan Zhang, Bo Bai, Ling Shi", + "published": "2021-11-05", + "updated": "2021-11-05", + "primary_cat": "math.OC", + "cats": [ + "math.OC" + ], + "main_content": "Introduction Linear Programming (LP) is a minimization or maximization optimization problem with linear objective, linear equality and inequality constraints. 
The applications range from engineering, agriculture, and transportation to the food industry, manufacturing, etc. The foundation of LP dates back to the work of Kantorovich [1939], where an optimization problem of production planning and organization was studied. Later, Dantzig, who is known as the father of LP, introduced the first general framework of LP and provided the basic primal simplex method for solving LPs [Dantzig, 1951]. Since then, LPs and the simplex method have been greatly explored and have led to a large number of extensions. (\u2217These authors contributed equally to this work.) Specifically, the dual simplex method [Lemke, 1954] was proposed in 1954. The primal and dual simplex are the two main streams in the simplex methods. In 1972, Klee and Minty [1972] constructed a special LP example and showed that the basic simplex method has exponential time complexity in the worst case. From that moment on, more research efforts turned to finding more efficient algorithms, either by improving the basic simplex method or by proposing new algorithms from other perspectives. In 1984, the interior point method (IPM) for solving LPs, also called Karmarkar\u2019s algorithm, was proposed in Karmarkar [1984]. This algorithm is the first practically feasible method that can solve LPs in polynomial time and has prompted many studies on variants of the IPM for solving LPs. The variants of the simplex method (primal and dual) and the variants of the IPM are the two main methods for solving LPs. In general, the simplex method is more competitive in practice, while the IPM has better time complexity in theory. In this survey, we will first introduce and compare the main ideas of the two methods and then focus on the extension works for improving the efficiency of the simplex method. The reasons for investigating the simplex method are threefold.
First, the simplex method plays an important role in solving some very common and important optimization problems, e.g., (mixed) integer LP problems. Second, unlike IPMs, which are thought to be theoretically and computationally mature, there still exist some research gaps for the simplex method. Last, most of the extension works of the simplex method include heuristics which are ad hoc in nature and could be further improved. In recent years, with the rapid development of machine learning and deep learning, it is interesting and meaningful to explore whether the existing heuristics can be improved by combining some learning methods, or to propose some learning-based simplex methods. Basically, the computational time of the simplex method is mainly spent in three stages, namely, the initialization stage, the iteration stage, and the termination stage. In the initialization stage, an initial point or an initial basis is provided for the simplex algorithm. Then, in the iteration stage, given a pivot rule, the algorithm performs the selection of the entering variable and the leaving variable to obtain an improved point or basis iteratively. Finally, in the termination stage, the algorithm ends when some designed termination conditions are satisfied. Accordingly, the extension works of the simplex method, which aim to improve algorithm efficiency and reduce computational cost, can be divided into three types based on which stage (initialization, iteration, or termination) is modified and improved. For the iteration stage, many extension works investigated and proposed new pivot rules to improve iteration efficiency, e.g., the minimal index pivot rule [Bland, 1977], the Edmonds-Fukuda pivot rule [Fukuda, 1982, Clausen, 1987], and the optimal pivot rule [Etoa, 2016], etc. For the termination stage, a general survey of different methods is given in Azlan et al. [2020].
In our survey, we focus on the extension works for improving the initialization stage. There are three reasons for studying this stage. First, the initialization stage plays a significant role in increasing algorithm efficiency. A good starting point can lead to a smaller number of iterations or less computation time within each iteration in the following stages, thus reducing the calculation time of the entire solution process. Second, the computational time to obtain a suitable initial point or basis can sometimes be much greater than the time spent in subsequent stages. Therefore, proposing improved algorithms or methods for accelerating the initialization stage is an important research direction. Third, compared with the iteration stage and the termination stage, there are still many research gaps in the initialization stage that remain to be filled. An overview of the related simplex initialization methods summarized in this survey is illustrated in Figure 1. The remainder of the survey is organized as follows. In the Preliminary section, we present the main algorithms of the simplex method and the IPM and compare the two methods. In the Initialization in Primal Simplex section and the Initialization in Dual Simplex section, we investigate the extension works for improving the initialization stage from the perspective of the primal simplex and the dual simplex, respectively. In the final section, we summarize the contributions of this survey and propose some possible future research directions. Notations: For a matrix A, A_{i.} and A_{.j} denote the i-th row and j-th column of A, respectively, and A_{ij} represents the element at the i-th row and j-th column of A. A^T and A^{-1} respectively denote the transpose and inverse of matrix A. rank(A) denotes the rank of matrix A. For a vector b, b_i denotes the i-th element of b. R is the set of real numbers and R^n is the n-dimensional Euclidean space.
R^{m \u00d7 n} denotes the space of m \u00d7 n real matrices. Given two sets C1 and C2, C1\\C2 = {s \u2208 C1 | s \u2209 C2}, and \u222a denotes the union of sets. \u2225\u00b7\u2225_2 and |\u00b7| respectively denote the Euclidean norm of a vector and the absolute value of a scalar. I_m denotes the m \u00d7 m identity matrix. 2 Preliminary As mentioned above, the simplex method and the IPM are the two main branches of solving LPs. In practice, there are always LPs where one algorithm significantly outperforms the other. In this section, we briefly review these two algorithms and compare them from different perspectives. Figure 1: Overview of the initialization methods in simplex. 2.1 Linear Programming Formulation 2.1.1 Primal/Standard Form.
Given a general LP problem, it can be formulated into the primal/standard form as: min_x c^T x, s.t. Ax = b, x \u2265 0, (P) where c \u2208 R^n, b \u2208 R^m and A \u2208 R^{m \u00d7 n} are problem-dependent parameters, and x \u2208 R^n is the decision variable. Without loss of generality, we assume rank(A) = m. Though LP problems may appear in other forms, trivial approaches can be applied to transform them into the standard form [Dantzig, 1965]. Therefore, it is sufficient to focus on this standard form to better introduce different LP methods. 2.1.2 Dual Form. The associated dual problem of (P) is defined as: max_{y,s} b^T y, s.t. A^T y + s = c, s \u2265 0, (D) where y \u2208 R^m is the (dual) decision variable associated with x and s \u2208 R^n is the introduced slack variable. The mathematical relationship between the primal-dual pair is given by the duality theorems as follows. Theorem 2.1 (Weak Duality) Given arbitrary feasible solutions x to (P) and (y, s) to (D), we have c^T x \u2265 b^T y. Proof. As x and (y, s) are feasible to (P) and (D), respectively, we have Ax = b, x \u2265 0, A^T y + s = c, s \u2265 0. Thus c^T x \u2212 b^T y = (A^T y + s)^T x \u2212 (Ax)^T y = s^T x \u2265 0. Theorem 2.2 (Strong Duality) If one of the primal-dual pair of problems admits an optimal solution, an optimal solution exists for the other problem, and for any optimal solution pair x^* and (y^*, s^*), the duality gap is zero, i.e., c^T x^* = b^T y^*. 2.2 The Simplex Method The simplex method searches for an optimal solution by visiting adjacent vertices, i.e., basic feasible solutions, of the feasible region. With the elegantly designed entering/leaving basic variables at each iteration, the objective function monotonically decreases/increases to the optimal value. 2.2.1 Basic Solutions. As rank(A) = m, A can be permuted into a partitioned-matrix form, i.e., A = [A_B, A_N], where A_B \u2208 R^{m \u00d7 m} is a non-singular sub-matrix of A.
De\ufb01nition 2.3 Any column collection of AB is called a basis of (P). Letting B and N be the associated column indices of AB and AN, respectively, (P) can then be rewritten into the canonical form as follows: min xB,xN cT BxB + cT NxN s.t. ABxB + ANxN = b xB, xN \u22650, (1) where cT = [cT B, cT N] and xT = [xT B, xT N] are permuted and partitioned similarly. The basic solution, which satis\ufb01es the equality constraints, is obtained by setting non-basic variables to zero. Thus, the primal basic solution based on the current partition is \u001axB = (AB)\u22121b xN = 0 , (2) Analogously, (D) can be written as max y,sB,sN bT y s.t. AT By + sB = cB AT Ny + sN = cN sB, sN \u22650. (3) Since the primal non-basic variables are complementary to the dual basic variables, the associated dual basic solution is obtained by letting sB = 0, i.e., \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 y = (AT B)\u22121cB sB = 0 sN = cN \u2212AT Ny , (4) In literature, \u00af b := A\u22121 B b is called the right-hand side (RHS) coef\ufb01cient, \u03c0 := (AT B)\u22121cB is the simplex multiplier and \u00af c := cN \u2212AT N\u03c0 is referred to as the reduced cost. Given the basis AB, the basic solution is said to achieve primal feasibility if and only if xB \u22650, while it achieves dual feasibility if and only if sN \u22650. Furthermore, if a basic solution is both primal feasible and dual feasible, it is an optimal solution as well. Additionally, a basis is said to be degenerate if there exists any xi\u2208B = 0. Degeneracy will cause cycling or stalling in practice, so as to in\ufb02uence the performance of simplex. 2.2.2 Pivoting. Starting with a feasible basis, the simplex method moves from one basis to a neighboring one, i.e., a basis that differs from the previous one by only one element, while preserving the feasibility. The selection of such entering/leaving (basis) variable is called the pivot rule. 
Geometrically, as a feasible basic solution is associated with a vertex of the feasible region, the simplex method goes through a vertex-to-vertex path to the optimum. After the pivoting operation, the newly generated bases have three features in common: 1) Exactly one column of A_B is changed; 2) The feasibility is preserved; 3) The objective function decreases/increases monotonically. 2.2.3 The Primal Simplex Method. According to the type of feasibility preserved during the iteration, the simplex method can be categorized into two classes, i.e., primal simplex and dual simplex. The primal simplex method is initialized with a primal feasible basis. Feasibility is maintained within iterations until optimality or unboundedness is detected. Therefore, the primal simplex algorithm can be summarized as Algorithm 1. In the algorithm, e_q denotes the unit vector which takes one at position q and zeros otherwise.

Algorithm 1 Primal Simplex
Require: Arbitrary primal feasible basis A_B
1: Repeat forever
2:   If s_N \u2265 0 then
3:     Break with optimality
4:   Else
5:     Select the entering index q \u2208 {j \u2208 N : s_j < 0} (monotonicity)
6:     Compute \u2206x_B = A_B^{-1} A_N e_q and t = (max_{i \u2208 B} \u2206x_i / x_i)^{-1}
7:     If t \u2264 0 then
8:       Break with unboundedness
9:     Else
10:      Select the leaving index p = argmax_{i \u2208 B} \u2206x_i / x_i (feasibility)
11:    End If
12:  End If
13:  Perform pivoting: B \u2190 B \u222a {q} \\ {p} (one column is changed)

2.2.4 The Dual Simplex Method. Instead of starting with a primal feasible basis, the dual simplex method requires a dual feasible one. Analogously, the dual simplex algorithm can be summarized as Algorithm 2.
Algorithm 2 Dual Simplex
Require: Arbitrary dual feasible basis A_B
1: Repeat forever
2:   If x_B \u2265 0 then
3:     Break with optimality
4:   Else
5:     Select the leaving index p \u2208 {i \u2208 B : x_i < 0} (monotonicity)
6:     Compute \u2206s_N = (A_B^{-1} A_N)^T e_p and r = (max_{j \u2208 N} \u2212\u2206s_j / s_j)^{-1}
7:     If r \u2264 0 then
8:       Break with (primal) infeasibility
9:     Else
10:      Select the entering index q = argmax_{j \u2208 N} (\u2212\u2206s_j / s_j) (feasibility)
11:    End If
12:  End If
13:  Perform pivoting: B \u2190 B \u222a {q} \\ {p} (one column is changed)

2.3 Interior Point Method (IPM) Different from the simplex methods, IPMs provide an alternative way of solving LP problems. The main idea is that from an initial point in the feasible region, there exists a path across the interior of the polyhedron along which an optimal point can be reached. That is to say, the LP problem could be solved within a single iteration if the \u201cright\u201d direction, i.e., the exact direction from the initial point to the optimal point, were found. Although this is usually impossible, the idea does motivate the emergence of IPMs. The most remarkable type of IPM is the path-following IPM, whose idea is to find an optimal solution by following a central path [Sonnevend, 1986]. Considering the primal-dual pair (P) and (D), by duality theory, it is clear that to find optimal solutions for both the primal and dual problems, the following system needs to be solved: Ax = b, x \u2265 0, A^T y + s = c, s \u2265 0, x_j s_j = 0, j = 1, . . . , n. (5) Here the first line guarantees primal feasibility, the second one guarantees dual feasibility, and the last one, called the complementarity condition, is componentwise and guarantees optimality.
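To make Algorithm 1 and the optimality system (5) concrete, here is a naive dense-matrix sketch in Python; the toy LP and tolerances are made up, and production codes use basis factorization updates and anti-cycling rules rather than this textbook ratio test:

```python
import numpy as np

def primal_simplex(A, b, c, B, tol=1e-9):
    """Naive dense sketch of the primal simplex method (Algorithm 1)."""
    m, n = A.shape
    B = list(B)
    while True:
        N = [j for j in range(n) if j not in B]
        x_B = np.linalg.solve(A[:, B], b)        # primal basic solution
        y = np.linalg.solve(A[:, B].T, c[B])     # simplex multiplier
        s_N = c[N] - A[:, N].T @ y               # reduced costs
        if (s_N >= -tol).all():                  # dual feasible -> optimal
            x = np.zeros(n)
            x[B] = x_B
            return x, y
        q = N[int(np.argmin(s_N))]               # entering index
        d = np.linalg.solve(A[:, B], A[:, q])    # direction Delta x_B
        if (d <= tol).all():
            raise ValueError("problem is unbounded")
        ratios = np.where(d > tol, x_B / np.where(d > tol, d, 1.0), np.inf)
        B[int(np.argmin(ratios))] = q            # leaving index via min ratio

# Toy instance: min -3*x1 - 2*x2 s.t. x1 + x2 + x3 = 4, 2*x1 + x2 + x4 = 6
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-3.0, -2.0, 0.0, 0.0])

x, y = primal_simplex(A, b, c, B=[2, 3])         # slack basis is feasible
s = c - A.T @ y
# The triple (x, y, s) satisfies system (5): feasibility plus complementarity.
assert np.allclose(A @ x, b) and (x >= -1e-9).all() and (s >= -1e-9).all()
assert np.allclose(x * s, 0)
```

On this instance the sketch terminates with x = (2, 2, 0, 0), and the componentwise products x_j s_j all vanish, exactly as the complementarity condition requires.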
By adding logarithmic barrier terms to the objective functions in (P) and (D), we can remove the nonnegativity constraints and construct the following logarithmic barrier problem pair: min_x {c^T x \u2212 \u00b5 \u2211_{j=1}^n log(x_j) : Ax = b}, max_{y,s} {b^T y + \u00b5 \u2211_{j=1}^n log(s_j) : A^T y + s = c}, (6) where \u00b5 > 0 is called the barrier parameter and log(\u00b7) is the logarithmic barrier function, which guarantees the positiveness of the variables x and s. The primal-dual IPM relaxes the complementarity condition and generates a monotone descent sequence {\u00b5_k} with limit 0. In the k-th iteration, \u00b5 = \u00b5_k is fixed and an approximate solution of the following nonlinear system is obtained by employing Newton\u2019s method: Ax = b, A^T y + s = c, Xs = \u00b5 1_n, (7) where X = diag(x_1, x_2, . . . , x_n) and 1_n = (1, 1, . . . , 1)^T \u2208 R^n. For each \u00b5 > 0, the system (7) has a unique solution {x(\u00b5), y(\u00b5), s(\u00b5)}, and the primal central path and the dual central path are defined as {x(\u00b5) : \u00b5 > 0} and {(y(\u00b5), s(\u00b5)) : \u00b5 > 0}, respectively. The optimum is approximately reached by letting \u00b5 \u2192 0. 2.4 Comparison Between Simplex Methods and IPMs The competition between the simplex methods and the interior point methods has lasted for a long time. In the following, we compare the simplex methods and the (primal-dual) interior point methods from five aspects. 1) Generated solution. The simplex methods can generate an optimal basic solution, while IPMs generate a sequence of strictly feasible primal and dual solutions and finally produce an \u03f5-optimal solution, i.e., a feasible solution pair satisfying c^T x \u2212 b^T y < \u03f5, where \u03f5 is small enough. 2) Complexity & Practical performance.
Since the number of vertices of the polyhedron might increase exponentially with the problem dimension, and the simplex method might go through all the vertices in the worst case, it has exponential time complexity [Klee and Minty, 1972]. Fortunately, various heuristics can be applied to enhance the practical performance of the simplex method. As a result, the simplex method performs much better than its theoretical worst-case bound. Furthermore, the expected complexity of some simplex methods is proved to be polynomial under a probabilistic model [Borgwardt, 2012]. The worst-case complexity of IPMs is polynomial time. Moreover, IPMs are thought by some scholars to be superior to the simplex method in solving large-scale sparse LP problems [Andersen et al., 1996]. 3) Initialization. In most cases [Maros, 2012], due to the requirement of a feasible basic solution to start with, the simplex methods consist of two phases [Dantzig, 1965], where Phase I is used to obtain the required feasible basic solution. A good initial basic solution can greatly reduce the computational cost of Phase II. However, in many cases, Phase I is much more time-consuming than Phase II. For IPMs, an initial point, which should be an interior point of the polyhedron, i.e., a feasible point for the LP problem, is needed. Usually, it is difficult to find a feasible initial point in practice. Although infeasible IPMs [Lustig, 1990] can be applied without such a feasible initial point, the price is a somewhat weaker complexity bound. The homogeneous and self-dual method [Ye et al., 1994] might be the most efficient interior-point method so far. It can be initialized by any point whose coordinate components are all positive, at the cost of only a slight increase in the computation of each iteration. 4) Warm-start.
In some applications, after the original LP problem is solved, a new LP problem, derived by making small modifications to the original one (e.g., perturbing the bounds of some variables, adding/dropping variables or constraints, etc.), needs to be solved. By starting from a point (a vertex for the simplex methods and an interior point for IPMs) yielded by the solving process of the original problem, it is expected that fewer steps/iterations are usually required to solve the new modified problem, since the obtained starting point could be very close to an optimal point. This strategy is called \u201cwarm start\u201d. For the simplex methods, starting a new problem from the optimal point of the original problem allows reaching the optimal point of the new problem quickly in most cases. However, despite many efforts that have been made [Gondzio, 1998, Gondizo and Vial, 1999, Yildirim and Wright, 2002, Gondzio and Grothey, 2002, Benson and Shanno, 2007, Gondzio and Grothey, 2008, John and Y\u0131ld\u0131r\u0131m, 2008], the warm start of IPMs has not shown efficiency as significant as that of the simplex methods. 5) Generalizations to other optimization problems. When using cutting plane methods to solve integer LP problems, a basic solution is required to generate cuts [Car\u00f8e and Tind, 1997, Ascheuer et al., 1993]. Therefore, the simplex methods have a natural advantage over IPMs. More importantly, due to the efficiency of warm start, dual simplex methods perform extremely well in branch-cut-and-bound schemes where a large number of modified subproblems need to be solved [Banciu, 2011, Koberstein, 2008, Ikura and Nemhauser, 1986, Koberstein and Suhl, 2007]. Since, by some extra purification processes, IPMs can also obtain a basic solution [Marsten et al., 1989, Megiddo, 1991], cutting plane methods are applicable for IPMs as well [Mitchell, 2000, 2003, Elhedhli and Goffin, 2004, Ding et al., 2004].
However, the warm start procedures of IPMs are not as efficient as those of the simplex methods, so there is no doubt that the simplex methods are superior to IPMs in solving (mixed) integer LP problems [Bixby, 2012]. Besides integer LP problems, some variants of simplex methods are designed for certain kinds of nonlinear optimization problems [Bazaraa et al., 2013], such as linear complementarity problems, nonlinear and semi-infinite optimization problems, etc. These kinds of problems can also be solved by applying IPMs [Nesterov and Nemirovskii, 1994]. Moreover, IPMs are more efficient when solving conic-linear optimization problems, especially semi-definite and second-order cone optimization problems [Ben Tal, 1998]. 3 Initialization in Primal Simplex The initialization methods in different primal simplex algorithms can be classified into three types. The first type is to generate an initial point or basis. The second is to obtain an improved point or basis based on a given point or basis; the improved one is then utilized as the starting point of the following steps. The third type is to accelerate the calculation process of the first two types. In the following subsections, methods belonging to these three types will be investigated, respectively. 3.1 Finding the Initial Basis or Point 3.1.1 Two-Phase Method. The simplex method usually proceeds in two phases. Phase I is terminated either with a feasible basic solution or with evidence that the problem is infeasible. If in Phase I a feasible basic solution is successfully found, then in Phase II, starting from the obtained solution, the simplex algorithm introduced before can be executed in search of the optimum. Actually, in the two-phase method, Phase I proceeds similarly to Phase II, except that it instead deals with an auxiliary problem, i.e., min_{x, x_a} 1_m^T x_a, s.t.
Ax + xa = b, x, xa \u22650, (8) where xa \u2208Rm is the introduced arti\ufb01cial variable and 1m \u2208Rm denotes the vector of ones. Since for any row with bi < 0, we can obtain bi > 0 by multiplying both sides by \u22121, without loss of generality, we can assume b \u22650. Therefore, the auxiliary problem has a straightforward feasible basic solution, i.e., x = 0 and xa = b. Starting from this solution, we can solve (8) with the simplex algorithm, and encounter two different cases at optimality: Case A xa \u0338= 0: The original problem (P) is infeasible; Case B xa = 0: There are two possibilities: 7 \farXiv Template A PREPRINT Sub-case B.1 No arti\ufb01cial variable remains in the basis: The basis of (8) is immediately a feasible basis of (P); Sub-case B.2 At least one arti\ufb01cial variable remains in the basis: Without loss of generality, assume such variable is in the i-th row, where \u00af bi=0. Select a column j with (A\u22121 B AN)ij \u0338= 0. Perform the pivoting with xj as the entering variable and the basic arti\ufb01cial variable in the i-th row as the leaving variable. After that, go to Sub-case B.1. Though the two-phase method can guarantee a feasible basic solution or evidence of infeasibility at Phase I, it introduces extra arti\ufb01cial variables, thus increasing the dimension, as well as the complexity of the problem. Actually, it has been proved that the problem of determining a feasible solution is of the same complexity degree as solving the LP problem itself [Papadimitriou and Steiglitz, 1998]. Therefore, Phase I can be very time-consuming in practice, usually even more time-consuming than Phase II [Stojkovi\u00b4 c et al., 2012]. 3.1.2 Big-M Method. The big-M method is a well-known method to initialize the simplex algorithm. It constructs a feasible basis by introducing arti\ufb01cial variables in the constraints, and eliminates them from the optimal basis by placing large penalty to them in the objective function. 
Specifically, the auxiliary problem is

min_{x, x_a}  c^T x + M 1_m^T x_a   s.t.  Ax + x_a = b,  x, x_a ≥ 0,   (9)

where x_a ∈ R^m is the introduced artificial variable and M ≫ 0 is a very large number. Similar to the two-phase method, problem (9) has a trivial feasible basis, i.e., x_a = b and x = 0. With such a basis, the primal simplex method can be applied to solve the problem. Note that since M is large, a high cost is paid for any x_a ≠ 0. Therefore, although we start with the basic variable x_a = b, it will be removed from the basis and pushed to zero in the optimal solution. When x_a = 0, problem (9) degrades to (P) and the obtained solution is directly an optimal solution to (P). If x_a ≠ 0 in the solution, the original problem is infeasible. The pivoting procedure of big-M is in fact the same as that of the two-phase method, and hence it is time-consuming as well. Moreover, the introduction of M brings two additional disadvantages. First, it is difficult to determine in practice how large M should be in order to successfully eliminate the artificial variables. Second, numerical issues occur when dealing with a number as large as M.

3.1.3 Cost Modifications.

The main idea of cost modifications is similar to the two-phase method: start with an auxiliary problem that has a straightforward feasible basis, and generate a feasible basis of the original problem by solving the auxiliary one. Recall the primal problem formulated as (P). The auxiliary problem is constructed by modifying the objective function, i.e.,

ĉ_j = A_{·j}^T (A_B^T)^{-1} c_B + δ_j  if j ∈ J,  and  ĉ_j = c_j  otherwise,   (10)

where J = {j ∈ N | s_j < 0} denotes the set of infeasible variables in the dual problem, and δ_j is a small positive perturbation to alleviate the problem of degeneracy. It is easy to verify that the basis of (10) is dual feasible, i.e., s_{j∈J} = δ_j ≥ 0.
After applying the dual simplex method, we obtain x_B ≥ 0 at optimality. Therefore the optimal basis of (10) is immediately a feasible basis for (P), and the primal simplex method can be applied subsequently to compute the optimal solution of the original problem. For a detailed review of this method, as well as its implementation and the corresponding tableau form, readers may refer to Wunderling [1996a] and Pan [2000].

3.1.4 Nonfeasible Basis Method.

The so-called nonfeasible basis method (NFB) was proposed by Nabli in 2009 [Nabli, 2009]. This method constructs an initial feasible basis from an infeasible one. It can be employed easily, without artificial variables or any perturbation of the objective function. The feasibility of the obtained basis is achieved by modifying the selection rules for the entering and leaving variables. This method is entirely different from the dual simplex algorithm and the criss-cross method [Zionts, 1969]. In the same paper [Nabli, 2009], Nabli introduced the notion of the formal tableau. By combining the NFB with the formal tableau, he proposed another method called the formal nonfeasible basis method (FNFB).

As mentioned above, the NFB handles the cases with an infeasible initial basis, i.e., the RHS vector b̄ has at least one negative component. For such scenarios, it is supposed that the matrix β = A_B^{-1} A_N satisfies the following condition:

∀ i ∈ {i | b̄_i < 0}, ∃ j s.t. β_{ij} < 0.   (11)

If this condition cannot be satisfied, the LP problem is infeasible. Considering the standard form given in (P), the procedure of the NFB is as follows:

1) Determine k = arg min_i {b̄_i | b̄_i < 0}.

2) Build the set K = {j | β_{kj} < 0}. If K = ∅, end the process; the original problem is infeasible.

3) Calculate c̄ = c_N − A_N^T (A_B^T)^{-1} c_B.

4) Select the entering variable index q = arg min_{j∈K} {−c̄_j / β_{kj}}.

5) Select the leaving variable index p = arg max_{i∈{1,...,m}} {b̄_i / β_{iq} | b̄_i < 0 and β_{iq} < 0}.

6) Repeat the process until b̄_i ≥ 0 for all i ∈ {1, . . . , m} (a feasible basis has been obtained).

3.1.5 Algebraic Simplex Initialization.

In 2015, Nabli and Chahdoura [2015] developed a new initialization method based on linear algebra and Gauss pivoting (hereinafter referred to as algebraic simplex initialization). This method finds a nonsingular initial basis, i.e., A_B is nonsingular, but not necessarily feasible. Therefore, the authors combined this method with the NFB of the previous subsection to achieve feasibility. A new pivot rule for the NFB was also proposed in this paper [Nabli and Chahdoura, 2015], which is advantageous in reducing the number of iterations and the computational time. The algebraic simplex initialization consists of at most four consecutive steps. The first step selects all the slack variables as basic variables and puts their corresponding columns of the coefficient matrix into the formed matrix A_B, which is empty before this step. If the LP problem contains no equality constraint, the obtained basis is already valid, i.e., the obtained A_B is nonsingular; otherwise, the subsequent steps need to be executed. The second and third steps are straightforward. Their main purpose is to continue selecting variables from the decision variables as new basic variables and to fill the columns of A_B accordingly, so that all the columns of the formed matrix A_B are linearly independent. After these two steps, if the formed matrix A_B is still singular, the last step is required: to complete the basis A_B, some so-called pivoting variables are introduced in this step. Finally, the obtained matrix A_B has the form shown in Figure 2.
Figure 2: The form of the obtained matrix A_B. The symbol '×' indicates that the corresponding entry is nonzero, whereas '∗' means that the entry is arbitrary.

Note that the pivoting variables differ from artificial variables, since the objective function and the initial problem do not depend on them, and they will be moved out of the basic variable set by some pivoting steps. Therefore, the algebraic simplex initialization is also artificial-free. Moreover, redundant constraints and infeasibility can be detected during the pivoting steps.

3.1.6 The Most-Obtuse-Angle Column Rule.

The most-obtuse-angle column rule [Pan, 1994a] combines, to some degree, the work of searching for a feasible basis with the work of searching for an optimal one [Wolfe, 1965]. In detail, this method achieves primal feasibility by iteratively applying a modified dual pivot rule. Geometrically, the leaving variable specifies the most obtuse angle with the uphill direction determined by the entering variable. If the uphill direction is close to the direction of the dual objective function, we can conclude from Figure 3 that a basis constructed in this way is more favorable from the perspective of the objective function. The full procedure of this method is as follows.

1) Select the leaving index p = arg min_{i∈B} x_i. If x_p ≥ 0, the basis is already feasible; go to step 4.

2) Compute Δs_N = (A_B^{-1} A_N)^T e_p. If Δs_N ≥ 0, the algorithm terminates with infeasibility. Otherwise, select the entering index q = arg min_{j∈N} Δs_j.

3) Perform the pivot B ← B ∪ {q} \ {p} and go to step 1.

4) Apply the primal simplex method to compute the optimum.

Since the feasibility of the other variables cannot be maintained in this method, cycling may occur even without degeneracy.
Although Guerrero-Garcia and Santos-Palomo [2005] provide a cycling example, such problems rarely appear in practice.

3.1.7 Infeasibility-Sum Method.

This method is named the infeasibility-sum method because it involves an auxiliary problem that minimizes the infeasibility sum, i.e., the negative sum of the infeasible variables. When all infeasible variables are eliminated in some iteration, a feasible basis is obtained. Let I = {i ∈ B | x_i < 0} denote the set of infeasible basic variables and construct the auxiliary problem as

min_x  −Σ_{i∈I} x_i   s.t.  A_B x_B + A_N x_N = b,  x_{B\I}, x_N ≥ 0.   (12)

Note that here we merely impose non-negativity on the variables that already satisfy the non-negative constraints. Therefore, the basic solution (2) is feasible for (12) and the primal simplex method can be applied to minimize the infeasibility sum, so as to compute a feasible basis of (P). Since non-negative constraints are imposed only on part of the variables, the selection of the leaving index differs slightly from the original algorithm.

Theorem 3.1 Let {ȳ_B, s̄_B, s̄_N} be the dual basic solution of the auxiliary problem. The original problem is infeasible if s̄_N ≥ 0.

Based on the above theorem, the detailed procedure of the infeasibility-sum method is as follows.

1) If x_B ≥ 0, the basis is feasible; go to step 3.

2) Form an auxiliary problem with respect to the current basis and compute s̄_N. If s̄_N ≥ 0, stop with infeasibility. Otherwise, apply one iteration of the modified primal simplex method and go to step 1.

3) Apply the primal simplex method to compute the optimum of the original problem.

For more details on this procedure, as well as the modified primal simplex algorithm, readers can refer to Pan [2014].

3.1.8 Logical Basis.

The logical basis is the simplest initial basis [Chvatal et al., 1983, Bertsimas and Tsitsiklis, 1997, Maros, 2012].
To form such a basis, a distinct logical variable is added after the decision variables for every constraint (equality and inequality). The columns corresponding to all the logical variables then form a unit matrix that can be used as an initial basis, i.e., A_B = I. The logical basis has three main advantages: first, its creation is trivial; second, the inverse of A_B is just the identity matrix I, available without any computation; third, the first iterations are very fast, as the LU factorization is sparse. However, the logical basis generally leads to substantially more iterations, and thus more advanced initial bases are desirable.

3.1.9 Crash Basis.

Compared with an extremely sophisticated algorithm that provides a good initial basis, a crude algorithm that provides a reasonably good initial basis quickly is often preferred. Therefore, some heuristic algorithms, called crash procedures, have emerged. These crash algorithms are used to quickly find a good initial basis with as many decision variables as possible. Usually, the obtained basis is a triangular basis, owing to some irreplaceable benefits: first, there will be no fill-in in the subsequent iterations; second, the inverse of a triangular matrix can be computed with good numerical accuracy; third, a triangular basis is easy to create; last but not least, operations with triangular matrices are less time-consuming. In the LP context, there are two types of triangular basis: the lower triangular basis and the upper triangular basis. Both types have zero-free diagonals. Most triangular crash procedures are based on the same conceptual framework, as follows. First, partition the coefficient matrix A into [Â, I], where Â corresponds to the decision variables and I corresponds to the logical variables.
Then, define the row and column counts, R_i and C_j, as the numbers of nonzeros in the i-th row and the j-th column of Â, respectively. For a lower triangular basis, select a pivot row i = arg min_k {R_k}. If R_i = 1, the pivot column is unique; otherwise, the column with the smallest column count should be selected to keep the number of nonzero elements in A_B small. To avoid transformations at inversion/factorization, all columns with a nonzero in row i are excluded from the subsequent selection process. Then, update the row and column counts for the remaining rows and columns of Â and repeat the above procedure. The main idea for the upper triangular basis is similar. In practice, triangular crash procedures include further considerations, such as feasibility (CRASH(LTSF)) and degeneracy (CRASH(ADG)). More details can be found in Maros [2012], Maros and Mitra [1998].

3.1.10 CPLEX Basis.

The CPLEX basis was proposed by Bixby [1992]. Its essential purpose is to construct a sparse, well-behaved basis with as few artificial variables and as many free variables as possible. The CPLEX basis may not include many variables of an optimal basis, but it reduces the work of removing artificial variables. To find such a basis, a preference order of the variables is constructed first. Consider the given LP problem in the following form:

min_{x, s_1, s_2}  c^T x   s.t.  A_1 x + s_1 = b_1,  A_2 x − s_2 = b_2,  A_3 x = b_3,  l ≤ x ≤ u,  s_1 ≥ 0,  s_2 ≥ 0,   (13)

where x = (x_1, . . . , x_n)^T are decision variables, and s_1 = (x_{n+1}, . . . , x_{n+m_1})^T and s_2 = (x_{n+m_1+1}, . . . , x_{n+m_1+m_2})^T are slack variables. All the indices of variables can be divided into four sets:

C_1 = {n + 1, . . . , n + m_1 + m_2},  C_2 = {j : x_j free},  C_3 = {j ≤ n : exactly one of l_j and u_j is finite},  C_4 = {j ≤ n : both l_j and u_j are finite},

where C_i is preferred to C_{i+1} (i = 1, 2, 3). Note that C_1 is just the set of indices of all the slack variables, which is the most preferred set owing to its sparsity and numerical properties. Then, define a penalty q̄_j for j ∈ {1, . . . , n + m_1 + m_2}:

q̄_j = 0 if j ∈ C_2;  q̄_j = l_j if j ∈ C_3 and u_j = +∞;  q̄_j = −u_j if j ∈ C_3 and l_j = −∞;  q̄_j = l_j − u_j if j ∈ C_4.   (14)

Next, define

c_max = 1000 · max{|c_j| : 1 ≤ j ≤ n} if this maximum is nonzero, and c_max = 1 otherwise.   (15)

Finally, for j ∈ {1, . . . , n}, define

q_j = q̄_j + c_j / c_max.   (16)

The indices in the sets C_2, C_3 and C_4 are sorted in ascending order of q_j, and the lists are concatenated into a single ordered set C = {j_1, . . . , j_n} with the freest decision variable in front. Now the basis A_B can be constructed according to the steps shown in Bixby [1992]. The construction of the CPLEX basis is quite simple and fast. As a result, it is considered a good default initial basis. Computational results suggest that the CPLEX basis performs well on easy problems, but it is generally less effective on difficult ones.

3.1.11 Tearing Algorithm.

Gould and Reid [1989] proposed a remarkable initialization algorithm for large-scale, sparse LP problems, which finds an initial basis that is as feasible as possible at a reasonable computational cost. This algorithm is called the tearing algorithm. Its main idea is to break the initialization problem into several smaller pieces and solve each of them. There are two main assumptions in the tearing algorithm. First, the coefficient matrix A can be transformed into a lower block triangular matrix with small blocks by permuting its rows and columns. Second, an efficient simplex solver is available for solving dense LP problems with fewer than t rows, where t is a small number.
It is assumed that after some row and column permutations, the permuted matrix has the following block lower-triangular form:

A = \begin{bmatrix} A_{11} & & & & \\ A_{21} & A_{22} & & & \\ \vdots & & \ddots & & \\ A_{r1} & A_{r2} & \cdots & A_{rr} & \\ A_{(r+1)1} & A_{(r+1)2} & \cdots & A_{(r+1)r} & 0 \end{bmatrix},   (17)

where A_{ij} ∈ R^{m_i × n_j}. Note that m_i and n_j are positive integers for all i, j ∈ {1, 2, . . . , r}, but m_{r+1} may be zero and is thus only nonnegative. Generally, the sizes m_i of the blocks are very small. Such a block lower-triangular form can be obtained by the algorithm proposed by Erisman et al. [1985]. Then, the following problem is considered:

min_{v,w}  1_m^T (v + w)   (18a)
s.t.  A x + v − w = b, with x = (x_1, . . . , x_{r+1}), v = (v_1, . . . , v_{r+1}), w = (w_1, . . . , w_{r+1}) and b = (b_1, . . . , b_{r+1}) partitioned conformally with the blocks of (17),   (18b)
l_i ≤ x_i ≤ u_i,  v_i ≥ 0,  w_i ≥ 0,  i = 1, . . . , r + 1,   (18c)

where x_i ∈ R^{n_i}, v_i, w_i, b_i ∈ R^{m_i}, and l_i and u_i are the lower and upper bounds of x_i, respectively. Note that a feasible solution is reached when 1_m^T (v + w) = 0. Although feasibility may not always be achieved, a basis near the feasible region can be obtained. It is easy to see that the first block of (18b) requires that the following conditions be satisfied:

A_{11} x_1 + v_1 − w_1 = b_1,  l_1 ≤ x_1 ≤ u_1,  v_1, w_1 ≥ 0,   (19)

and it is expected that both v_1 and w_1 are driven to zero.
Assuming that m_i ≤ t for all i, this may be achieved by using DLP to minimize 1_{m_1}^T (v_1 + w_1) subject to (19), which produces the optimal solution x̂_1, v̂_1, ŵ_1 and a set of m_1 basic variables, possibly including some components of v_1 and w_1. The obtained solutions v̂_1 and ŵ_1 will be zero if the original problem is feasible. Moving to the k-th stage (1 < k ≤ r) of the algorithm, the optimal solutions x̂_i, v̂_i, ŵ_i for all 1 ≤ i < k and a set of m_1 + · · · + m_{k−1} basic variables have already been obtained. Then, the following conditions need to be satisfied:

A_{kk} x_k + v_k − w_k = b_k − Σ_{i=1}^{k−1} A_{ki} x̂_i,  l_k ≤ x_k ≤ u_k,  v_k, w_k ≥ 0,   (20)

and v_k and w_k should preferably be zero. Similarly, this can be achieved by minimizing 1_{m_k}^T (v_k + w_k) subject to (20). Finally, as the (r + 1)-th diagonal block is 0, x_{r+1} makes no contribution. Therefore, v̂_{r+1} and ŵ_{r+1} are calculated according to

v̂_{r+1} = max{0, b_{r+1} − Σ_{i=1}^{r} A_{(r+1)i} x̂_i},   (21)
ŵ_{r+1} = max{0, −b_{r+1} + Σ_{i=1}^{r} A_{(r+1)i} x̂_i},   (22)

where the maximum is taken elementwise. The variables in v̂_{r+1} and ŵ_{r+1} with nonzero values are chosen to be basic. To make up a full basis, some variables are picked arbitrarily to cover the remaining rows at the end of the process. For all k > 1, if v̂_k ≠ 0 or ŵ_k ≠ 0, backtracking is executed: if m_j + m_{j+1} + · · · + m_k ≤ t holds for some j, then the subproblems of stages j, j + 1, . . . , k can be integrated into one problem, which is solved by DLP.

3.1.12 The Cosine Criterion.

The cosine criterion is inspired by the observation that the optimal vertex is usually formed by the constraints that make the minimum angle with the objective function (Figure 3). Though similar ideas have been investigated in Murshed et al. [1993] and Stojković and Stanimirović [2001], these algorithms cannot be implemented efficiently due to the existence of redundant constraints. Junior and Lins [2005] and Hu [2007] proposed new algorithms that can handle redundant constraints. In these algorithms, though the cosine criterion cannot guarantee an optimal solution, the obtained vertex turns out to be a near-optimal point. Starting from such a vertex reduces the number of iterations required by the simplex method, thus speeding up the solution process.

Figure 3: Illustration of the observation, plotted using Plot 2D/3D region [Bergström, 2021]. The bold lines represent the constraints that make the minimum angle with the objective function, which is denoted by the dashed line.

With a slight abuse of notation, we initialize B = ∅ and let N = {1, . . . , n} be the corresponding complementary set. Each time, one variable is moved from N to B, i.e., B = B ∪ {q} and N = N \ {q}, where q is selected based on the angle and the rank of A_B, i.e.,

q = arg max_{j∈N} { α_j | ‖(N̄_2)_{·j}‖_∞ ≠ 0 },   (23)

where α_j = (A_{·j})^T b / ‖A_{·j}‖ is named the dual pivoting index, which is proportional to the cosine of the angle between A_{·j} (the j-th constraint of (D)) and b (the objective function of (D)), and N̄_2 is a matrix calculated based on an LU factorization. The condition ‖(N̄_2)_{·j}‖_∞ ≠ 0 ensures that the constructed basis A_B is nonsingular. Depending on the feasibility of the obtained basis, either the primal or the dual simplex method is applied to solve the problem. If the basis is infeasible, the other initialization methods introduced in this section can be used to generate a feasible one. The advantage of the cosine criterion is that it can reduce the number of iterations significantly, by up to 40% on Netlib problems.
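As a concrete illustration of the selection rule (23), the dual pivoting index α_j can be computed directly from the problem data. The following is a minimal numpy sketch; the nonsingularity check via N̄_2 and its LU factorization are omitted, so this only shows the angle-based scoring:

```python
import numpy as np

def dual_pivoting_indices(A, b):
    """Cosine-criterion scores: alpha_j = (A_col_j . b) / ||A_col_j||.

    Each alpha_j is proportional to the cosine of the angle between the
    j-th column of A (a dual constraint) and the dual objective b."""
    norms = np.linalg.norm(A, axis=0)
    return (A.T @ b) / norms

# Columns are candidate variables to move into B; a larger alpha_j means
# a smaller angle with the dual objective, so that column is preferred.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 1.0])
alpha = dual_pivoting_indices(A, b)
preferred = int(np.argmax(alpha))
```

Here column 2 points in the same direction as b, so it attains the largest score; in the full method this candidate would still have to pass the ‖(N̄_2)_{·j}‖_∞ ≠ 0 rank test before entering B.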
However, the calculation of N̄_2 requires an LU factorization, which unfortunately tends to be time-consuming as well. Furthermore, as the obtained basis is not likely to be sparse, the computation time per iteration may increase. Therefore, the overall efficiency may not improve much.

3.1.13 Triangular and Fill-Reducing Basis.

One computational difficulty of the simplex method lies in the calculation of the basis inverse. In computational practice, LU factorization is used to accelerate this process. To further improve time efficiency, Ploskas et al. [2020] aim to find a sparse and near-triangular basis so that the factorization becomes easier. In this case, though the number of iterations may increase, the computation time per iteration is reduced and the overall efficiency improves. Ignoring the objective and the constraints, the algorithm permutes the matrix A as

A = \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix},   (24)

where A_{11} contains the maximal diagonal factors of A. If rank(A_{11}) ≥ m, i.e., A_{11} is large enough to form an initial basis, the algorithm stops. Otherwise, A_{22} is subsequently ordered by a fill-reducing ordering, and the first m − rank(A_{11}) columns are selected to complete the basis. Note that with A_{11}, the constructed basis will be as triangular as possible. Additionally, as A_{22} is ordered for low fill-in, sparsity is preserved during the iterations, so the computation time per iteration is greatly reduced. However, since both the objective function and the constraints are completely ignored, the initial vertex may be far from optimal, thus requiring more iterations to terminate. Compared with the cosine criterion, which focuses on a near-optimal starting point, there exists a trade-off between faster iterations and fewer iterations.
A near-optimal basis tends to require fewer iterations, while a sparse, near-triangular basis speeds up each iteration.

3.1.14 Idiot Crash Algorithm (ICA).

The main idea of the ICA [Galabova and Hall, 2020] is to relax the original LP problem to an approximate problem with "soft" constraints, and then to solve this relaxed problem to obtain a near-optimal point. This point is later used as the starting point of the simplex method for solving the original problem. Recalling the standard LP defined in (P), the ICA obtains the relaxed problem by replacing the equality constraint with two extra terms in the objective function, a linear Lagrangian term and a quadratic penalty term:

min_x  c^T x + λ^T (Ax − b) + (1/(2μ)) ‖Ax − b‖_2^2   s.t.  x ≥ 0,   (25)

where λ is the Lagrange multiplier and μ is a penalty weight. This relaxed problem can be easily solved by existing methods, such as IPMs. As the penalty weight 1/μ goes to infinity, the optimal solution of the relaxed problem converges to the optimal solution of the original LP problem. To obtain a near-optimal point of the original problem, in each iteration the ICA updates the parameters λ and μ by some heuristic rules and then solves the corresponding relaxed problem. The total number of ICA iterations is finite and predefined heuristically. As shown previously, the basic simplex method begins with a basic feasible solution (a vertex of the feasible region). However, after finitely many iterations, the near-optimal point obtained by the ICA may be an interior point of the feasible region. To obtain a basic feasible solution near the point given by the ICA, a crossover procedure is added. For some LP problems with special structures, the crossover procedure can be further accelerated to improve efficiency. These methods will be discussed in the Accelerating the Initialization subsection.
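To make the relaxation (25) concrete, the sketch below solves one relaxed subproblem for fixed λ and μ. The inner solver here is simple projected gradient descent, which is our illustrative choice rather than the solver used in the ICA itself, and the heuristic updating rules for λ and μ, as well as the crossover step, are left out:

```python
import numpy as np

def ica_subproblem(c, A, b, lam, mu, x0, steps=500, lr=5e-4):
    """Minimize  c.x + lam.(Ax - b) + (1/(2*mu)) * ||Ax - b||^2  over x >= 0
    by projected gradient descent (an illustrative inner solver; the step
    size lr must be small relative to mu for the iteration to converge)."""
    x = x0.copy()
    for _ in range(steps):
        r = A @ x - b                          # constraint residual
        grad = c + A.T @ lam + (A.T @ r) / mu  # gradient of the relaxed objective
        x = np.maximum(x - lr * grad, 0.0)     # gradient step + projection onto x >= 0
    return x

# Tiny LP: min x1 + x2  s.t.  x1 + x2 = 1, x >= 0 (optimal value 1).
c = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x = ica_subproblem(c, A, b, lam=np.zeros(1), mu=1e-3, x0=np.zeros(2))
```

With a small μ, the returned x is nearly feasible (x1 + x2 ≈ 0.999 here) and near-optimal, but it is an interior-like point rather than a vertex, which is exactly why the subsequent crossover step is needed.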
An advantage of the ICA is that it transforms the original problem into a more tractable problem without the equality constraint and obtains a near-optimal point quickly. Nevertheless, one challenge of the ICA is how to design the heuristic parameter-updating rules. If the rules are not chosen properly, the obtained point will not be a good starting point and the efficiency of the algorithm will not improve much.

3.2 Finding an Improved Starting Point

Instead of finding an initial basis or point as discussed in the previous subsection, some methods utilize the idea of IPMs to obtain an improved point from a given initial point, and then use this new point as the starting point of the simplex method.

3.2.1 ε-Optimality Search Direction.

The ε-optimality search direction algorithm was proposed in Luh and Tsaih [2002]. It was motivated by the fact that IPMs can approach the neighborhood of the optimal solution faster than the simplex method. In this algorithm, an improved point is obtained by moving in a proposed direction; this point is then used as the starting point of the simplex method for computing the optimal solution of a given LP problem. The proposed direction combines an interior direction of the feasible region with the negative direction of the objective. The algorithm works with a normalized LP problem:

min_x  c^T x   s.t.  Ax ≥ b,  x ≥ 0,   (26)

where ‖c‖_2^2 = 1 and ‖A_{i·}‖_2^2 = 1 for all i ∈ {1, 2, . . . , m}. Any LP problem can easily be transformed into this normalized version by choosing c ← c/‖c‖_2, A_{i·} ← A_{i·}/‖A_{i·}‖_2, b_i ← b_i/‖A_{i·}‖_2. This normalization does not change the optimal solution of the original LP problem.
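The normalization step preceding the algorithm can be sketched as a small numpy helper (function name ours); scaling c and each row of (A, b) by a positive constant leaves the feasible region and the minimizer unchanged:

```python
import numpy as np

def normalize_lp(c, A, b):
    """Scale c to unit 2-norm and each constraint A_i x >= b_i by ||A_i||_2,
    as required by the normalized problem (26)."""
    c_n = c / np.linalg.norm(c)
    row_norms = np.linalg.norm(A, axis=1, keepdims=True)
    return c_n, A / row_norms, b / row_norms.ravel()

c = np.array([3.0, 4.0])
A = np.array([[6.0, 8.0],
              [1.0, 0.0]])
b = np.array([10.0, 2.0])
c_n, A_n, b_n = normalize_lp(c, A, b)
```

After the call, ‖c_n‖ = 1 and every row of A_n has unit norm, so the angle computations used later by the algorithm are well defined.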
Definition 3.2 Given a feasible point x, if for every δ > 0 the set {x′ | ‖x′ − x‖_2^2 < δ} is not contained in the feasible region, then x is called a boundary point.

Given a boundary point x, the algorithm defines two sets:

Ω_1 = {i | A_{i·} x = b_i},  Ω_2 = {j | x_j = 0},   (27)

which collect the indices of the constraints active at the given boundary point x. Based on these two sets, the algorithm calculates a vector h:

h = ( Σ_{i∈Ω_1} A_{i·}^T + Σ_{j∈Ω_2} e_j ) / ‖ Σ_{i∈Ω_1} A_{i·}^T + Σ_{j∈Ω_2} e_j ‖_2,   (28)

where e_j is the unit vector whose j-th element is 1 and whose other elements are 0; the dimension of e_j is consistent with that of A_{i·}^T. Then, based on h, the direction for obtaining an improved point starting from the given point x is defined as

g = 0 if h = c,  and  g = Proj( (h − c)/‖h − c‖_2 ) if h ≠ c,   (29)

where c is the vector in the objective function. The projection operation Proj(·) guarantees the feasibility of the direction; its mathematical form can be found in Luh and Tsaih [2002]. Starting from the current boundary point x, the direction g points to the interior of the feasible region. Since the LP problem attempts to minimize the objective, the algorithm also constructs a proper step size η and proves that the objective is reduced when moving in the direction g with step size η. Moreover, with this specific step size, the new point obtained after one iteration, i.e., x′ = x + ηg, is also a boundary point of the feasible region, so the iteration can be repeated from x′. One important implementation detail is that the search direction given in (29) is replaced by Proj(c) when the step size is less than a predefined value.
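A minimal sketch of the construction (28)–(29) follows. The feasibility projection Proj(·) is omitted (g below is only the unprojected direction), and the function name and tolerance are ours:

```python
import numpy as np

def improved_direction(A, c, x, b, tol=1e-9):
    """Compute h from (28) and the unprojected direction of (29) at a
    boundary point x of the normalized problem Ax >= b, x >= 0."""
    omega1 = [i for i in range(A.shape[0]) if abs(A[i] @ x - b[i]) < tol]
    omega2 = [j for j in range(A.shape[1]) if abs(x[j]) < tol]
    v = np.zeros(A.shape[1])
    for i in omega1:
        v += A[i]            # gradient A_i of each active constraint
    for j in omega2:
        v[j] += 1.0          # e_j for each active bound x_j = 0
    h = v / np.linalg.norm(v)    # the zero-denominator anomaly is handled by (30)
    d = h - c
    if np.linalg.norm(d) < tol:  # h = c  =>  g = 0
        return h, np.zeros_like(d)
    return h, d / np.linalg.norm(d)

# Unit-norm data: the constraint x2 >= 0.5 and the bound x1 = 0 are active at x.
A = np.array([[0.0, 1.0]])
b = np.array([0.5])
c = np.array([1.0, 0.0])
x = np.array([0.0, 0.5])
h, g = improved_direction(A, c, x, b)
```

In this example g increases x2, moving off the active constraint toward the interior, but its x1 component is negative even though x1 sits at its bound; this is precisely the situation the omitted Proj(·) operator exists to correct.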
This improves the efficiency of the algorithm when the current point is close to the optimal point. According to the experimental results in Luh and Tsaih [2002], the ε-optimality search direction algorithm can reduce the iteration count of the basic simplex method by about 40%. An extension of this algorithm was introduced in Chaderjian and Gao [2003]. In this extension, a special example shows that the denominator in (28) can be zero. To handle this anomaly, the new algorithm redefines h as

h = 0 if Σ_{i∈Ω_1} A_{i·}^T + Σ_{j∈Ω_2} e_j = 0,  and  h = ( Σ_{i∈Ω_1} A_{i·}^T + Σ_{j∈Ω_2} e_j ) / ‖ Σ_{i∈Ω_1} A_{i·}^T + Σ_{j∈Ω_2} e_j ‖_2 otherwise.   (30)

The new algorithm also shows that if the initial direction is chosen as −c, the efficiency can be further improved. In addition, the extension corrects several mathematical errors in Luh and Tsaih [2002]. The ε-optimality search direction algorithm and its improved version [Luh and Tsaih, 2002, Chaderjian and Gao, 2003] can be regarded as auxiliary tasks performed before the basic simplex method. An interesting issue to investigate is the trade-off between the number of iterations spent finding an improved starting point and the total computation time for solving the LP problem.

3.2.2 Hybrid-LP Method.

The hybrid-LP method was introduced in Al-Najjar and Malakooti [2011]. Its idea is similar to the ε-optimality search direction algorithm, but instead of designing the iterative direction from a given point, it obtains the direction from the non-basic variables (NBVs).
The hybrid-LP method experimentally shows a reduction in both the iteration number and the total computation time. The process of the hybrid-LP method can be divided into the following five steps: 1) Select $k$ NBVs to construct the iterative direction. The value of $k$ is chosen as $k = \alpha \min(m, n-m)$, where $m$ is the number of constraints and $n-m$ is the number of NBVs. The variable $\alpha \in [0, 1]$ is a predefined parameter. 2) Divide these $k$ selected variables into two sets based on whether a change in the variable will result in an increase or decrease in the objective function. Denote these two sets by $s_I$ and $s_D$, respectively. 3) Construct the iterative direction as $d = \zeta d_I + (1-\zeta) d_D$, where $d_I$ is generated by NBVs in $s_I$, while $d_D$ is given by NBVs in $s_D$. The parameter $\zeta$ is selected based on the rule that the direction must lead to an improved objective value. 4) Given the current point $x$, find the maximum step size $\theta$ such that the point $x'$ obtained after one iteration along the direction $d$, i.e., $x' = x + \theta d$, is still within the feasible range. 5) Find a nearby basic feasible solution (vertex) of $x'$ by the reduction process or some crossover methods, and treat it as the starting point of the basic simplex method to solve the original LP problem. In the experiments, the parameters $\alpha$, $\zeta$, etc., are chosen heuristically. Therefore, one possible way to improve the hybrid-LP method is to optimize the parameter selection. Similar to the $\epsilon$-optimality search direction algorithm, it is also meaningful to study the trade-off between the iteration number of the hybrid-LP method and the total running time.

3.3 Accelerating the Initialization

The methods in the previous two subsections either attempt to obtain an initial point (basis) or try to obtain an improved point along a defined direction.
In fact, there are some other methods that try to speed up some of the methods mentioned above, i.e., accelerate Phase I to improve the efficiency of the overall algorithm for solving the LP problem.

3.3.1 Quick Simplex Method. The idea of the quick simplex method [Vaidya and Kasturiwale, 2014] is to perform the pivoting based on multiple pairs of variables instead of only one pair. One important application of this method to accelerate the simplex initialization can be shown in the two-phase method introduced in the Finding the Initial Basis or Point subsection [Vaidya and Kasturiwale, 2016]. With the quick simplex method, the pivoting operation of Phase I in the two-phase method is modified as follows: 1) Define the largest number of variable pairs selected in each pivoting operation as $N_p$. 2) Calculate the simplex tableau with the current basis. 3) Select at most $N_p$ entering variables based on the reduced costs given in the simplex tableau and calculate the ratios. 4) Select the corresponding number of leaving variables with the smallest ratios. The selected leaving variables should not be in the same row of the simplex tableau. 5) End the process if the remaining basis does not contain any artificial variables. The quick simplex method can also be implemented to accelerate the pivoting of Phase II in the basic simplex method, or other simplex initialization methods with a similar iterative process.

3.3.2 Smart Crossover. Most simplex methods begin with a basic feasible point (vertex) to solve the LP problem. To obtain such a starting point, the crossover operation is needed in some simplex initialization methods, such as the ICA mentioned in the Idiot Crash Algorithm (ICA) subsection. However, the crossover operation can be very time-consuming. Therefore, it is necessary and meaningful to propose smart crossover methods which are more efficient. Ge et al. [2021] introduced two network crossover methods which can deal with the minimum cost flow (MCF) problem and the optimal transport (OT) problem. These two problems are two special types of LPs. Specifically, the MCF problem can be easily transformed into an equivalent OT problem.

Column Generation Method: Given a directed graph $G = (N, A)$, $N$ and $A$ are the entire node set and the arc set, respectively. The node set $I(i)$, $\forall i \in N$, includes all nodes that have an arc pointing to node $i$, while the node set $O(i)$, $\forall i \in N$, is comprised of all nodes to which node $i$ has an arc. The general form of the MCF problem is given as follows:
$$\min_x \sum_{(i,j) \in A} c_{ij} x_{ij} \quad \text{s.t.} \quad b_i + \sum_{j \in I(i)} x_{ji} = \sum_{j \in O(i)} x_{ij}, \ \forall i \in N; \qquad 0 \le x_{ij} \le u_{ij}, \ \forall (i,j) \in A, \quad (31)$$
where $c_{ij}$ and $u_{ij}$ denote the cost and the capacity limit on the arc $(i,j) \in A$, respectively. The variable $b_i$ represents the external supply of node $i \in N$. The goal of the MCF problem is to design the amount of flow on each arc, i.e., $x_{ij}$, $\forall (i,j) \in A$, to minimize the total flow cost while satisfying the node flow balance and the arc capacity constraints. Given the amount of flow on each arc, the maximal flow of a node $i$ is defined as:
$$x_i^f = \sum_{j \in O(i)} x_{ij} + \sum_{j \in I(i)} x_{ji}, \ \forall i \in N. \quad (32)$$
Given an arc $(i,j)$, the flow ratio of the arc is defined as:
$$f_{ij} = \max\left\{ \frac{x_{ij}}{x_i^f}, \frac{x_{ij}}{x_j^f} \right\}, \ \forall (i,j) \in A. \quad (33)$$
In Ge et al. [2021], the authors provide an important property of the MCF problem, which can help to select the basis based on an approximate solution of (31). The property claims that an arc with a larger flow ratio is more likely to be included in the basis. Therefore, given an approximate solution, we can sort all arcs according to their flow ratio values and select the first $|N|$ arcs to create a set $\{s_1, s_2, \ldots, s_{|N|}\}$, where $|N|$ is the number of elements in the set $N$.
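The flow-ratio ranking of (32)-(33) is straightforward to compute from an approximate solution; a minimal sketch with hypothetical names:

```python
def flow_ratios(arcs, x):
    """Sketch of eqs. (32)-(33): maximal node flows and arc flow ratios,
    then the |N| highest-ratio arcs as basis candidates."""
    nodes = {i for arc in arcs for i in arc}
    xf = {n: 0.0 for n in nodes}
    for (i, j), xij in zip(arcs, x):      # eq. (32): out-flow + in-flow
        xf[i] += xij
        xf[j] += xij
    f = {(i, j): max(xij / xf[i], xij / xf[j])   # eq. (33)
         for (i, j), xij in zip(arcs, x)}
    ranked = sorted(f, key=f.get, reverse=True)
    return f, ranked[:len(nodes)]

arcs = [(0, 1), (0, 2), (1, 2)]
x    = [3.0, 1.0, 3.0]                    # an approximate flow
f, candidate_basis = flow_ratios(arcs, x)
```

In this toy graph the two arcs carrying most of the flow dominate the ranking, matching the intuition that high-ratio arcs are likely basis members.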
Intuitively, this set gives the potential arcs which should be included in the basis based on the current approximate solution. With this potential set, a column generation basis identification process is used to obtain a feasible basis of the original problem. First of all, the original problem should be converted into an equivalent problem in the standard form. Then, the artificial variable is added based on the big-M method mentioned before. After that, at each iteration, an index set $D_k$ is constructed as:
$$D_k \leftarrow D_{k-1} \cup D_{\{s_1, s_2, \ldots, s_{n_k}\}}, \quad (34)$$
where $n_k$ is a monotonically increasing sequence of integers. The set $D_{\{s_1, s_2, \ldots, s_{n_k}\}}$ includes the indices of the first $n_k$ arcs in the potential basis set. With the column generation method, the problem at each iteration becomes the following form:
$$\min_x \sum_{j \in D_k} c_j x_j \quad \text{s.t.} \quad \sum_{j \in D_k} A_{\bullet j} x_j = b, \quad x_{j \notin D_k} = 0, \quad x \ge 0. \quad (35)$$
Once there is no artificial variable in the basis at a particular iteration, the feasible basis for the original problem near the approximate solution is obtained. This method can be further implemented to obtain an $\epsilon$-optimal feasible solution by changing the update rule of the index set $D_k$ as follows:
$$D_k \leftarrow D_{k-1} \cup D_{\{s_1, s_2, \ldots, s_{n_k}\}} \cup \{j : c_j < -\epsilon\}, \quad (36)$$
where $c_j$ is the reduced cost.

Spanning Tree Method: Given an MCF problem, a basic feasible solution of this problem should be a feasible tree solution of the graph's spanning tree. Therefore, in the spanning tree method, the task of finding the basic feasible solution can be replaced by the task of constructing a spanning tree with the largest sum of flow ratios. However, if the approximate solution is not accurate, the tree solution may be infeasible. Thus, one more step to take under this condition is to push the infeasible tree solution to a feasible one. For the OT problem, this step can be easily implemented.
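Returning to the column generation method, the index-set updates (34) and (36) amount to simple set unions; a sketch with hypothetical names:

```python
def update_index_set(D_prev, potential, n_k, reduced_cost=None, eps=None):
    """Sketch of updates (34)/(36): grow D_k with the first n_k
    potential-basis arcs, and optionally with every column whose
    reduced cost is below -eps."""
    D = set(D_prev) | set(potential[:n_k])
    if reduced_cost is not None and eps is not None:
        D |= {j for j, cj in enumerate(reduced_cost) if cj < -eps}
    return D

potential = [4, 2, 7, 0]                  # arcs sorted by flow ratio
D1 = update_index_set(set(), potential, n_k=2)              # rule (34)
D2 = update_index_set(D1, potential, n_k=3,
                      reduced_cost=[0.1, -0.5, 0.0], eps=0.2)  # rule (36)
```

The monotone growth of $D_k$ is what lets each restricted problem (35) reuse the previous iteration's work.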
Considering the OT problem, the nodes in a directed graph can be divided into the supplier set $S$ and the consumer set $C$. Each node in $S$ has arcs pointing to the nodes in $C$. The target of the OT problem is to design the amount of flow on each arc to minimize the total cost under the given constraints. The general form of the OT problem is given by:
$$\min_x \sum_{(i,j) \in S \times C} c_{ij} x_{ij} \quad \text{s.t.} \quad \sum_{j \in C} x_{ij} = s_i, \quad \sum_{i \in S} x_{ij} = d_j, \quad x_{ij} \ge 0, \ \forall (i,j) \in S \times C, \quad (37)$$
where $c_{ij}$ is the cost on the arc $(i,j)$. The variable $s_i$ represents the supply value of a given node $i \in S$, while $d_j$ is the demand value of a node $j \in C$. For any OT problem, if the amount of flow on a given arc $(i,j) \in S \times C$ is negative or infeasible, there always exists a new flow design that is feasible. The detailed method for obtaining the new feasible design is given in Ge et al. [2021] and is omitted here. After this step, the spanning tree method can be used to construct the basic feasible solution as mentioned before.

4 Initialization in Dual Simplex

Other than finding a primal feasible basis, we require a dual feasible point to initialize the dual simplex algorithm. The initialization methods in dual simplex can be classified into two types. Methods of the first type solve either the original problem or an auxiliary one with the conventional simplex method. The obtained solution is directly a feasible basic solution to the dual problem. On the other hand, methods of the second type use a modified simplex method instead. Extra operations are implemented at each iteration of the conventional simplex method.

4.1 Generate a Dual Feasible Basis with Simplex

4.1.1 The Most-Obtuse-Angle Row Rule. The most-obtuse-angle rule was first proposed by Pan in Pan [1990] and was later analyzed in Pan [1994b, 1997] for its application in achieving dual feasibility. It is a dual version of the most-obtuse-angle column rule.
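One pivoting iteration of this row rule (the detailed steps are given below) can be sketched on explicit index lists; this is a simplified illustration with hypothetical names, assuming a dense basis solve:

```python
import numpy as np

def moa_row_step(A, c, B, N):
    """Sketch of one most-obtuse-angle row-rule iteration:
    entering q = argmin_j s_j, leaving p = argmax_i (A_B^{-1} A_{.q})_i."""
    AB = A[:, B]
    y = np.linalg.solve(AB.T, c[B])
    s = c[N] - A[:, N].T @ y                 # reduced costs of non-basis
    if s.min() >= 0:
        return B, N, True                    # already dual feasible
    q = N[int(np.argmin(s))]                 # entering index
    dxB = np.linalg.solve(AB, A[:, q])       # Delta x_B = A_B^{-1} A_N e_q
    p = B[int(np.argmax(dxB))]               # leaving index
    B2 = [i for i in B if i != p] + [q]
    N2 = [j for j in N if j != q] + [p]
    return B2, N2, False

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
c = np.array([0.0, 0.0, -1.0])
B, N, done = moa_row_step(A, c, [0, 1], [2])
```

The unboundedness check ($\Delta x_B \le 0$) is omitted here for brevity; a full implementation would test it before selecting the leaving index.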
It is inspired by the same observation illustrated in Figure 3. In each iteration, one dual infeasible variable is moved from the non-basis into the basis, while the leaving variable is selected according to the angle with the down-hill edge determined by the entering variable. If the down-hill direction is close to the negative direction of the objective function, the newly constructed basis is more likely to be an optimal basis. The detailed process is shown in the following: 1) Select the entering index $q = \arg\min_{j \in N} s_j$. If $s_q \ge 0$, the basis is already dual feasible; go to step 4). 2) Compute $\Delta x_B = A_B^{-1} A_N e_q$. If $\Delta x_B \le 0$, the algorithm terminates with (primal) unboundedness. Otherwise, select the leaving index $p = \arg\max_{i \in B} \Delta x_i$. 3) Perform the pivoting $B \leftarrow B \cup \{q\} \setminus \{p\}$ and go to step 1). 4) Apply the dual simplex method to compute the optimum. Though the feasibility of other variables cannot be preserved and cycling may arise during iterations, due to the geometrical benefit of this method, its computational efficiency has been confirmed in Pan [1997], Koberstein and Suhl [2007].

4.1.2 Right-Hand Side Modifications. This method was proposed by Pan [1999] and can be regarded as the dual version of the cost modification method, thus having a similar basic idea. By modifying the right-hand side (RHS) coefficient $\bar{b} := A_B^{-1} b$, or equivalently $b$, of the initial problem, the obtained problem becomes primal feasible. After solving this modified problem and recovering the right-hand side data, a dual feasible basis can be derived. Recall $I$ defined in 3.1.7, which denotes the set of primal infeasible variables. The modified RHS takes
$$\hat{\bar{b}}_i = \begin{cases} \delta_i, & \text{if } i \in I, \\ \bar{b}_i, & \text{otherwise}, \end{cases} \quad (38)$$
where $\delta_i$ is a small positive number to avoid degeneracy.
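A minimal sketch of the RHS modification (38), assuming for illustration that the primal infeasible set $I$ is simply the set of negative components of $\bar{b}$:

```python
import numpy as np

def modify_rhs(b_bar, delta=1e-3):
    """Sketch of eq. (38): replace the primal infeasible components of
    b_bar = A_B^{-1} b by small positives delta_i (a single delta here)."""
    infeasible = b_bar < 0               # the set I (illustrative choice)
    b_hat = b_bar.copy()
    b_hat[infeasible] = delta
    return b_hat, np.flatnonzero(infeasible)

b_bar = np.array([2.0, -1.0, 0.5, -3.0])
b_hat, I = modify_rhs(b_bar)
```

After this substitution every basic variable takes a non-negative value, which is exactly why the modified problem becomes primal feasible.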
Since we have $x_{i \in I} = \delta_i \ge 0$ for the modified problem, the basis now becomes primal feasible. After solving the modified problem, i.e., when $s_N \ge 0$, the original problem can be recovered by restoring the value of $b$. The recovered problem is clearly dual feasible.

4.2 Generate a Dual Feasible Basis with Modified Simplex

4.2.1 Dual Infeasibility-Sum Method. Though the main idea of the dual infeasibility-sum method is not new [Maros, 1986], its efficiency was re-analyzed in Maros [2003], Koberstein and Suhl [2007]. Starting with a dual infeasible basis, this method aims to generate a feasible one by minimizing the infeasibility-sum. Different from the most-obtuse-angle row rule, this method can guarantee a monotonic decrease in the infeasibility-sum during iterations. Recall the dual basic solution in (4). The basis is said to be infeasible if there exists $s_{j \in N} < 0$. The infeasibility-sum method intends to maximize the sum over such variables, i.e.,
$$\sum_{j \in J} s_j = \sum_{j \in J} c_j - \Big( \sum_{j \in J} A_{\bullet j} \Big)^T y, \quad (39)$$
where the objective function is called the infeasibility-sum and $J$ is the set of dual infeasible variables defined in 3.1.3. Therefore, we can construct an auxiliary dual problem based on (4) as follows:
$$\max_{y, s_B, s_N} \; -\Big( \sum_{j \in J} A_{\bullet j} \Big)^T y \quad \text{s.t.} \quad A_B^T y + s_B = c_B, \quad A_N^T y + s_N = c_N, \quad s_B, s_{N/J} \ge 0. \quad (40)$$
Note that (40) is an LP problem as well, and its basis is exactly the same as that of the original dual problem (D). Nevertheless, as the constraint $s_N \ge 0$ becomes $s_{N/J} \ge 0$, the basis is dual feasible. Therefore, the dual simplex method can be applied to generate a dual feasible basis of the original problem. One thing to mention is that, as the constraints are slightly different here, the selection of the entering index is slightly modified as well.
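The auxiliary objective of (39)-(40) can be assembled directly from the dual infeasible set $J$; a sketch with hypothetical names:

```python
import numpy as np

def infeasibility_sum_objective(A, s_N, N):
    """Sketch of eq. (39)-(40): collect the dual infeasible set J
    (s_j < 0) and build the auxiliary objective vector -sum_{j in J} A_.j."""
    J = [j for j, sj in zip(N, s_N) if sj < 0]
    return J, -A[:, J].sum(axis=1)

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
N = [1, 2]                               # non-basic indices
s_N = np.array([-0.5, 0.3])              # reduced costs; column 1 is infeasible
J, obj = infeasibility_sum_objective(A, s_N, N)
```

As the dual simplex iterations shrink $J$, this objective vector is rebuilt, until $J$ is empty and the basis is dual feasible.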
Theorem 4.1 Let $\{\bar{x}_B, \bar{x}_N\}$ be the primal basic solution of the auxiliary problem, i.e., $\bar{x}_B = -A_B^{-1} \sum_{j \in J} A_{\bullet j}$ and $\bar{x}_N = 0$. The original problem is (primal) unbounded if $\bar{x}_B \ge 0$.

With the above theorem, the dual infeasibility-sum method proceeds in the following steps: 1) If $s_N \ge 0$, the basis is dual feasible; go to step 3). 2) Form an auxiliary problem with respect to the current basis and compute $\bar{x}_B = -A_B^{-1} \sum_{j \in J} A_{\bullet j}$. If $\bar{x}_B \ge 0$, stop with primal unboundedness. Otherwise, apply one iteration of the modified dual simplex method and go to step 1). 3) Apply the dual simplex method to compute the optimum of the original problem.

4.2.2 Artificial Bounds. The artificial bounds method is a dual version of the big-M method. Simply put, this method adds extra upper bounds to the non-basic variables so that the dual problem has a straightforward feasible basis. Consider the following problem, where there exist newly added artificial bounds for the non-basic variables compared with (P):
$$\min_{x_B, x_N} c_B^T x_B + c_N^T x_N \quad \text{s.t.} \quad A_B x_B + A_N x_N = b, \quad x_B, x_N \ge 0, \quad x_N \le M, \quad (41)$$
where $M \gg 0$ is a large number. Since the artificial bounds can be rewritten as a constraint, i.e., $x_N + x_a = M$ with $x_a \ge 0$, where $x_a \in \mathbb{R}^{n-m}$ is the introduced artificial variable, it can be easily verified that the associated dual problem of (41) is:
$$\max_{y, y_a, s_B, s_N} b^T y + M \mathbf{1}_{n-m}^T y_a \quad \text{s.t.} \quad A_N^T y + y_a + s_N = c_N, \quad A_B^T y + s_B = c_B, \quad s_B, s_N \ge 0, \quad y_a \le 0, \quad (42)$$
where $y_a \in \mathbb{R}^{n-m}$ is an artificial variable, serving as the counterpart of $x_a$. Recall the basic solution of the original dual problem, where $s_N = c_N - A_N^T y$. We note that such a solution is infeasible if there exists $c_j - A_{\bullet j}^T y < 0$ for any $j \in N$. Replacing such basic variables with $y_a$ in (42), we can directly obtain a feasible basis of (D).
5 Conclusion and Future Work

In this survey, we investigate and summarize existing works related to simplex initialization from the aspects of the primal simplex and the dual simplex, respectively. A comparison of the discussed methods for generating the initial basis or starting point is summarized in Table 1. We also discussed some methods to accelerate the initialization under some specific conditions.

Table 1: Comparison of initialization methods in simplex.

Methods | Sparsity | Triangularity | Creation Time | Feasibility | Optimality
Two Phase | × | × | long | √ | ×
Big-M | × | × | long | √ | ×
Nonfeasible Basis Method [Nabli, 2009] | × | × | long | √ | ×
Algebraic Initialization [Nabli and Chahdoura, 2015] | × | √ | long | × | ×
Modifications: Cost [Wunderling, 1996b; Pan, 2000]; Right-Hand Side [Pan, 1999] | × | × | long | √ | ×
Most-Obtuse-Angle: Column Rule [Pan, 1994a]; Row Rule [Pan, 1990, 1994b, 1997] | × | × | long | √ | √
Infeasibility-Sum: Primal [Pan, 2014]; Dual [Maros, 2003; Koberstein and Suhl, 2007] | × | × | long | √ | ×
Logical Basis [Chvatal et al., 1983] | √ | √ | short | × | ×
Crash Basis [Maros and Mitra, 1998] | × | √ | short | LTSF | ×
CPLEX Basis [Bixby, 1992] | √ | × | short | √ | ×
Tearing Basis [Gould and Reid, 1989] | × | × | long | √ | ×
Cosine Criterion [Junior and Lins, 2005; Hu, 2007] | × | × | long | × | √
Triangular and Fill-Reducing Basis [Ploskas et al., 2020] | √ | √ | short | × | ×
Idiot Crash Algorithm [Galabova and Hall, 2020] | × | × | short | × | √
ε-Optimality Search Direction Algorithm [Luh and Tsaih, 2002] | × | × | long | √ | √
Hybrid-LP [Al-Najjar and Malakooti, 2011] | × | × | long | √ | √

For future work, there are mainly two directions.
One direction is based on the conventional initialization methods introduced in this survey, which includes: investigating the efficiency of the existing algorithms for dealing with large-scale LPs; providing generalized algorithms; discovering implementation techniques to further improve the efficiency of existing algorithms; and formulating the dual versions of the algorithms provided in the primal part and investigating their performance. The other direction is based on advanced learning technologies. As machine learning techniques gain more and more research and development, many fields are exploring how to combine existing methods with advanced machine learning techniques to obtain improved methods. However, there is almost no research in the field of simplex initialization that utilizes learning-based methods to improve the efficiency of solving LPs. Therefore, it is important to investigate and fill the gap in this field. In the following part, we propose several potential directions for utilizing learning-based methods to improve the simplex initialization.

5.1 Learning-Based Initial Basis Construction

In the last two sections, we have discussed many approaches to construct the initial basis or point from different perspectives. For example, the logical basis is one of the easiest ways to obtain a feasible initial basis, since it directly uses all logical variables to construct a basis. Different from the logical basis, the cosine criterion method obtains the initial basis based on the constraints with the minimum angle to the objective function. Other methods, such as the triangular and fill-reducing basis, focus on constructing a triangular and sparse basis. Choosing the appropriate initialization method for each LP based on its features is one possible way to improve the overall efficiency.
With advanced machine learning technologies, we can design a classifier for automatically selecting the appropriate simplex initialization method to construct the initial basis. In addition, we can design a classifier for directly determining whether a variable should be included in the basis.

5.1.1 Feature Design. For different LPs, we first need to design some features to distinguish them. There are two types of features, namely, self-designed features and graph embedding features.

Self-Designed Features: Considering the LP standard form given in (P), the self-designed features can be further divided into problem-dependent and problem-independent features. The dimension of the problem-dependent features changes when given different LPs, while the dimension of the problem-independent features does not change. The details of the features can be designed as follows. 1) Problem-dependent features: the matrix $A$, the vectors $c$, $b$, and the dimensions $m$, $n$. 2) Problem-independent features: the sparsity of the matrix $A$, quantified by the percentage of zeros in $A$; the sparsity of the vector $b$, defined by the percentage of zeros in $b$; and the triangularity of the matrix $A$, designed as:
$$\frac{|z_U - z_L|}{\max(z_U, z_L)}, \quad (43)$$
where $z_U$ and $z_L$ are the percentages of non-zero elements in the upper part and the lower part of the matrix $A$, respectively.

Graph Embedding Features: Recent advancements in graph embedding have drawn the attention of researchers in various fields. Graph embedding provides a new approach to obtain low-dimensional features of nodes in a graph, which is significant for reducing computational complexity. Intuitively, the main idea of graph embedding is to find a mapping from a node in a graph to a low-dimensional feature representation. This method has been applied in diverse areas such as social sciences, linguistics, biology, etc.
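The problem-independent features above can be computed directly; taking $z_U$ and $z_L$ as the non-zero fractions of the strict upper and lower triangular parts is one possible reading of (43):

```python
import numpy as np

def lp_features(A, b):
    """Sketch of the problem-independent features: sparsity of A and b,
    and the triangularity measure |zU - zL| / max(zU, zL) of eq. (43)."""
    sparsity_A = float(np.mean(A == 0))
    sparsity_b = float(np.mean(b == 0))
    upper = np.triu(A, k=1)              # strict upper part
    lower = np.tril(A, k=-1)             # strict lower part
    zU = np.count_nonzero(upper) / upper.size
    zL = np.count_nonzero(lower) / lower.size
    tri = 0.0 if max(zU, zL) == 0 else abs(zU - zL) / max(zU, zL)
    return sparsity_A, sparsity_b, tri

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [0.0, 0.0, 1.0]])          # fully upper triangular
b = np.array([1.0, 0.0, 2.0])
sA, sb, tri = lp_features(A, b)
```

An upper-triangular matrix yields a triangularity of 1, and a matrix with balanced upper/lower fill yields a value near 0.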
Recently, graph embedding has been used to improve the performance of the branch-and-bound algorithm for solving mixed integer programming (MIP) [Nair et al., 2020]. Motivated by Selsam et al. [2018], Gasse et al. [2019], Ding et al. [2020] and Nair et al. [2020], we can also represent an LP as a bipartite graph. Considering the standard form (P) of LP, the following bipartite graph (Figure 4) can be constructed. In the graph, one partition has $n$ (variable) nodes, which represent the $n$ variables to be optimized, and the other has $m$ (constraint) nodes, which represent the $m$ constraints in the standard form of LP. If a variable appears in a constraint, there is an edge between the corresponding variable node and constraint node, and the edge is weighted by the corresponding entry of the matrix $A$. The objective coefficients $\{c_1, \ldots, c_n\}$, the right-hand sides of the constraints $\{b_1, \ldots, b_m\}$, and the non-zero entries of the matrix $A$ can be utilized as scalar "features" of the variable nodes, the constraint nodes, and the edges, respectively. The bipartite graph representation of LP enables feature extraction through graph embedding. Generally, graph embedding methods can be divided into two types: homogeneous graph embedding (e.g., DeepWalk [Perozzi et al., 2014], LINE [Tang et al., 2015], Node2vec [Grover and Leskovec, 2016], and VGAE [Kipf and Welling, 2016a]) and heterogeneous graph embedding (e.g., Metapath2vec [Dong et al., 2017] and DMGI [Park et al., 2020]). Since the nodes in the constructed graph have two different types (variable nodes and constraint nodes), the graph should be considered a heterogeneous graph. Therefore, all the existing heterogeneous graph embedding methods are possible candidates for learning the low-dimensional features of the nodes. However, most methods are not specific to bipartite graphs, and thus may not capture their unique structural characteristics.
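The bipartite representation described above can be sketched with plain dictionaries (no graph library assumed; node names are hypothetical):

```python
def lp_to_bipartite(A, b, c):
    """Sketch of the bipartite-graph representation of an LP in standard
    form: variable nodes carry c_j, constraint nodes carry b_i, and an
    edge weighted A[i][j] links them whenever A[i][j] != 0."""
    var_nodes = {f"x{j}": cj for j, cj in enumerate(c)}
    con_nodes = {f"r{i}": bi for i, bi in enumerate(b)}
    edges = {(f"r{i}", f"x{j}"): A[i][j]
             for i in range(len(A)) for j in range(len(A[0]))
             if A[i][j] != 0}
    return var_nodes, con_nodes, edges

A = [[1, 0, 2],
     [0, 3, 1]]
b = [4, 5]
c = [1, 1, 0]
V, C, E = lp_to_bipartite(A, b, c)
```

The resulting node and edge feature dictionaries are exactly the inputs a heterogeneous or bipartite graph embedding method would consume.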
Some methods (e.g., IGE [Zhang et al., 2017], PinSage [Ying et al., 2018], BiNE [Gao et al., 2018], FOBE [Sybrandt and Safro, 2019], BiGI [Cao et al., 2021]) are specially designed for bipartite graphs, and they may be better choices for our problem. One subfield of graph embedding is deep learning (DL) based graph embedding [Cai et al., 2018, Goyal and Ferrara, 2018]. Based on whether random walk is adopted to sample paths from a graph, DL based methods can be divided into two categories. One is DL based graph embedding with random walk, including DeepWalk [Perozzi et al., 2014], Node2vec [Grover and Leskovec, 2016], etc. The other is DL based graph embedding without random walk, including SDNE [Wang et al., 2016], DNGR [Cao et al., 2016], graph convolutional networks (GCN) [Kipf and Welling, 2016b], VGAE [Kipf and Welling, 2016a], etc. Compared with factorization-based embedding (e.g., graph factorization (GF) [Ahmed et al., 2013], GraRep [Cao et al., 2015], and HOPE [Ou et al., 2016]), DL based graph embedding has wider application due to its robustness and effectiveness. Since the corresponding nodes of variables appearing in the same constraint are two-hop neighbors to each other in the constructed bipartite graph, the second (or higher) order proximity is expected to be preserved. Therefore, DL based methods will be preferred for our problem due to their good performance in preserving higher-order properties of graphs.

Figure 4: Bipartite graph representation of LP (variable nodes and constraint nodes)

5.1.2 Initial Basis Construction with Deep Learning Based Classification. The main purpose of this subsection is to provide a classification mechanism based on a deep neural network, which can divide variables into basic variables and non-basic variables. The input of the neural network is the feature of each variable node, which can be obtained through graph embedding.
The output is the probability that the corresponding variable should be selected as a basic variable. The architecture is given in Figure 5.

Figure 5: Classification of initial basic variable selection

To train such a neural network, enough training data pairs are required. Each training data pair consists of the feature of a variable node and the label indicating whether its corresponding variable is a basic variable, i.e., the label is "1" if it is a basic variable and "0" otherwise. As we have mentioned before, the features can be obtained by graph embedding. However, how to obtain the labels of the variable nodes can be a tricky problem. One approach to obtain the labels is to exactly solve the LP problem; the type (basic/non-basic) of each variable at optimality can then serve as its label. Obviously, this approach is very costly. Another approach is to obtain the labels from the initialization methods introduced before, but the obtained labels can be inappropriate since some of these methods cannot guarantee the feasibility or the nonsingularity of the derived basis. If enough training data have been obtained, the features and the labels can serve as the input and the output of the deep neural network, respectively. The architecture of the deep neural network, including the number of layers, the activation function, the loss function, etc., should be further designed. The trained neural network can be used to select basic variables for LPs. The main steps are as follows: first, construct the graph representation for the LP to be solved; second, learn the low-dimensional features of the variable nodes via graph embedding; third, input the derived features into the neural network to obtain probability outputs, then sort all variables in descending order of their corresponding outputs and select the first $m$ variables as basic variables.
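The final selection step, ranking variables by the network output and keeping the first $m$, is just a sort; a sketch with hypothetical names:

```python
def pick_basis(probs, m):
    """Sketch of the selection step: sort variables in descending order
    of the network's basic-variable probability and keep the first m."""
    order = sorted(range(len(probs)), key=lambda j: probs[j], reverse=True)
    return sorted(order[:m])             # return indices in natural order

probs = [0.1, 0.9, 0.4, 0.8]             # hypothetical network outputs
basis = pick_basis(probs, m=2)
```

Note that the $m$ columns chosen this way are not guaranteed to form a nonsingular or feasible basis, which is one of the open issues mentioned above.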
5.1.3 Initialization Method Selection with Deep Learning Based Classification. The main idea of the learning-based classification for the initialization method selection is illustrated in Figure 6.

Figure 6: Classification of initialization method selection

The input of the classifier is the designed features of different LPs. These features can be the self-designed features, the graph embedding features, or a combination of different features, depending on the classification performance. The output of the classifier is a probability distribution over the different simplex initialization methods for constructing the initial basis. Given an LP, the method with the highest probability will be selected as the optimal initialization method. To train the classifier, we first collect the total computation time of the different candidate methods and label the method with the shortest time as "1"; the others are labelled as "0". These labels are then used as the training dataset to train the classifier.

5.2 Learning-Based Starting Point Construction

In the Simplex Initialization section, we have covered some methods for finding an improved starting point for the simplex method. These methods design different interior directions for the iteration to obtain the improved point. However, the trade-off between the iteration steps and the total computation time has not been well studied. In this subsection, we propose a reinforcement learning (RL) based method for investigating the trade-off between the iteration steps and the total computation time in methods for finding the improved starting point.

5.2.1 State Design. In RL, an action is selected based on the current state. After the action selection, a reward and the next state are returned.
This process is then repeated in the following steps. In our problem, the action selection is equivalent to deciding whether the iteration will continue or stop. The state includes two parts. One is the feature of the given LP. The feature can be the self-designed one or the graph embedding one. The other part is the state related to the current improved point. For example, in the $\epsilon$-optimality search direction method, the iteration direction is designed based on the active constraints of the current improved point. Therefore, the active constraints can be included in the state. The search direction can also be considered part of the state. Based on the designed state, the action selection is learned by RL algorithms.

5.2.2 Starting Point Construction with RL. The general process of the learning-based starting point construction is given in Figure 7. At each step, based on the state and the selected action, a reward is returned. The main goal is to maximize the cumulative reward. The reward can be designed based on the computation time of each iteration.

Figure 7: Learning-Based Starting Point Construction

According to the dimensionality of the designed state, different RL methods can be chosen. When the state dimension is small, we can choose a basic learning-based method such as the Q-learning algorithm. When the state space is large or continuous, we can choose advanced algorithms such as the deep Q-network algorithm or the double deep Q-network algorithm.

5.3 Learning Heuristics in Simplex Initialization

Most of the methods mentioned before include some heuristic rules or heuristic parameters. In these methods, the selection of these rules and parameters is almost always done manually. To improve the efficiency of different simplex initialization methods, we can try to improve the heuristic designs from the two perspectives as follows.
1) Select the optimal heuristic rules among candidate rules: When there exist several different heuristic rules, we can construct a classi\ufb01cation method to choose the better one. Speci\ufb01cally, we can label these candidate rules based on their computation time. 2) Generate the heuristic parameters automatically: When there are some LPs with appropriate heuristic parameters, we can formulate a regression problem to obtain a mapping from the problem features to the heuristic parameters. Then when a new LP is given, we can generate the suitable heuristic parameter automatically based on the regression model." + } + ], + "Yuxing Zhong": [ + { + "url": "http://arxiv.org/abs/2312.09649v2", + "title": "Revisiting the Dragonfly Galaxy II. Young, radiatively efficient radio-loud AGN drives massive molecular outflow in a starburst merger at z=1.92", + "abstract": "Radio-loud active galactic nuclei (RLAGNs) are a unique AGN population and\nwere thought to be preferentially associated with supermassive black holes\n(SMBHs) at low accretion rates. They could impact the host galaxy evolution by\nexpelling cold gas through the jet-mode feedback. In this work, we studied\nCO(6-5) line emission in a high-redshift radio galaxy, MRC 0152-209, at z=1.92\nusing ALMA up to a $0.024''$-resolution (corresponding to ~200 pc). This system\nis a starburst major merger constituted of two galaxies: the northwest (NW) one\nhosting the RLAGN with jet kinetic power $L_{\\rm jet}\\gtrsim2\\times10^{46}$\nerg/s and the southeast (SE) one. Based on the SED fitting for the entire\nsystem (NW+SE galaxies), we found AGN bolometric luminosity $L_{\\rm\nAGN,bol}\\sim(0.9-3)\\times10^{46}$ erg/s for the RLAGN. We estimated BH mass\nthrough $M_{\\rm BH}-M_\\star$ scaling relations and found an Eddington ratio of\n$\\sim0.7-4$ conservatively. These results suggest that the RLAGN is radiatively\nefficient and the powerful jets could be launched from a super-Eddington\naccretion disc. 
ALMA reveals a massive ($M_{\\rm H_2}\\sim2\\times10^9$ Msun),\ncompact ($\\sim500$ pc), and lopsided molecular outflow perpendicular to the jet\naxis. The mass outflow rate (~1200-2600 Msun/yr) is comparable with the star\nformation rate of ~2000-3000 Msun/yr. The outflow kinetic power/$L_{\\rm\nAGN,bol}$ ratio of ~0.008-0.02 and momentum boost factor ~3-24 agree with the\nradiative-mode AGN feedback. On the other hand, the jets can also drive the\nmolecular outflow within its lifetime of $\\sim2\\times10^5$ yr without\nadditional energy supply from AGN radiation. The jets then could remove all\ncold gas from the host galaxy through long-term, episodic launching. Our study\nreveals a unique object where starburst, powerful jets, and rapid BH growth\nco-exist, which may represent a fundamental stage of AGN-host galaxy\nco-evolution.", + "authors": "Yuxing Zhong, Akio K. Inoue, Yuma Sugahara, Kana Morokuma-Matsui, Shinya Komugi, Hiroyuki Kaneko, Yoshinobu Fudamoto", + "published": "2023-12-15", + "updated": "2024-03-18", + "primary_cat": "astro-ph.GA", + "cats": [ + "astro-ph.GA" + ], + "main_content": "INTRODUCTION RLAGNs are rare amongst all AGN populations (15\u221220%; Kellermann et al. 1989; Williams et al. 2018). At higher redshifts (1 < z < 2), there is an increasing space density of RLAGNs hosted by radiatively efficient (Eddington ratio \u03bbEdd \u2273 3 \u00d7 10\u22122; Merloni & Heinz 2008) SMBHs linked to high-excitation radio galaxies (see Hardcastle & Croston 2020 and references therein). These high-z galaxies hosting RLAGNs are often referred to as high-redshift radio galaxies (HzRGs) with rest-frame radio power L500 MHz > 10^27.5 W Hz\u22121 (Miley & De Breuck 2008).
The typical infrared (IR) luminosity of HzRGs is found to exceed 10^12 L\u2299, in agreement with the classifications of ultra-luminous infrared galaxies (ULIRG; LIR \u226510^12 L\u2299), while some are found to be even brighter than 10^13 L\u2299, entering the hyper-LIRG (HyLIRG; LIR \u226510^13 L\u2299) regime (Drouart et al. 2014). U/HyLIRG populations are often massive, starburst systems and are thought to be triggered by galaxy mergers (Sanders & Mirabel 1996). The fact that HzRGs are also U/HyLIRGs suggests that powerful RLAGNs are often associated with major mergers, as evidenced by accumulating observations (e.g., Ramos Almeida et al. 2012; Chiaberge et al. 2015; Pierce et al. 2022, 2023). An investigation into the cold molecular gas \u2013 the raw fuel that feeds both the growth of SMBHs and star formation \u2013 provides insights to enhance our understanding of galaxy mergers, starburst galaxies (SBGs), and AGN/SMBHs as the basis for co-evolution in the high-z universe. Recent studies of HzRGs find that the most powerful RLAGNs are likely to be associated with AGN with bolometric luminosity (LAGN,bol) comparable to high-z QSOs as the SMBHs are in a fast growth phase (e.g., Nesvadba et al. 2017; Ichikawa et al. 2021). For QSOs, the accreting gas around the SMBHs usually forms an optically thick, geometrically thin accretion disc (Shakura & Sunyaev 1973; Narayan et al. 1998). A question then arises: a standard thin disc (0.01 \u2272 \u03bbEdd \u2272 1) is widely considered jet-phobic, possibly because the large-scale magnetic field that powers the radio jets is diffused out of the accretion disc faster than being dragged inward (Lubow et al. 1994; Guilet & Ogilvie 2012) (however, recent simulations show that thin discs can sustain large-scale poloidal magnetic fields around the BH; see, e.g., Rothstein & Lovelace 2008; Liska et al. 2019).
One possible explanation for the origin of the powerful jets is that these most powerful HzRGs (jet kinetic power Ljet \u227310^47 erg s\u22121 and LAGN,bol \u227310^46 erg s\u22121) may in general host over-massive SMBHs (10^\u22122 \u2272 MBH/M\u22c6 \u2272 2 \u00d7 10^\u22121; Ichikawa et al. 2021) that lie far above the BH mass (MBH) \u2013 stellar mass (M\u22c6) scaling relations. The MBH estimated from the scaling relations is then underestimated, and thus the Eddington ratio corresponding to a standard thin disc is overestimated. These SMBHs are actually surrounded by optically thin, geometrically thick sub-Eddington accretion discs (Ichikawa et al. 2021). However, using rest-frame optical emission lines (H\u03b1, H\u03b2, Mg II, and C IV), Poitevineau et al. (2023) estimated the BH masses of a sample of RLAGNs at 0.3 < z < 4 with Ljet < 10^47 erg s\u22121. They found good agreement between the MBH\u2013M\u22c6 scaling relations of RLAGNs and those of other AGN populations across different redshifts. Therefore, these HzRGs seem to follow the scaling relations statistically. This raises another possibility: many HzRGs host SMBHs accreting above the Eddington limit, and thus the accretion discs are both optically and geometrically thick, capable of powering the jets (Tchekhovskoy 2015). AGN feedback has been an important topic in the study of AGN-host galaxy co-evolution because it may lead to an outflow (compression) of the gas to suppress (enhance) the host galaxy star formation (e.g., Cano-D\u00edaz et al. 2012; Cicone et al. 2014; Shin et al. 2019; Duncan et al. 2023). Outflows in ionized and/or molecular forms are often observed in galaxies hosting AGNs with LAGN,bol \u227310^44 erg s\u22121 from local to high redshifts (e.g., Fiore et al. 2017; Fluetsch et al. 2019; Bischetti et al. 2019), as well as those with lower AGN luminosities (Jarvis et al. 2021).
These outflows are often ascribed to the radiative-mode AGN feedback, whether through radiation pressure on dust or through shocks generated from inner AGN winds (Costa et al. 2014; Ishibashi et al. 2018). This radiative-mode AGN feedback has the potential to remove a significant fraction of molecular gas from the AGN host galaxy (Zubovas & King 2012). Without molecular gas as the fuel to support ongoing star-forming activities, the SFR can decrease and these galaxies will finally become quiescent. In addition to the radiative mode, the jet- (kinetic-)mode AGN feedback, through the couplings between jets and the host galaxy interstellar medium (ISM), has also been found to be efficient in accelerating gas with colossal energy injections to power the outflow (Wagner et al. 2012; Mukherjee et al. 2018; Meenakshi et al. 2022b). Both simulations and observations find that the jet-mode feedback can lead to the quenching of host galaxies (Nesvadba et al. 2006; Hu\u0161ko et al. 2024). Because of the simultaneously high Ljet and LAGN,bol in powerful HzRGs, jet- and radiative-mode feedback co-exist in the host galaxies (Hu\u0161ko et al. 2024). This makes the outflow mechanism mysterious: which is the main driver of the outflow? Additionally, do outflows in these HzRGs show distinctly different molecular outflow properties when compared to QSOs and low-z AGNs? These questions remain open and require accumulating observational studies. RLAGNs at an early evolution phase, at a time when radio jets remain powerful and efficiently inject their energy into the ISM (Meenakshi et al. 2022b), are good targets to investigate the jet-mode feedback. Moreover, by investigating the morphology of RLAGN host galaxies at z > 1 using HST WFPC3 images, Chiaberge et al. (2015) found that 92% of these RL galaxies show recent or ongoing merging events. This finding is suggestive of possible merger-triggered AGN activities.
Therefore, young RLAGNs at high-z serve as ideal proxies to scrutinize the co-evolution of AGN and its host galaxy through processes like galaxy interaction/merging and AGN feedback, since the stochastic gas inflows accompanying these events may both fuel star formation and trigger AGN activities (Hardcastle & Croston 2020; Stemo et al. 2021). In this work, we study a HzRG \u2013 MRC 0152-209 (L147 MHz = 3.2(\u00b11.0) \u00d7 10^28 W Hz\u22121), named the Dragonfly galaxy, which has starbursts (SFR \u223c3000 M\u2299 yr\u22121; Drouart et al. 2014). It is a HyLIRG (LIR \u223c2 \u00d7 10^13 L\u2299) and a major merger comprising three components: the North-West (NW) galaxy, the South-East (SE) galaxy, and a possible companion galaxy. Its double radio hotspots were identified by the Very Large Array (VLA) at 4.7 and 8.2 GHz (Pentericci et al. 2000). Zhong et al. (2023) (referred to as Paper I) further investigated the radio hotspots combining high-resolution and high-frequency VLA and Atacama Large Millimeter/submillimeter Array (ALMA) observations. They found that the Dragonfly galaxy can be classified as a Compact-Steep-Spectrum source. The radio hotspots have an age of \u223c2 \u00d7 10^5 yr, in line with the typical ages of young RLAGNs. We further present high-angular resolution ALMA observations (0.08\u2032\u2032 for Cycle 4 and 0.02\u2032\u2032 for Cycle 6) of CO(6-5) line emission from the Dragonfly galaxy to investigate the molecular gas within sub-kpc regions. Our new study provides a unique view of the molecular gas distribution among this merging system, revealing details of the AGN-driven outflow. Throughout this paper, we assume a \u039bCDM cosmology with \u2126m = 0.309, \u2126\u039b = 0.691, and H0 = 67.7 km s\u22121 Mpc\u22121 (Planck Collaboration et al. 2016).
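The adopted cosmology fixes all distances quoted in the paper; a minimal flat-\u039bCDM sketch of the standard comoving-distance integral (pure Python; the authors' own numbers come from the Wright 2006 calculator, so small rounding differences are expected) illustrates the calculation:

```python
import math

# Flat Lambda-CDM cosmology adopted in the paper (Planck Collaboration et al. 2016):
# Omega_m = 0.309, Omega_Lambda = 0.691, H0 = 67.7 km/s/Mpc.
C_KMS = 299792.458   # speed of light [km/s]
H0, OM, OL = 67.7, 0.309, 0.691
Z = 1.92             # redshift of the Dragonfly galaxy

def efunc(z):
    """Dimensionless Hubble parameter E(z) for a flat Lambda-CDM universe."""
    return math.sqrt(OM * (1.0 + z) ** 3 + OL)

def comoving_distance(z, n=10000):
    """D_C = (c/H0) * integral_0^z dz'/E(z') [Mpc], composite Simpson's rule (n even)."""
    h = z / n
    s = 1.0 / efunc(0.0) + 1.0 / efunc(z)
    for i in range(1, n):
        s += (4 if i % 2 else 2) / efunc(i * h)
    return (C_KMS / H0) * s * h / 3.0

d_c = comoving_distance(Z)
d_l = (1.0 + Z) * d_c          # luminosity distance [Mpc]
d_a = d_c / (1.0 + Z)          # angular-diameter distance [Mpc]
kpc_per_arcsec = d_a * 1e3 * math.pi / (180.0 * 3600.0)

print(f"D_L ~ {d_l:.0f} Mpc, scale ~ {kpc_per_arcsec:.2f} kpc/arcsec")
```

This reproduces the quoted \u223c15200 Mpc luminosity distance and the 8.62 kpc per arcsecond projected scale to within rounding.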
Based on these assumptions, the luminosity distance of the Dragonfly galaxy is \u223c15200 Mpc, and 1\u2032\u2032 corresponds to a projected physical scale of 8.62 kpc (the calculation has made use of Wright 2006). 2 OBSERVATIONS 2.1 ALMA Band 6 ALMA Cycle 4 observations (Project ID: 2016.1.01417.S, PI: Bjorn Emonts) were conducted on 9 and 17 August 2017 for 1.2 hours on-source time with 45 antennas and baselines of the 12-m array up to \u223c3.6 km. ALMA Cycle 6 observations (Project ID: 2018.1.00293.S, PI: Bjorn Emonts) were conducted on 23 June 2019 for 2.2 hours on-source time with 48 antennas and baselines of the 12-m array up to 11.5 km. For both observations, there are four spectral windows configured to cover two 3.75 GHz bands, one of which includes 235.83\u2212239.58 GHz to observe the redshifted CO(6-5) line emission (\u03bdrest = 691.47 GHz) and another includes 251.20\u2212254.95 GHz such that only continuum is observed. The redshifted frequency is estimated based on z = 1.92 determined by the observation of CO(1-0) line emission (Emonts et al. 2011). The data calibrations were performed via the ALMA pipeline built within CASA (Common Astronomy Software Applications; McMullin et al. 2007; CASA Team et al. 2022), version 4.7.2 for Cycle 4 and 5.4.0 for Cycle 6, respectively, by running the calibration scripts supplied with the data by the North American ALMA Science Center (NAASC).
Table 1. Summary of the Observations
Observation | Date | Frequency | Band | Beam size | Position Angle | \u03c3
ALMA Cycle 4 continuum | 9 and 17 August 2017 | 237 GHz | Band 6 | 0.11\u2032\u2032 \u00d7 0.08\u2032\u2032 | 80\u25e6 | 11 \u00b5Jy/beam
ALMA Cycle 4 CO(6-5) | 9 and 17 August 2017 | 237 GHz | Band 6 | 0.12\u2032\u2032 \u00d7 0.08\u2032\u2032 | 78\u25e6 | 183 \u00b5Jy/beam \u00b7 km/s
ALMA Cycle 6 continuum | 23 June 2019 | 237 GHz | Band 6 | 0.026\u2032\u2032 \u00d7 0.023\u2032\u2032 | 24\u25e6 | 6.5 \u00b5Jy/beam
ALMA Cycle 6 CO(6-5) | 23 June 2019 | 237 GHz | Band 6 | 0.027\u2032\u2032 \u00d7 0.024\u2032\u2032 | 24\u25e6 | 102 \u00b5Jy/beam \u00b7 km/s
VLA BnA-configuration | 29 May 2015 | 44 GHz | Band Q | 0.14\u2032\u2032 \u00d7 0.08\u2032\u2032 | 80\u25e6 | 18 \u00b5Jy/beam
Prior to imaging the CO(6-5) line emission, we subtracted the continuum emission in the uv-plane. To do so, we first flagged the channels that include the real line emission, as well as those that include pseudo-line emissions because of the strong atmospheric absorption between 237.15 GHz and 239.1 GHz (private communication with Hiroshi Nagai through ALMA helpdesk, Jul 27, 2022). We then estimated the continuum emission by fitting a linear function to the line-free channels and subtracted it in the uv-plane using the task uvcontsub. We adopted the same methodology for Cycle 4 and 6 observations to image the line and continuum emissions in this work. First, we created a dirty image without any cleaning to calculate the root-mean-square noise (\u03c3) under the \u2018briggs\u2019 weighting with a robustness parameter +0.5. Then, we cleaned the image non-interactively by setting 3\u03c3 as the stop threshold of the cleaning. We used the \u2018h\u00f6gbom\u2019 deconvolution algorithm to produce the restored image and applied a primary beam correction on the restored image. For the CO(6-5) line emission, the channel width was set to be 20 km s\u22121 for both Cycle 4 and 6 observations, and the reference frequency was set as 236.69 GHz, corresponding to the line at z = 1.9214.
We further imaged the CO(6-5) line emission using concatenated Cycle 4 and 6 (C46 hereafter) data. A robustness parameter of +2.0 was chosen for the \u2018briggs\u2019 weighting and a spectral resolution of 15 km s\u22121 was adopted. We chose this natural weighting to balance the beam size and noise level. The basic information and properties of the cleaned images of line and continuum emissions are summarized in Table 1. The line profiles and their associated physical properties based on the Cycle 4 and 6 combined data are not presented in this paper due to an abnormal elevation in flux densities. This was considered a software issue of CASA, which has been reported to the CASA development team (private communication with Hiroshi Nagai through ALMA helpdesk, Mar 25, 2022). This abnormal elevation only influences the combined dataset and has no impact on Cycle 4 and 6 observations individually. 2.2 VLA Band Q The VLA observations reported in this work were conducted in BnA-configuration centered at 44 GHz with an effective bandwidth of 7.5 GHz. The total on-source time is 42 min. The observations were calibrated by requesting pipeline calibrations through the NRAO Science Helpdesk. We imaged the radio continuum emission using the \u2018h\u00f6gbom\u2019 deconvolution algorithm. A \u2018briggs\u2019 weighting scheme with a robustness parameter +0.5 was chosen to multiply the visibility value during the gridding of the dirty image. To create a clean image, we put a mask on the strongest signal and manually iterated until the strongest signal reached the noise level in order to circumvent artefacts as a result of overcleaning. Due to the low signal-to-noise ratio of each spectral window, self-calibration could not be applied to any dataset, leaving low-level sidelobe contamination on the clean images. 3 RESULTS 3.1 Morphology In panels (a) and (c) of Fig.
1, we show the images of the dust continuum, and in panels (b) and (d), we show the integrated intensity (mom0) map of CO(6-5) line emission integrated over \u2212500 to +250 km s\u22121, for both Cycle 4 and 6 observations. All line and continuum images and contours are overlaid directly based on the world coordinate system (WCS) after the clean procedure without any manual manipulation to associate the features observed in different bands. A systematic offset in the WCS of VLA may exist because of the astrometry worsened by the very extended baseline configurations and atmospheric conditions [2]. Under a typical condition [3], the VLA and ALMA observations are consistent in positions within 2\u03c3 (see also Paper I, Zhong et al. 2023). NW and SE galaxies show individual, putative disc structures in both line and continuum emissions of Cycle 4 observations. The molecular tidal bridge identified in ALMA Cycle 2 observations between SE and NW galaxies is not observed in either Cycle 4 or 6 observations because of its diffuseness and low molecular gas mass (see Emonts et al. 2015b; Lebowitz et al. 2023 for details). In addition to these two galaxies, the Cycle 4 dust continuum has identified a companion component (see panel (a) in Fig. 1), which is undetected in both Cycle 4 and 6 mom0 maps. In Cycle 2 observations, this companion was identified in CO(6-5) line emission while missing in the dust continuum, and argued to be a small companion galaxy (Emonts et al. 2015b). The highest-resolution Cycle 6 observations have revealed the detailed morphology of the Dragonfly galaxy. The CO(6-5) line emission of the NW galaxy is still disc-like but is highly compact with a 2D Gaussian component of 0.091\u2032\u2032(\u00b10.01\u2032\u2032) \u00d7 0.083\u2032\u2032(\u00b10.01\u2032\u2032), corresponding to a radius of RNW \u223c0.8 kpc, after being deconvolved from the beam. This size is more compact than RNW \u22731 kpc based on Cycle 2 and 4 observations.
To investigate whether such compactness is attributed to the non-detection of a significant fraction of extended molecular gas in Cycle 6 observations, we compare the integrated intensity ICO(6\u22125) calculated in three cycles (see \u00a73.2 and the rightmost panel in Fig. 1). In Cycle 6, the calculated ICO(6-5) = 1.79 \u00b1 0.13 Jy \u00b7 km s\u22121 is consistent with ICO(6-5) = 2.0 \u00b1 0.2 Jy \u00b7 km s\u22121 calculated by Emonts et al. (2015b) in Cycle 2 and ICO(6-5) = 1.98 \u00b1 0.13 Jy \u00b7 km s\u22121 calculated in Cycle 4 within 1\u03c3 uncertainty. This suggests that the bulk of molecular gas is concentrated within a sub-kpc scale disc, though there can be \u223c10 per cent of the diffuse molecular gas missed by the high-resolution beam. In the dust continuum, the NW galaxy has its flux density concentrated within a radius of \u223c0.07\u2032\u2032 (\u22480.6 kpc), agreeing well with the molecular gas distribution, but shows diffuse emissions extended to the east. As discussed in \u00a77.1.2, this extended feature may originate from the interactions between the two galaxies.
[2] https://help.almascience.org/kb/articles/what-is-the-absolute-astrometricaccuracy-of-alma
[3] https://science.nrao.edu/facilities/vla/docs/manuals/oss/performance/positionalaccuracy
Figure 1. A global view of the Dragonfly galaxy. (a) ALMA Cycle 4 continuum imaging (\u03c3 = 11 \u00b5Jy/beam) with contour levels of [3, 5, 10, 25, 40, 55] \u00d7 \u03c3. (b) ALMA Cycle 4 CO(6-5) mom0 map (\u03c3 = 23 mJy/beam \u00b7 km/s) with contour levels of [3, 5, 10, 15, 25] \u00d7 \u03c3. (c) ALMA Cycle 6 continuum imaging (\u03c3 = 6.5 \u00b5Jy/beam) with contour levels of [3, 5, 10, 20, 30] \u00d7 \u03c3. (d) ALMA Cycle 6 CO(6-5) mom0 map (\u03c3 = 13 mJy/beam \u00b7 km/s) with contour levels of [3, 5, 7, 9, 11, 13] \u00d7 \u03c3. Throughout all images, the world coordinates are identical. The black contours indicate the VLA 44 GHz observation in BnA-configuration (\u03c3 = 18 \u00b5Jy/beam) with contour levels of [5, 7, 20, 40, 60, 80] \u00d7 \u03c3, with the detection coinciding with the NW galaxy indicating the radio core and the detection adjacent to the SE galaxy indicating the SE hotspot. The yellow dashed line indicates the jet axis that links the double radio hotspots (see also Fig. 1 in Paper I; Zhong et al. 2023). The beam size of the corresponding image is marked in the bottom left corner. In panels (e)\u2013(h), we show line profiles extracted from a circular aperture that encloses contours defined by 3\u03c3.
The CO(6-5) line emission distribution of the SE galaxy in Cycle 6 significantly differs from that observed in Cycle 2 and 4 observations, and the zoom-in is shown in Fig. 2. The main feature of the SE galaxy is a linear structure (labeled by #Main) elongated along the southeast to northwest direction. To the north of the centroid of the SE galaxy, there are several clumps of CO(6-5) line emission (white contours in region #Tidal), which are in agreement with the distribution of the dust continuum. These clumps, viewed at a \u223c100 pc scale, form a filamentary structure on a \u223c600 pc scale and may come into existence through the interactions between the two galaxies (see \u00a77.1.2 for a detailed discussion). On the opposite side of these clumps, there is no line emission detected. An equivalent lack of emission is seen in the dust continuum as well, save for a sub-component (see next paragraph). Additionally, in HST NICMOS F160W imaging (Fig. C1), the SE galaxy is associated with a tidal tail extended over 10 kpc. Such a long tidal tail is commonly observed in wet mergers whose bulge separation is smaller than 5 kpc (Ren et al.
2020) due to the tidal stripping (Mo et al. 2010), that is, the tidal force strips a significant fraction of the molecular gas away from the galaxy. The fact that the double radio hotspots identified in VLA 4.7, 8.2, and 44 GHz observations are symmetric relative to the NW galaxy suggests that the RLAGN resides in the NW galaxy (see Fig. 1 in Paper I; Zhong et al. 2023). This is supported by the identification of the radio core that overlaps with the NW galaxy (see Fig. 1), though it is offset from the centroid of the ALMA Cycle 6 dust continuum by 0.04\u2032\u2032. The 237 GHz continuum at 0.024\u2032\u2032-resolution further reveals a sub-component adjacent to and offset from the bulk of the SE galaxy dust continuum by \u223c0.15\u2032\u2032, corresponding to \u223c1.3 kpc (Fig. 2). This sub-component has its location coincident with the SE hotspot with an offset of merely 0.008\u2032\u2032. It is then argued to be the radio hotspot originating from the interaction between the medium and the radio jet launched by the AGN, and its flux density is dominated by synchrotron radiation (Paper I, Zhong et al. 2023). If the medium at play is the ISM of the SE galaxy, the cool gas can be either blown away by the kinetic energy or heated up to a higher temperature by the thermal energy of the radio jet. 3.2 Line Profiles In the third column of Fig. 1, we show the spectra of NW and SE galaxies observed in Cycle 4 and 6. For both NW and SE galaxies, the line profiles are extracted using a circular aperture that encloses the spatial area defined by 3\u03c3-level contours. The physical properties measured and derived from the line profiles are listed in Table 2.
Table 2. Measured and Derived Physical Properties from the Line Profiles
Component | Observation | Center (km s\u22121) | FWHM (km s\u22121) | ICO(6-5) (Jy \u00b7 km s\u22121) | L\u2032CO(6\u22125) (K km s\u22121 pc2) | r65/10 | MH2 (M\u2299)
(1) | (2) | (3) | (4) | (5) | (6) | (7) | (8)
NW 1 | Cycle 4 CO(6-5) | \u2212350 \u00b1 27 | 174 \u00b1 63 | 0.23 \u00b1 0.08 | 1.2 \u00b1 0.4 \u00d7 10^9 | 17\u221236 | 1\u22122.1 \u00d7 10^9
NW 2 | Cycle 4 CO(6-5) | \u221217 \u00b1 9 | 310 \u00b1 22 | 1.75 \u00b1 0.1 | 9.4 \u00b1 0.5 \u00d7 10^9 | 13 | 2.1 \u00b1 0.1 \u00d7 10^10
NW 1 | Cycle 6 CO(6-5) | \u2212361 \u00b1 22 | 232 \u00b1 52 | 0.31 \u00b1 0.08 | 1.7 \u00b1 0.4 \u00d7 10^9 | 17\u221236 | 1.3\u22122.8 \u00d7 10^9
NW 2 | Cycle 6 CO(6-5) | \u221220 (fixed) | 414 \u00b1 32 | 1.48 \u00b1 0.1 | 8.0 \u00b1 0.5 \u00d7 10^9 | 13 | 1.8 \u00b1 0.1 \u00d7 10^10
SE 1 | Cycle 4 CO(6-5) | \u2212220 | 141 \u00b1 37 | 0.19 \u00b1 0.06 | 1.0 \u00b1 0.3 \u00d7 10^9 | 17\u221236 | 0.8\u22121.7 \u00d7 10^9
SE 2 | Cycle 4 CO(6-5) | \u221259 \u00b1 13 | 171 \u00b1 40 | 0.5 \u00b1 0.1 | 2.7 \u00b1 0.5 \u00d7 10^9 | 13 | 6.0 \u00b1 1.2 \u00d7 10^9
SE 3 | Cycle 4 CO(6-5) | 86 \u00b1 11 | 89 \u00b1 25 | 0.17 \u00b1 0.08 | 0.9 \u00b1 0.4 \u00d7 10^9 | 13 | 2.0 \u00b1 1.0 \u00d7 10^9
SE 1 | Cycle 6 CO(6-5) | \u2212141 \u00b1 28 | 260 \u00b1 58 | 0.86 \u00b1 0.19 | 4.6 \u00b1 1.0 \u00d7 10^9 | 13 | 1.0 \u00b1 0.2 \u00d7 10^10
SE 2 | Cycle 6 CO(6-5) | \u22125 \u00b1 13 | 69 \u00b1 33 | 0.19 \u00b1 0.13 | 1.0 \u00b1 0.7 \u00d7 10^9 | 13 | 2.3 \u00b1 1.5 \u00d7 10^9
SE 3 | Cycle 6 CO(6-5) | 86 \u00b1 12 | 78 \u00b1 26 | 0.25 \u00b1 0.09 | 1.3 \u00b1 0.5 \u00d7 10^9 | 13 | 3.0 \u00b1 1.1 \u00d7 10^9
SE #Main #1 | Cycle 6 CO(6-5) | \u2212141 \u00b1 32 | 192 \u00b1 66 | 0.26 \u00b1 0.09 | 1.4 \u00b1 0.5 \u00d7 10^9 | 13 | 3.1 \u00b1 1.1 \u00d7 10^9
SE #Main #2 | Cycle 6 CO(6-5) | 46 \u00b1 18 | 142 \u00b1 32 | 0.26 \u00b1 0.09 | 1.4 \u00b1 0.5 \u00d7 10^9 | 13 | 3.1 \u00b1 1.1 \u00d7 10^9
SE #Tidal | Cycle 6 CO(6-5) | \u2212171 \u00b1 16 | 288 \u00b1 38 | 0.39 \u00b1 0.04 | 2.1 \u00b1 0.2 \u00d7 10^9 | 17\u221236 | 1.7\u22123.6 \u00d7 10^9
Column (1): component as labeled in the line profiles in Fig. 1 and Fig. 2. Column (2): observation. Column (3): velocity center, where the zero velocity is set as 236.69 GHz, corresponding to the CO(6-5) line emission at z = 1.9214. Column (4): FWHM of the CO(6-5) component. Column (5): the integrated intensity of the CO(6-5) line emission. Column (6): the integrated brightness temperature of the CO(6-5) line emission. Column (7): the line intensity ratio used to convert ICO(6\u22125) to ICO(1\u22120) to estimate the molecular gas mass (Emonts et al. 2015b; see texts in \u00a73.3). Column (8): the molecular gas mass.
The NW galaxy shows a clear line splitting into two velocity components and a broadening of the main component, and thus the line profile can be fitted by two Gaussian components with velocity center (vcen), amplitude, and full width at half maximum (FWHM) as fitting parameters. In Cycle 4, the main component (NW 2 in Table 2) has vcen = \u221217 \u00b1 9 km s\u22121 and FWHM = 310 \u00b1 22 km s\u22121, consistent with the measurements of vcen = \u221230 \u00b1 10 km s\u22121 and FWHM = 360 \u00b1 20 km s\u22121 in low-resolution Cycle 2 observations (Emonts et al. 2015b). In Cycle 6, we first applied Hanning smoothing to the spectrum and then adopted a 2-component fitting by fixing vcen of the main component (NW 2) to -20 km s\u22121, the value found in both Cycle 2 and 4 observations within 1\u03c3 uncertainty. Otherwise, given the low signal-to-noise ratio, the fitting algorithm returned a third component with uncertainties exceeding 100%. The resultant Cycle 6 NW 2 has an FWHM of 414 \u00b1 32 km s\u22121, larger than that of Cycle 4 but consistent with Cycle 2 within 1\u03c3 uncertainty. The blueshifted component (NW 1) is offset from the main component by at least 300 km s\u22121 in all observations.
It has a broader FWHM in Cycle 6 than in Cycle 4 but almost the same amplitude, leading to a slightly higher ICO(6\u22125) by 0.08 Jy \u00b7 km s\u22121. As we will discuss in \u00a77.3.4, this blueshifted component (NW 1) may represent the outflow driven by the expanding bubble attributed to the radio jet in a 3D shell-like geometry (e.g., Garc\u00eda-Burillo et al. 2019). The SE galaxy has its line profile decomposable into three Gaussian components in Cycle 4, albeit with large uncertainties, and the sum of the integrated intensities ICO(6\u22125),SE = 0.86 \u00b1 0.14 Jy/beam \u00b7 km s\u22121 lies far below the value of 1.4 \u00b1 0.2 Jy/beam \u00b7 km s\u22121 observed in Cycle 2 data by Emonts et al. (2015b). In Cycle 6, ICO(6\u22125),SE is 1.3 \u00b1 0.37 Jy/beam \u00b7 km s\u22121, showing no significant deviations from that in the literature, though the SE galaxy shows a much more complex molecular gas distribution in Cycle 6. Additionally, in Cycle 2 and 4 observations, the brightest component has a center of \u223c60 km s\u22121 (SE 2 of Cycle 4 in Table 2). However, the blueshifted component (SE 1 of Cycle 6 in Table 2) has its ICO(6\u22125) dominant over the other two components in Cycle 6 observations. To understand the origin of such a change in the brightest component, we zoom in on the SE galaxy and divide its mom0 map into two regions: #Main and #Tidal, as shown in the left column of Fig. 2. #Main indicates the main structure in the SE galaxy and the region #Tidal is named after its possible tidal origin to be discussed in \u00a77.1. The spectra extracted from these regions are shown in the right column of Fig. 2, and the measured properties are listed in Table 2. The #Main region shows a bimodal distribution of the line profile characteristic of a rotating structure.
The #Tidal region contains \u223c30 per cent of the molecular gas in the SE galaxy and is globally blueshifted, which significantly contributes to the blueshifted SE 1 in Cycle 6. 3.3 Molecular gas mass We calculated the integrated source brightness temperature from the CO(6-5) line intensity using the following equation (Carilli & Walter 2013): L\u2032CO(6-5) = 3.25 \u00d7 10^7 \u00d7 S_CO(6-5)\u2206\u03bd \u00d7 D_L^2 / [(1 + z)^3 \u03bd_obs^2] K km s\u22121 pc2, (1) where S_CO(6\u22125)\u2206\u03bd is the integrated intensity of the CO(6-5) line in Jy \u00b7 km s\u22121, D_L is the luminosity distance in Mpc, and \u03bd_obs is the observed frequency of the CO(6-5) line in GHz. To estimate the molecular gas mass, L\u2032CO(6-5) has to be converted to L\u2032CO(1\u22120) via the line intensity ratio r65/10 = S_CO(6\u22125)\u2206\u03bdCO(6\u22125)/S_CO(1\u22120)\u2206\u03bdCO(1\u22120). The molecular gas mass can then be calculated using (Carilli & Walter 2013): MH2 = \u03b1CO \u00d7 L\u2032CO(1-0) M\u2299, (2) where \u03b1CO is the CO-to-H2 conversion factor. An \u03b1CO \u223c 0.8 M\u2299 (K km s\u22121 pc2)\u22121 (Downes & Solomon 1998) found in the starburst nuclei of ULIRGs on scales < 1 kpc is adopted as a conservative estimation. The CO(1-0) line emission was observed by the Australia Telescope Compact Array (ATCA) with a beam size of 4.0\u2032\u2032 \u00d7 1.3\u2032\u2032, incapable of resolving the two galaxies (Emonts et al. 2015a). By tapering the CO(6-5) observation from Cycle 2 to the same resolution as the CO(1-0) data, Emonts et al. (2015b) compared CO(6-5) and CO(1-0) spectra extracted from the same region covering the entire Dragonfly galaxy and found that the bulk molecular gas component has a line intensity ratio r65/10 \u223c13. Scaling the CO(1-0) spectrum by 13 and subtracting it from the CO(6-5) spectrum, Emonts et al.
(2015b) found large CO(6-5) residuals corresponding to the high-excitation gas at the blueshifted side with v \u2264\u2212200 km s\u22121. Including 2\u03c3 measurement uncertainties as an upper limit for ICO(1\u22120), Emonts et al. (2015b) loosely constrained 17 \u2264 r65/10(blue) \u2264 36 for the blueshifted, high-excitation components. We adopt this high ratio for the molecular gas mass estimation of the blueshifted components, and the corresponding MH2 are listed in Table 2. 4 SED FITTING 4.1 Optical-to-FIR SED The electromagnetic radiation emitted by galaxies at multiple wavelengths provides proxies to investigate the formation and evolution of the galaxy. Fitting the observed spectral energy distribution (SED) with a combination of various templates allows us to disentangle the galaxy emission complexity and compute both host galaxy and AGN physical properties. We performed SED fittings using CIGALE (Code Investigating GALaxy Emission; Burgarella et al. 2005; Noll et al. 2009; Boquien et al. 2019). Based on the new photometric data from optical-to-near IR (NIR) collected from the Dark Energy Camera (DECam) constructed for the Dark Energy Survey (DES; Abbott et al. 2018, 2021) in addition to the existing IR photometry with Spitzer and Herschel, we re-examine the physical properties including SFR and M\u22c6 and compare them with those reported in the literature (Drouart et al. 2014; De Breuck et al. 2010; Falkendal et al. 2019). All photometric data used for the SED fitting are summarized in Table A1. We note that, since there are no spatially resolved optical-to-far IR (FIR) data available, the SED fitting can only compute the physical properties of the entire system, including NW and SE galaxies. Recent studies of AGN host galaxies and LIRG populations through SED fitting favor a star formation history (SFH) with a recent burst (Toba et al. 2021; Dey et al. 2022; Georgantopoulos et al. 2023).
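The mass recipe of Sect. 3.3 (Eqs. 1 and 2) can be checked numerically against a Table 2 entry; a minimal sketch in Python, assuming the NW 2 Cycle 4 values and our reading of the r65/10 flux-ratio definition (the 36/r65/10 factor below converts L\u2032 between the 6-5 and 1-0 transitions because L\u2032 scales as the integrated flux divided by the squared rest frequency; it is consistent with the tabulated values but is not spelled out in the text):

```python
# Sanity check of the Sect. 3.3 mass recipe (Eqs. 1-2) using the NW 2 / Cycle 4
# entry of Table 2. Input values taken from the paper; the 36/r65/10 conversion
# is our inference from the definitions.
S_DV   = 1.75        # integrated CO(6-5) intensity [Jy km/s]
NU_OBS = 236.69      # observed CO(6-5) frequency [GHz]
Z      = 1.9214      # redshift
D_L    = 15200.0     # luminosity distance [Mpc]
R6510  = 13.0        # CO(6-5)/CO(1-0) integrated-intensity ratio
A_CO   = 0.8         # CO-to-H2 conversion factor [Msun / (K km/s pc^2)]

# Eq. (1): CO(6-5) line luminosity in K km/s pc^2
lp_65 = 3.25e7 * S_DV * D_L**2 / ((1.0 + Z) ** 3 * NU_OBS**2)

# Convert to CO(1-0): intensity ratio r65/10, frequency ratio squared (6^2 = 36)
lp_10 = lp_65 * 36.0 / R6510

# Eq. (2): molecular gas mass
m_h2 = A_CO * lp_10

print(f"L'_CO(6-5) ~ {lp_65:.2e} K km/s pc^2, M_H2 ~ {m_h2:.2e} Msun")
```

Within rounding, this reproduces the tabulated L\u2032CO(6\u22125) of 9.4 \u00d7 10^9 K km s\u22121 pc2 and MH2 of 2.1 \u00d7 10^10 M\u2299 for NW 2 in Cycle 4.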
This is naturally expected for galaxy mergers, especially late-stage mergers that have small bulge separations. In these systems, the gas inflow towards the galaxy center \u2013 attributed to angular momentum removal by the gravitational torque \u2013 may enhance the SFR within this central region (Moreno et al. 2015, 2021; McElroy et al. 2022; Shah et al. 2022). Therefore, we assumed a delayed SFH model with an additional burst for the host galaxy (Ciesla et al. 2015). In addition to the e-folding time (\u03c4main) and age (tmain) of the main stellar population, \u03c4burst, tburst, and a mass fraction of the late burst population (fburst) can be parameterized. We adopted the Bruzual & Charlot (2003) stellar population synthesis library to model the stellar emission with a solar metallicity of 0.02, a Chabrier initial mass function (Chabrier 2003), and a standard model for nebular emission (Inoue 2011). Although AGNs would enrich their environments, resulting in higher chemical abundance, we consider such an enrichment negligible in this HyLIRG (Zubovas et al. 2013; Taylor & Kobayashi 2015). The dust emission was modeled adopting a dl2014 template (Draine et al. 2007, 2014). This template includes a parameter qPAH which optimizes the mass fraction of the polycyclic aromatic hydrocarbon (PAH) emission that dominates the MIR emission, a radiation field Umin that models the diffuse dust emission, and a power-law distribution of the dust corresponding to the star-forming regions with index \u03b1. The index \u03b1 is defined such that dMd(U)/dU \u221d U\u2212\u03b1, where Md is the dust mass and U is the radiation field intensity. The initial estimates of these parameters are taken from recent studies of AGN host galaxies (Yamada et al. 2023; Buat et al. 2021). UV and optical emissions from the AGN will be absorbed and then re-emitted at longer wavelengths owing to the existence of the obscuring structure, i.e., the dusty torus.
To simulate this reddening effect, we adopted SKIRTOR, a clumpy two-phase torus model derived from a modern radiative-transfer method, assuming that the dusty torus is made of dusty clumps rather than a smooth structure (Stalevski et al. 2012, 2016). This model depends on several parameters, including the average edge-on optical depth at 9.7 µm, the half-opening angle of the dust-free cone, the inclination (i, 0° for face-on and 90° for edge-on), and the AGN fraction, defined as the ratio of the AGN IR luminosity to the total (AGN+host) IR luminosity. The models described above and the values of the corresponding parameters are summarized in Table 3.

4.2 Radio SED

In the radio regime, CIGALE models the flux densities attributed to different mechanisms, including synchrotron radiation from the AGN and star formation, and nebular continuum emission that involves mainly free-free emission (Boquien et al. 2019). Four parameters control the model of the synchrotron radiation: the FIR-to-radio correlation parameter qIR, the radio loudness RAGN defined as the ratio L5GHz/L2500Å, where L2500Å reflects the AGN disc luminosity, and the spectral indices of the star-formation (αSF) and AGN-related (αAGN) synchrotron radiation. In normal star-forming galaxies (SFGs), the synchrotron radiation is primarily produced by electrons accelerated in supernova remnants. In the Milky Way and many other SFGs, the overall slope of the observed radio SED has a mean value of αSF ≈ −0.75 at ν ≈ 1 GHz in a negative convention (S ∝ ν^α) (Condon & Ransom 2016; Klein et al. 2018). Hence, we fix αSF = −0.75 for the synchrotron radiation associated with star formation. In the Dragonfly galaxy, the bulk of the synchrotron radiation originates from the SE hotspot and its associated diffuse radio lobe and can be significantly enhanced (see §6.2).
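As a minimal sketch of the negative sign convention S ∝ ν^α quoted above: with αSF = −0.75 a spectrum falls with frequency. The reference flux density and the target frequency below are hypothetical inputs chosen purely for illustration.

```python
def scale_flux(s_ref_mjy, nu_ref_ghz, nu_ghz, alpha):
    """Scale a flux density along a power-law radio spectrum S ∝ ν^α."""
    return s_ref_mjy * (nu_ghz / nu_ref_ghz) ** alpha

# Hypothetical 10 mJy source at 1.4 GHz extrapolated to 4.7 GHz with α_SF = −0.75:
s_47 = scale_flux(10.0, 1.4, 4.7, alpha=-0.75)  # steep spectrum -> fainter at 4.7 GHz
```

With a steep (α < 0) index the extrapolated flux density decreases toward higher frequency, whereas a flat core (αAGN = 0, as adopted later) keeps it constant.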
Therefore, the observed radio emission no longer traces the intrinsic AGN activity but the physical conditions in situ. Thanks to the VLA 44 GHz observations, the radio core associated with the RLAGN in the NW galaxy has been identified (Fig. 1). Such a core component generally has a spectral slope flatter than those of radio hotspots and lobes because of synchrotron self-absorption and/or free-free absorption arising from the dense environment. We adopt a spectral index of αAGN = 0, a commonly observed value in flat-spectrum radio quasars, which are core-dominated populations (Chen et al. 2009). RAGN and qIR are fixed at 10 and 2.3, respectively. So far, SED fittings for powerful radio galaxies have primarily made use of S1.4GHz without spatially decomposing the emission or accounting for enhancements of the flux densities; the empirical applications of qIR and RAGN to the radio fitting are motivated in this manner. Therefore, considering only the radio core of the Dragonfly galaxy, while excluding the radio hotspots and lobes, may lead to a systematic bias. As a comparison, we then perform a fitting using the 1.4 GHz data instead of the 44 GHz data, assuming a spectral index of αAGN = 1.0. In this fitting, we fixed RAGN = 1000 and qIR = 0.

MNRAS 000, 1–19 (2023)

Revisiting the Dragonfly Galaxy II

Figure 2. Zoom-in view of the SE galaxy, with white contours for CO(6-5) and black contours for the 237 GHz continuum overlaid on the CO(6-5) mom0 map based on Cycle 6 observations. We divide the mom0 map of the SE galaxy into two regions: #Main and #Tidal. The #Tidal region is named after its possible association with the tidal bridge (see §7.1.1). There is a continuum "sub-component" ∼1 kpc to the west of #Main, coincident with the location of the SE hotspot. The line profiles extracted from the #Main and #Tidal regions using the elliptical apertures indicated in the figure are shown in the right column.
The structure in #Main shows a bimodal line profile characteristic of a rotating structure. The ellipse in the bottom left corresponds to the beam size.

Table 3. CIGALE user-specified components for SED fitting described in §4

Delayed star formation history with additional burst: delayedSFH (Ciesla et al. 2015)
  e-folding time of the main population [Myr]: τmain = 1000, 2000, 5000, 8000
  Age of the main stellar population [Myr]: tmain = 100, 500, 1000, 1500, 2000, 4000
  e-folding time of the starburst [Myr]: τburst = 10, 20, 50, 100, 200
  Age of the starburst [Myr]: tburst = 10, 20, 50, 100
  Mass fraction attributed to the starburst: 0.1, 0.3, 0.5
Single-age stellar population (SSP): BC03 (Bruzual & Charlot 2003)
  Initial mass function: Chabrier (2003)
  Metallicity: Z = 0.02 (fixed)
Nebular emission (Inoue 2011)
  Ionization parameter: log U = −3.0 (fixed)
Dust attenuation: dustatt_modified_starburst (Calzetti et al. 2000)
  Interstellar reddening of the nebular lines: E(B−V)lines = 0.5, 1, 1.5
  Fraction of continuum to line color excess: E(B−V)factor = 0.44 (fixed)
Dust emission: dl2014 (Draine et al. 2007, 2014)
  Mass fraction attributed to PAH emission: qPAH = 0.47, 1.12, 1.77, 2.50
  Minimum radiation field: Umin = 10, 30, 50
  Power-law slope relating dust mass and radiation field intensity: α = 2, 3
  Fraction illuminated from Umin to Umax: γ = 0.02 (fixed)
Clumpy two-phase torus model: SKIRTOR2016 (Stalevski et al. 2012, 2016)
  Average edge-on optical depth at 9.7 µm: τ = 3, 7, 11
  Angle between the disc plane and the dust torus [°]: θ = 60, 70, 80
  Inclination angle relative to the LOS [°]: i = 20, 30, 40
  AGN fraction (IR luminosity of the AGN relative to the total): fAGN = 0.1, 0.3, 0.5

Yuxing Zhong et al.

Table 4. Host galaxy and AGN physical properties computed using CIGALE. Columns: 44 GHz fit (a), 1.4 GHz fit (b), no-radio fit, Drouart et al. (2014), Falkendal et al. (2019), De Breuck et al.
(2010).
  M⋆ [×10¹⁰ M⊙]: 9.2 ± 2 | 9.2 ± 4 | 11 ± 3 | – | – | 58
  Recent burst fraction: 0.34 ± 0.15 | 0.46 ± 0.16 | 0.27 ± 0.16 | – | – | –
  SFR [M⊙ yr⁻¹]: 2700 ± 300 | 2100 ± 100 | 2300 ± 200 | 3100 ± 200 | 1900 | –
  SFR10Myr [M⊙ yr⁻¹]: 3100 ± 400 | 2200 ± 200 | 2500 ± 300 | – | – | –
  SFR100Myr [M⊙ yr⁻¹]: 600 ± 200 | 900 ± 300 | 900 ± 200 | – | – | –
  Main age [Myr]: 930 ± 710 | 570 ± 650 | 650 ± 670 | – | – | –
  Burst age [Myr]: 13 ± 6 | 36 ± 26 | 28 ± 25 | – | – | –
  LAGN,bol [×10⁴⁶ erg s⁻¹]: 0.9 ± 0.1 | 2.9 ± 0.1 | 3 ± 0.3 | 1.7 ± 0.5 | 3.7 ± 0.5 | –
  AGN fraction: ∼0.1 | ∼0.3 | ∼0.3 | ∼0.2 | ∼0.4 | –
(a) The spatially resolved 44 GHz flux density of the radio core that overlaps with the NW galaxy is used for the fitting.
(b) The unresolved total 1.4 GHz flux density, including the radio core, hotspots, and lobes, is used for the fitting.

4.3 Fitting Results

We have performed three fits to the SED: first based on S1.4GHz of the total radio emission, then based on S44GHz,core of the radio core, and finally without radio data. We summarize the best-fitting results in Table 4 and plot the modeled SEDs in Fig. 3. The stellar mass of the NW plus SE galaxies is M⋆,NW+SE ∼ (9−11) × 10¹⁰ M⊙, with an instantaneous SFR of ∼2000−3000 M⊙ yr⁻¹. For all fittings, the instantaneous SFR is close to the SFR averaged over 10 Myr but exceeds that averaged over 100 Myr, suggesting recent starbursts. We performed a test without an additional starburst by fixing the starburst mass fraction to 0. In contrast to the fiducial fitting with the recent burst, the test fitting includes only a single population that forms within 100 Myr, still suggesting recent star-forming activities.

4.3.1 Comparisons with literature

Using Spitzer and Herschel data (3−500 µm), Drouart et al.
(2014) decomposed the IR luminosity into AGN and starburst components by simultaneously modeling the AGN and host-galaxy contributions to the SED. They found L_IR,SB = 1.72 × 10¹³ L⊙ attributed to star-forming activities and estimated SFR ∼3000 M⊙ yr⁻¹ based on the relation between SFR and IR luminosity (Kennicutt 1998). Using a non-starburst SED template and combining low-frequency VLA observations at 1.4, 4.7, and 8.2 GHz, Falkendal et al. (2019) found SFR ∼1900 M⊙ yr⁻¹. Our SED fitting analysis, involving an SFH with an additional starburst, finds SFR ∼2100−2700 M⊙ yr⁻¹, which agrees with Drouart et al. (2014) and is slightly higher than Falkendal et al. (2019). The stellar mass from our fitting is merely one-fifth of the value M⋆ ∼ 5 × 10¹¹ M⊙ reported by De Breuck et al. (2010). Such a difference could be explained by the choice of elliptical-galaxy templates instead of star-forming ones in the SED fitting performed by De Breuck et al. (2010), the lack of IR data at λ ≥ 70 µm, and the lack of optical photometry. An elliptical-galaxy SED dominated by old stellar populations is a reasonable choice for low-redshift populations, since powerful radio galaxies in the nearby Universe are found to preferentially reside in massive, early-type galaxies (ETGs), typical of the final evolutionary stage of an AGN (Hopkins et al. 2008; Shankar et al. 2020). However, RLAGNs at high redshifts are often hosted by mergers that are accompanied by intense star-forming activities, so the SED templates of ETGs are not always appropriate for HzRGs. In our fitting including the 44 GHz data, the AGN contributes ∼0.1 of the total IR luminosity, significantly smaller than the value of ∼0.4 given by Falkendal et al. (2019).
Our fitting including the observed 1.4 GHz flux density, as well as the one without radio data, returns a similar AGN fraction of ∼0.3, compared with ∼0.2 given by Drouart et al. (2014) and ∼0.4 given by Falkendal et al. (2019). These results are naturally expected: when the radio data are included in our fitting, the AGN disc luminosity depends on the normalization of the luminosity at 2500 Å through RAGN. The 1.4 GHz data lead to a higher luminosity at 2500 Å than the 44 GHz data do because of a much larger radio loudness. The AGN component dominates over the dust component up to ∼80 µm in the 1.4 GHz case but only up to ∼20 µm in the 44 GHz case. Accordingly, the AGN contributes more to the total IR luminosity, resulting in a larger fAGN.

4.3.2 Caveats to fitting results

Since there are no spatially resolved optical-to-FIR data available for the SED fitting, the reported physical properties of the AGN may not trace solely the RLAGN that resides in the NW galaxy. If the SE galaxy also hosts an AGN, though we find no signature of one yet, the AGN component presented here is the sum of the NW AGN and the potential SE AGN. The fitting of the synchrotron emission does not include detailed physical processes – how the jets interact with the medium in situ – but depends on the empirical qIR and RAGN for radio galaxies with lobes and/or hotspots, and the derived LAGN,bol is regulated by RAGN when radio data are included. Therefore, a fitting based on the radio core that neglects radiation from the radio hotspots/lobes requires values of qIR and RAGN appropriate for weaker AGNs, putting a lower limit on the LAGN,bol of our target. On the other hand, the imbalanced flux densities between the SE and NW hotspots at 1.4 GHz are suggestive of synchrotron radiation possibly enhanced by environmental differences or the Doppler boosting effect (see Paper I, Zhong et al. 2023, and §6.2).
The fitting then requires values of qIR and RAGN for more powerful AGNs, which may result in an overestimated LAGN,bol. After removing the radio data, we find fAGN and LAGN,bol consistent with the results based on 1.4 GHz. This may suggest that the application of qIR and RAGN is reliable even when dealing with powerful radio galaxies with biased flux densities. However, we still urge caution when trying to retrieve LAGN,bol for these HzRGs. In conclusion, the Dragonfly galaxy is a massive starburst system with M⋆ ∼ (0.9−1.1) × 10¹¹ M⊙ and an SFR of ∼2000−3000 M⊙ yr⁻¹. The RLAGN reaches a QSO-level luminosity of LAGN,bol ∼ (0.9−3) × 10⁴⁶ erg s⁻¹. We can take the fitting using 44 GHz as the lower limit for the AGN properties, which suffices to show that the RLAGN is radiatively efficient (λEdd > 3 × 10⁻²; see §6.1 for detailed discussions). In §7.2 and §7.3, we will further discuss the AGN activity by investigating the Eddington ratio and outflow properties.

Figure 3. The best-fitting models of the UV-to-radio SED for the Dragonfly galaxy. The upper figure shows the fitting up to 44 GHz of the radio core. The lower one is up to 1.4 GHz, without the 44 GHz flux density. In each figure, the upper panel shows the observed and modeled flux densities against the observed wavelength, and the lower panel shows the residuals of the model in each corresponding band. The reduced chi-square defined by CIGALE is χ²_red,CIGALE = χ²/(number of data points − 1), where the number of degrees of freedom is not involved. We re-calculated χ²_red considering the number of fitted free parameters.

5 GAS KINEMATICS

5.1 Modeling with 3DBarolo

We investigated the molecular gas kinematics by fitting a kinematic tilted-ring model to the CO(6-5) line emission data cube using 3DBarolo (Di Teodoro & Fraternali 2015).
3DBarolo assumes that the rotating gaseous disc of a galaxy is made up of a series of concentric rings, each described by geometrical and kinematical parameters, including the spatial center (x0, y0), systemic velocity (Vsys), position angle (PA), inclination angle (i), scale height of the disc (z0), rotation velocity (Vrot), and velocity dispersion (σV). The modelings are based on the C46 dataset. We adopted 8 rings for the NW galaxy and 9 rings for the SE galaxy. We chose a radial separation of 1/5 of the beam minor axis (Ramos Almeida et al. 2022). We tested modelings with larger separations, up to beam_minor/2, and found similar results. The outermost ring of each galaxy was limited to be smaller than the aperture that encloses the 3σ contours in the mom0 map, to exclude possible sub-structures such as tidal tails/spiral arms. Nonetheless, as we will see in §5.3 and Fig. 4, these features still contaminate the velocity field on the redshifted side of the NW galaxy. We set the scale height to 0, adopting a thin-disc approximation in which the effect of any vertical structure is neglected (Ramos Almeida et al. 2022; Lelli et al. 2022). We fixed the kinematic centers of both galaxies to the coordinates of the 1.2 mm dust continuum peak positions. These coordinates were uniform such that the rings remained concentric. The rotation velocity, velocity dispersion, systemic velocity, position angle, and inclination angle were left as free parameters. We adopted a two-stage fitting strategy: the algorithm performs a first fitting stage to estimate all free parameters; then, after regularizing the free parameters, it proceeds to a second stage in which the radial profiles of the inclination and position angles follow a Bezier function and the kinematical parameters (Vrot and σV) remain free (Di Teodoro & Fraternali 2015). In the upper row of Fig.
4, we show the velocity field (mom1) and velocity dispersion (mom2) maps generated directly from the observed CO(6-5) data cubes by excluding pixels below 3σ. The noise level is calculated by averaging the standard deviation of each channel of the data cube. The position-velocity diagrams (PVDs) are shown in the lower row of Fig. 4 for the NW and SE galaxies, respectively. The PVDs are extracted along the kinematic major and minor axes indicated by arrows in the mom1 and mom2 maps, with a slit width equal to the FWHM of the synthesized beam along the major axis; they are produced automatically by running 3DBarolo. The width of the region used for the extraction is the same as that of the region shown in Figs. D1 and D2.

5.2 SE Galaxy

The PVDs of the SE galaxy clearly show ordered rotation along the major axis. A clear velocity gradient, as well as the zero-velocity region, can be seen in the main structure in the mom1 map generated from the Cycle 6 observations (see Fig. B1), which reveals a PA of 132°. This value is consistent with the PA = 130° retrieved by 3DBarolo, suggesting that the rotating structure in the PVD corresponds to the structure in #Main. This is also supported by the fact that the main structure has a bimodal line profile characteristic of rotation (Fig. 2). In addition to this rotational molecular gas, the PVD along the minor axis shows non-circular motions at offsets > 0.05′′. The component with VLOS ∼ −100 km s⁻¹ corresponds to #Tidal but is partially smeared because of the larger beam size of C46. Because of the tidal effect, which may give rise to the tidal bridge between the two galaxies (see §7.1), the filamentary structure in #Tidal cannot simply be treated as part of the rotation subject to the gravitational force. The modeling returns an inclination angle of 28°. This value varies by only 1° from the innermost to the outermost rings, despite the gas being distorted by the tidal force.
With this inclination angle, we can calculate the ratio of the rotation velocity (Vrot = VLOS/sin i) to the velocity dispersion for the series of concentric rings returned by 3DBarolo. These rings have a minimum ratio of Vrot/σV ≥ 5, indicative of a rotation-dominated disc (Simons et al. 2017).

Figure 4. From left to right, the panels show the velocity field (mom1) and velocity dispersion (mom2) maps generated from the combined C46 dataset by excluding pixels below 3σ, and the position-velocity diagrams (PVDs) extracted from the observed and modeled data cubes. The upper row shows the results for the NW galaxy and the lower row for the SE galaxy. In the PVDs, the color code (with black contours) indicates the observational data, the cyan contours indicate the modeled data, and the yellow dots indicate the modeled rotation velocity. ∆VLOS is the observed velocity field VLOS corrected for the systemic velocity (∆VLOS = ±||VLOS| − |Vsys||). The black arrows in the mom1 maps indicate the slits of the major and minor axes used to extract the PVDs. The slits here only indicate the extraction direction; the slit width is approximately the beam size.

5.3 NW Galaxy

The high-resolution observations reveal non-circular and tangential motions that are diluted by the beam-smearing effect in Cycle 2. These non-circular motions include the massive (MH2 ∼ 2 × 10⁹ M⊙), lopsided molecular outflow (see §7.3), which corresponds to NW−1 in Fig. 1. This outflow component is clearly reflected in the NW galaxy PVDs at VLOS ≤ −300 km s⁻¹ (Fig. 4). The diffuse molecular gas in the northeast part (see the C46 moment maps in Fig. 4 and Off-2 in Fig. 2 of Lebowitz et al. 2023) is another origin of non-circular motions and is labeled in the PVD along the minor axis.
This additional CO(6-5) line emission has −100 ≲ VLOS ≲ 0 km s⁻¹ at offsets < −0.05′′, as clearly shown in the C46 PVDs along the major axis. At the current stage, lacking observations that primarily trace the cold molecular gas, we cannot confirm whether this feature originates from a spiral arm, and thus belongs to the rotation, or stems from tidal effects. In the C46 mom1 map, as well as indicated by the green curve in the observational data constructed by 3DBarolo ("VELOCITY" in Fig. D1), the projected zero-velocity region is curved. This curvature has been reproduced in simulations of a rotating disc plus jet-driven outflows, showing that the kinematic centers can be curved rather than lying along a straight line (Meenakshi et al. 2022b). Consequently, the kinematic major axis and inclination angle of the CO disc cannot be unambiguously defined, as we discuss below. The non-circular motions make it more difficult to model the rotational velocity field. Depending on the initial guesses of the position and inclination angles, the modeled PA ranges within [69°, 82°], and the inclination angle ranges within [27°, 55°]. By exploring the PA-inclination parameter space using the SPACEPAR task of 3DBarolo, we found that the PA and inclination angles are degenerate. The main kinematic parameter influenced by this loosely constrained parameter space is the rotation velocity Vrot, as it is calculated by Vrot = VLOS/sin i, where VLOS is the observed velocity. We then directly estimated the inclination angle using (Courteau et al. 2014)

i = cos⁻¹ √[((b/a)² − q0²)/(1 − q0²)], (3)

where a is the semi-major axis, b is the semi-minor axis, and q0 is the axial ratio of a galaxy viewed edge-on. Adopting q0 = 0.13 for late-type galaxies (Hall et al.
2012), the inclination is 24°, based on the Cycle 6 observations in which the NW galaxy is spatially resolved. Hence, although we cannot determine the precise geometry of the NW galaxy, i ∼ 27° can be used to set an upper limit on Vrot, which is ∼520 km s⁻¹. The modeled rings have a minimum Vrot/σV of ∼3 and a maximum of ∼50, suggesting that the NW galaxy is rotation-dominated as well. Although this ratio varies strongly because of the large uncertainties arising from the non-circular motions, in the low-resolution Cycle 2 observations, in which most of the non-circular motions are diluted, the velocity field map of the NW galaxy shows distinct blueshifted and redshifted regions, indicating a rotating disc (Emonts et al. 2015b; Lebowitz et al. 2023).

5.4 Dynamical Mass

As argued by Emonts et al. (2015b), the two galaxies should have low inclinations (10−20 degrees) for the dynamical mass to be comparable with the high stellar mass M⋆ ∼ 5.8 × 10¹¹ M⊙ computed by De Breuck et al. (2010). However, based on our SED fitting results, the stellar mass is M⋆ ∼ (0.9−1.1) × 10¹¹ M⊙, which differs from the literature value by a factor of 5 but should be a more robust estimate (see §4.3). In this case, neither has the ALMA Cycle 2 observation significantly missed the extent of the rotating discs, nor is an extremely low inclination required for the dynamical mass to match the stellar mass. The inclination angle estimated from modeling the gas kinematics with 3DBarolo is 28° for the SE galaxy and 27°, likely a lower limit (see §5.3), for the NW galaxy. We then estimate the dynamical mass of each galaxy by (Courteau et al. 2014)

Mdyn = 2.33 × 10⁵ R V²obs / sin²i [M⊙], (4)

where R is the radius along the major axis, Vobs is the observed rotation velocity, and i is the inclination angle.
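Equations (3) and (4) can be sketched as follows. The axis ratio b/a below is a hypothetical input (the measured value is not quoted in the text), while the dynamical-mass inputs use the NW-galaxy numbers given in the next paragraph (R = 0.8 kpc, Vobs = 236 km s⁻¹, i = 27°).

```python
import math

def inclination_deg(b_over_a, q0=0.13):
    """Eq. (3): disc inclination from the projected axis ratio b/a,
    with q0 the axial ratio of a galaxy viewed edge-on (0.13 for late types)."""
    cos_i = math.sqrt((b_over_a**2 - q0**2) / (1.0 - q0**2))
    return math.degrees(math.acos(cos_i))

def dynamical_mass_msun(radius_kpc, v_obs_kms, incl_deg):
    """Eq. (4): M_dyn = 2.33e5 * R * V_obs^2 / sin^2(i) in solar masses,
    with R in kpc and V_obs in km/s."""
    return 2.33e5 * radius_kpc * v_obs_kms**2 / math.sin(math.radians(incl_deg))**2

# NW galaxy: R = 0.8 kpc, V_obs = 236 km/s, i = 27 deg -> ~5e10 Msun
m_dyn_nw = dynamical_mass_msun(0.8, 236.0, 27.0)
```

Note the strong sin⁻²i dependence: halving the inclination roughly quadruples the inferred mass at small i, which is why the PA-inclination degeneracy discussed above dominates the error budget.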
R is determined by fitting a 2D Gaussian component to the mom0 map of the Cycle 6 observations and deconvolving from the beam size. For the NW galaxy, with R = 0.8 kpc and i = 27° ± 2°, the observed rotation velocity is Vobs = 236+23−32 km s⁻¹, which gives a dynamical mass of Mdyn,NW = 5.0+1.1−1.2 × 10¹⁰ M⊙. For the SE galaxy, with R = 0.8 kpc, Vobs = 155+8−10 km s⁻¹, and i = 28° ± 2°, the dynamical mass is Mdyn,SE = 2.0+0.3−0.3 × 10¹⁰ M⊙. The dynamical mass can be compared with other mass estimates (e.g., Tripodi et al. 2022). Without considering the BH mass, the NW galaxy has a stellar mass of M⋆,NW = Mdyn,NW − MH2,NW ∼ 3 × 10¹⁰ M⊙ within a 0.8 kpc radius. This rough estimate likely rejects a large inclination angle (i ≳ 40°); otherwise, the dynamical mass would be even smaller than the molecular gas mass. For the SE galaxy, M⋆,SE ∼ Mdyn,SE − MH2,SE ∼ 1 × 10¹⁰ M⊙ within a 0.8 kpc radius. Within a 0.8 kpc radius, M⋆,NW+SE is thus expected to be ∼4 × 10¹⁰ M⊙, significantly smaller than the value of 9.2 × 10¹⁰ M⊙ returned by the SED fitting. This suggests the existence of extended stellar components outside the CO discs, which can be seen in Fig. C1, where the stellar continuum extends well beyond the CO(6-5). A size comparison between the stellar and molecular gas components in local LIRGs has been made by Bellocchi et al. (2022), who find that the stellar size traced by ionized gas is typically three times that of the molecular gas.

6 AGN PROPERTIES

6.1 BH mass

One crucial quantity in understanding AGN evolution is the BH mass. To estimate the MBH of the RLAGN hosted by the NW galaxy, we start with the scaling relation between central SMBH masses and host-galaxy stellar masses (Kormendy & Ho 2013).
The SED fitting returns M⋆ = (9.2 ± 0.4) × 10¹⁰ M⊙, including the NW, SE, and possible companion galaxies. Since no spatially resolved data allow an estimate for each individual galaxy, we set an upper limit of M⋆,NW ∼ 8 × 10¹⁰ M⊙ for the NW galaxy such that it does not exceed the total stellar mass of the system after accounting for M⋆,SE ∼ 1 × 10¹⁰ M⊙ for the SE galaxy within the CO disc. The lower limit is 3 × 10¹⁰ M⊙, which matches M⋆,NW ∼ Mdyn,NW − MH2,NW estimated from the dynamical mass of ∼5 × 10¹⁰ M⊙ (see §5.4 for details). We note that these assumptions do not lead to significant quantitative changes in the estimates of the Eddington ratio, to be discussed in §7.2.1, which is directly related to the BH mass, or in the successive discussions, because the scaling relations are intrinsically scattered. The relation between the BH and stellar masses can be parameterized as log MBH = α + β · log(M⋆/10¹⁰ M⊙). We adopt the values of α and β parametrically modeled by Georgakakis et al. (2021) for X-ray-selected AGNs up to z = 4, which gives MBH ∼ (0.2−10) × 10⁸ M⊙.

6.2 Jet kinetic power

The total kinetic power of the AGN jets can be estimated from the flux densities of the radio hotspots. We confirmed the existence of the SE and NW hotspots in Paper I; the NW hotspot is situated towards the northwest of the radio core, along the jet axis, as shown in Fig. 1 (see also Fig. 1 in Zhong et al. 2023). The observed flux density of the SE hotspot is larger than that of the NW one by at least a factor of 6, which is argued to be a result of Doppler boosting and/or a jet-ISM interaction in the SE galaxy (Zhong et al. 2023). Therefore, a restoration of the intrinsic flux density is required to estimate Ljet.
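The Ljet estimate can be sketched numerically using the Cavagnolo et al. (2010) relation and the P1.4GHz expression quoted in the following paragraph, together with the two intrinsic single-jet flux densities derived there (31 and 123 mJy). The redshift z ≈ 1.92 adopted for the Dragonfly galaxy and the flat ΛCDM parameters (H0 = 70 km s⁻¹ Mpc⁻¹, Ωm = 0.3) are assumptions not stated in this excerpt.

```python
import math

C_KMS = 2.99792458e5   # speed of light [km/s]
MPC_CM = 3.0857e24     # 1 Mpc in cm

def lum_dist_cm(z, h0=70.0, om=0.3):
    """Luminosity distance for an assumed flat LCDM cosmology (trapezoidal integral)."""
    n = 10000
    dz = z / n
    integral = 0.0
    for k in range(n + 1):
        zi = k * dz
        e = math.sqrt(om * (1.0 + zi) ** 3 + (1.0 - om))
        w = 0.5 if k in (0, n) else 1.0
        integral += w / e * dz
    d_c_mpc = C_KMS / h0 * integral      # comoving distance [Mpc]
    return (1.0 + z) * d_c_mpc * MPC_CM  # D_L [cm]

def jet_power_erg_s(s_int_mjy, z, alpha=-1.2, nu_hz=1.4e9):
    """P_1.4GHz = 4*pi*D_L^2 * (1+z)^-(alpha+1) * S_int * nu, then
    L_jet ~ 5.8e43 * (P_1.4GHz / 1e40)^0.7 erg/s (Cavagnolo et al. 2010)."""
    s_cgs = s_int_mjy * 1e-26  # mJy -> erg s^-1 cm^-2 Hz^-1
    dl = lum_dist_cm(z)
    p14 = 4.0 * math.pi * dl**2 * (1.0 + z) ** (-(alpha + 1.0)) * s_cgs * nu_hz
    return 5.8e43 * (p14 / 1e40) ** 0.7

# Single-jet intrinsic flux densities quoted below: 31 mJy (delta = 1), 123 mJy (delta = 4)
l_low = jet_power_erg_s(31.0, z=1.92)
l_high = jet_power_erg_s(123.0, z=1.92)
```

Because Ljet grows with S_int^0.7, the brighter restored flux density necessarily yields the larger single-jet power; doubling either value for the two-jet total recovers powers of order 10⁴⁶ erg s⁻¹, consistent with the range quoted below.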
In a Doppler-boosted case, the intrinsic (S_int) and observed (S_obs) flux densities are related by δ^(2−α) < S_obs/S_int < δ^(3−α), where α is the spectral index defined by S_ν ∝ ν^α, δ ≡ [γ(1 − β cos θ)]⁻¹ is the Doppler factor, and γ ≡ (1 − β²)^−1/2 is the Lorentz factor. Following the scenario proposed in Paper I – the SE hotspot has its flux density enhanced whereas the NW one is dimmed – the upper limit on the intrinsic flux density corresponds to the case in which the NW hotspot is dimmed by a factor of four, i.e., δ = 4. On the other hand, the imbalanced flux densities may be attributed solely to the NW and SE jets interacting with media of different electron densities. In this case, we can set a lower limit on the intrinsic flux density, i.e., δ = 1. Assuming α = −1.2 between 1.4 and 4.7 GHz, the intrinsic flux density for a single jet at 1.4 GHz is then S_int,1.4 = 123 mJy for δ = 4 and 31 mJy for δ = 1. Adopting the relation between Ljet and the radio power at 1.4 GHz (P_1.4GHz = 4π D_L² (1 + z)^−(α+1) S_int,1.4 ν_1.4GHz; Cavagnolo et al. 2010):

Ljet ≈ 5.8 × 10⁴³ (P_1.4GHz/10⁴⁰)^0.7 erg s⁻¹, (5)

the total jet kinetic power then ranges from ∼2 × 10⁴⁶ erg s⁻¹ for the jet-medium interaction case to ∼4 × 10⁴⁶ erg s⁻¹ for the Doppler-boosted case.

7 DISCUSSION

7.1 The Dragonfly Galaxy as a Merger

7.1.1 A Late-stage Merger

Low-resolution ALMA Cycle 2 observations have revealed a bridge-like structure that connects the NW and SE galaxies, interpreted as tidal debris arising from the gravitational interaction between the two rotating gaseous discs (Emonts et al. 2015b). This speculated structure has been further investigated by Lebowitz et al.
(2023), who argue that this structure is a tidal bridge with MH2 = (3 ± 1) × 10⁹ M⊙, of the kind often observed in major mergers in both simulations and observations (Scoville et al. 2017; Sparre et al. 2022). Combined with our kinematic modelings (see §5), these results support the consensus that the Dragonfly galaxy is a major merger comprising two likely rotating discs (Emonts et al. 2015b). A merger undergoes a pair phase between the first and second pericentric passages and a merging phase after the second pericentric passage and before full coalescence (McElroy et al. 2022). During a passage, gravitational torques act on the gas (Mihos & Hernquist 1996; Barnes & Hernquist 1996), causing the gas to lose angular momentum and flow towards the centers of the merging galaxies, fueling star formation. The tidal bridge seen in Cycle 2 and the tails revealed by the HST/NICMOS F160W image (Pentericci et al. 2001; Emonts et al. 2015b) serve as signatures by which the Dragonfly galaxy can at least be classified as a major merger at stage 2, characterized by obvious tidal bridges and tails (Larson et al. 2016). They are evidence that the Dragonfly galaxy has already undergone the first pericentric passage. Additionally, recent studies of dual and offset AGNs linked to major mergers (e.g., Stemo et al. 2021) have found significantly enhanced AGN activity at bulge separations of 14−11 kpc, attributed to the first pericentric passage, and of 4−2 kpc, attributed to the second passage. Considering also the small nuclear separation of ∼4 kpc and the high IR luminosity (LIR ∼ 2 × 10¹³ L⊙), the Dragonfly galaxy could be a late-stage merger on its way to coalescence. The starburst activities are thus an outcome of the second pericentric passage.
This is supported by the results of our SED fitting adopting an SFH with an additional recent burst, which show no significant difference between the instantaneous SFR and SFR10Myr averaged over 10 Myr, but a comparably low SFR100Myr averaged over 100 Myr, for all three fittings (see Table 4). In this scenario, the intense gas inflow accompanying this passage results in a starburst that contributes at least thirty per cent of the total stellar mass, reflected in the recent burst fraction ≳0.3 in Table 4.

7.1.2 Beads-on-a-string?

In late-stage mergers that truly reach a final coalescence, overlapping gaseous and stellar discs are commonly observed (Di Matteo et al. 2007). Such overlapping of the stellar components of the SE and NW galaxies, traced by rest-frame UV and optical photometry, can be found in the HST/WFPC2 F814W and NICMOS F160W images (Pentericci et al. 2001; Emonts et al. 2015b). As for the gaseous discs, both the SE and NW galaxies have their molecular contents concentrated within sub-kpc-scale regions, showing no recognizable overlap other than the tidal bridge (Lebowitz et al. 2023). In the Cycle 6 observations, the SE galaxy has a significant fraction (≳30 per cent) of its molecular gas originating from the filamentary structure in #Tidal, which is constituted of several clumps, as shown in Fig. 2. In the image of the dust thermal continuum, the NW galaxy has a diffuse and extended structure elongated towards the east. The potential tidal bridge, argued for by Emonts et al. (2015b) and Lebowitz et al. (2023), links these two structures in the SE and NW galaxies. Based on these features, we further discuss the Dragonfly galaxy as a high-redshift analog of a local late-stage merger, the famous Antennae galaxy (NGC 4038/4039), which has a separation of 6.6 kpc and shows a beads-on-a-string morphology in its molecular gas distribution (Elmegreen & Elmegreen 1983; Whitmore et al. 2014).
In a beads-on-a-string scenario, the filamentary structure forms from the tidal effects ascribed to the recent pericentric passage, as suggested for the Antennae galaxy (Espada et al. 2012). Each clump in #Tidal can be treated as a molecular filament (the string) that extends over ~400 pc in the projected plane. Each filament may embed two or more supergiant molecular clouds (SGMCs) at the 100 pc scale (the beads) that host star clusters. Such beads and strings are reproduced in high-resolution hydrodynamic simulations of a major merger of disc galaxies with a total mass of 10^8 M⊙ (Teyssier et al. 2010). They are argued to result from turbulent gas motions and to gain more mass through interactions between the merging pair. Shocks originating from galaxy-galaxy collisions/interactions can raise the turbulence such that the observed line width is determined by the turbulent motions and the kinetic temperature (Tkin) of the gas. In this case, we can estimate the Mach number, a measure of the velocity dispersion of the molecular cloud, following Leroy et al. (2016):

M = √3 σ / (0.38 km s⁻¹ × T_25K^0.5) , (6)

where T_25K is the kinetic temperature of the molecular gas divided by 25 K and σ is equal to FWHM/2.35. Assuming a typical Tkin = 45 K for the warm molecular gas in high-redshift star-forming galaxies (Birkin et al. 2021), the Mach number reaches ~400 with FWHM ~280 km s⁻¹ for the line profile extracted from #Tidal. This high Mach number is in line with the scenario in which collisions between cold clouds can be supersonic (Struck 1999). Such a large value is, however, more likely a result of the unresolved large-scale structure. A more reasonable scenario is that, under the 'beads-on-a-string' speculation, the observed FWHM ~280 km s⁻¹ is a composition of several SGMCs with narrower FWHMs and different velocity centres (e.g., Whitmore et al. 2014).
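As a quick sanity check, the Mach-number arithmetic of Eq. (6) can be reproduced in a few lines. This is a minimal sketch using the FWHM and Tkin values quoted in the text, not code from the paper.

```python
import numpy as np

def mach_number(fwhm_kms, t_kin_K):
    # Eq. (6), following Leroy et al. (2016):
    # M = sqrt(3) * sigma / (0.38 km/s * T_25K^0.5),
    # with sigma = FWHM / 2.35 and T_25K = T_kin / 25 K
    sigma = fwhm_kms / 2.35
    t25 = t_kin_K / 25.0
    return np.sqrt(3.0) * sigma / (0.38 * np.sqrt(t25))

# Line profile of #Tidal (FWHM ~ 280 km/s) at an assumed T_kin = 45 K
m_tidal = mach_number(280.0, 45.0)   # ~ 400

# A single SGMC-like component (FWHM ~ 60 km/s, as in the Antennae overlap regions)
m_sgmc = mach_number(60.0, 45.0)     # ~ 80
```

Both values agree with the ~400 (unresolved #Tidal profile) and ~80 (single-SGMC) Mach numbers quoted in the text.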
Assuming an FWHM ~60 km s⁻¹, a typical value for the SGMCs in the overlap regions of the Antennae galaxy, and Tkin = 45 K, the resultant Mach number is ~80, in agreement with the values found in molecular clouds at the 60 pc scale in mergers and SBGs (Leroy et al. 2016). This scenario requires future observations with similar resolutions and higher sensitivities to confirm. MNRAS 000, 1–19 (2023)

7.2 A radiatively efficient RLAGN

7.2.1 A growing SMBH

Apart from starburst activity, the gas inflow accompanying merger events can also fuel the growth of SMBHs (e.g., Lin et al. 2023). We therefore ask whether this RLAGN can be associated with an active BH in a rapid growth phase, which requires deriving the corresponding Eddington ratio. For the reason discussed in §4, we use the SED fitting results adopting the 44 GHz radio data as a conservative estimate. The RLAGN host galaxy refers to the NW galaxy throughout the discussions in this section. AGNs can be classified into radiatively efficient (high accretion rate) and inefficient (often low accretion rate) populations according to whether their Eddington ratio is above λEdd = 3×10⁻² or not, respectively (Merloni & Heinz 2008). The Eddington ratio is defined by λEdd = L_AGN,bol / L_Edd, where L_Edd = 1.26×10^38 (M_BH / M⊙) erg s⁻¹. Since there are no obvious observational features of AGN activity in the SE galaxy, such as highly excited molecular gas, molecular outflows, or a PSF-dominated morphology in the HST images (e.g., Zhong et al. 2022), the RLAGN residing in the NW galaxy is treated as the primary contributor to the computed L_AGN,bol.
The Eddington ratios are then λEdd ~0.07−4, adopting M_BH ~(0.2−10)×10^8 M⊙ based on the BH mass estimates presented in §6.1. These values suggest that the RLAGN is likely radiatively efficient and that the central SMBH is in a fast growth phase, possibly even entering the super-Eddington regime. We also note that this λEdd is the most conservative estimate; λEdd could be larger by a factor of ~3 if we adopt the fitting including the 1.4 GHz data or the one without radio data. It should also be noted that the scaling relation is broadly scattered for all populations of AGNs, introducing inevitable uncertainties in λEdd. If our target RLAGN were radiatively inefficient (λEdd < 0.03), the SMBH would need at least M_BH (= L_AGN,bol / (1.26×10^38 × λEdd), with λEdd = 0.03) ≥ 7×10^9 M⊙, even adopting L_AGN,bol = 0.9×10^46 erg s⁻¹ as the lower limit. However, Poitevineau et al. (2023) studied RLAGN host galaxy properties at 0.3 < z < 4 and found statistical consistency in the scaling relations between RLAGNs and other AGN populations. Such a colossal mass would make the NW galaxy an extreme outlier from the scaling relations, and only a few such objects have been discovered. We therefore proceed without considering this uttermost possibility and rely on the radiatively efficient scenario for the RLAGN in the NW galaxy.

7.2.2 Dragonfly as an Extremely Radio-loud Galaxy

Defining the ratio of L_jet to L_AGN,bol as the intrinsic radio loudness R_int, we find that log R_int is likely to lie above 0, following the L_jet calculated in §6.2 and L_AGN,bol ~0.9×10^46 erg s⁻¹ listed in Table 4. We further calculate the specific black hole accretion rate, defined as sBHAR = L_AGN,bol / M⋆ in units of erg s⁻¹ M⊙⁻¹ (Ichikawa et al. 2021).
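The quoted range of Eddington ratios follows directly from the definitions above; a minimal sketch (the luminosity and mass range are the values from the text, and 1.26×10^38 erg s⁻¹ per solar mass is the standard Eddington constant):

```python
import numpy as np

L_BOL = 0.9e46                       # conservative L_AGN,bol [erg/s] (44 GHz fit)
M_BH = np.array([0.2, 10.0]) * 1e8   # BH mass range [M_sun] from Sec. 6.1

L_edd = 1.26e38 * M_BH               # Eddington luminosity [erg/s]
lam_edd = L_BOL / L_edd              # Eddington ratio lambda_Edd

# lam_edd runs from ~0.07 (M_BH = 1e9 M_sun) up to ~4 (M_BH = 2e7 M_sun),
# mostly above the radiative-efficiency threshold of 3e-2 (Merloni & Heinz 2008)
```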
The NW galaxy, with M⋆ ~(3−8)×10^10 M⊙, corresponds to sBHAR ~(1−3)×10^35 erg s⁻¹ M⊙⁻¹. The obtained values of R_int and sBHAR are similar to the average values (⟨log R_int⟩ = 0.0 and ⟨log (sBHAR / erg s⁻¹ M⊙⁻¹)⟩ = 35.3) of extremely radio-loud galaxies (ERGs), defined by log(f_1.4GHz,rest / f_g band,rest) > 4 with log L_1.4GHz > 10^24 W Hz⁻¹ (Ichikawa et al. 2021). Therefore, the RLAGN in the NW galaxy is classified as an ERG. At low redshifts, it has been well established that powerful radio galaxies with jets represent the final evolutionary stage of AGNs (Hopkins et al. 2008). These AGNs host SMBHs surrounded by accretion discs with weak gas inflow, a so-called advection-dominated accretion flow (ADAF), which leads to low bolometric luminosity but a hard spectrum. This accretion state is called the low/hard state, and the accretion disc is in the sub-Eddington phase. Based on the findings presented here, however, it appears that the Dragonfly galaxy, and ERGs more generally, represent a distinct population involving AGNs with rapidly growing SMBHs at high redshifts.

7.2.3 Origin of jets from a radiatively efficient SMBH

The estimates for λEdd in §7.2.1 merely represent a statistical expectation, since they are directly tied to M_BH. However, the intrinsic dispersion around the M_BH − M⋆ scaling relation may still push the SMBH residing in the NW galaxy (λEdd ~0.07−4) into the super-Eddington accretion regime (λEdd ≳ 1). In this case, the jets in the Dragonfly galaxy, as well as in ERGs, might be launched from a super-Eddington accretion disc (or slim disc; Abramowicz et al. 1988), which is both optically and geometrically thick. The disc thickness prevents the diffusion of magnetic fields, and such jets are powered by large-scale magnetic fields in the innermost region and by rotating SMBHs, and can have an Eddington-order luminosity.
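The specific BH accretion rate quoted above is likewise straightforward arithmetic; a sketch with the stellar-mass range from the text:

```python
import numpy as np

L_BOL = 0.9e46                         # L_AGN,bol [erg/s]
M_STAR = np.array([3.0, 8.0]) * 1e10   # stellar mass of the NW galaxy [M_sun]

sbhar = L_BOL / M_STAR                 # sBHAR [erg/s per M_sun]
# sbhar ~ (1-3) x 1e35, close to the ERG average of 10^35.3 ~ 2e35
# (Ichikawa et al. 2021)
```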
Simulations of jet-disc connections show that jets can be persistently driven by the Blandford-Znajek mechanism (BZ mechanism; Blandford & Znajek 1977) when the accretion flow is super-Eddington (McKinney et al. 2014; Qian et al. 2018). Super-Eddington accretion is short-lived and episodic, with a timescale ranging over 10^4−10^7 yr depending on the replenishment of cold gas from the host galaxy (Volonteri et al. 2015), because the gas accretion timescale is inversely proportional to the square of the mass accretion rate (Kato et al. 2008). The accretion rate of a super-Eddington thick disc launching powerful jets will decrease as time evolves and finally enter the standard 'thin' disc regime. As shown by the simulation of Ricarte et al. (2023), following this state transition the jet power drops rapidly by more than three orders of magnitude. The jets are then incapable of maintaining the steady state of the synchrotron radiation spectrum. As a result, the jets appear to be switched off and no longer interact with the medium, as speculated in Paper I (Zhong et al. 2023). From this perspective, RLAGNs showing high accretion rates at high redshifts, such as the Dragonfly galaxy, may represent an indispensable stage of galaxy-BH co-evolution, through which BHs rapidly gain their masses. At the same time, the intense star-forming activity leads to a rapid depletion of the molecular gas in the merger as the merging pair gradually coalesces. In the end, the accretion rate drops and the accretion disc finally settles into the low/hard state. Correspondingly, the slowly accreting SMBH then resides in a massive, low-SFR ETG with giant radio lobes, which is the typical picture we observe in the nearby Universe. Another possibility is established upon RLAGNs being analogues of black hole X-ray binaries (BHBs) and microquasars that launch transient jets.
It is widely argued that jets, called continuous jets, are likely to be launched when the accretion disc is geometrically thick (e.g., Narayan et al. 1998; Meier 2001; Sądowski et al. 2014; Sądowski & Narayan 2015; Tchekhovskoy 2015; Blandford et al. 2019), while geometrically thin discs have low probabilities of launching continuous jets, since the magnetic fields diffuse outward faster than they are dragged inward and are thus incapable of powering the jets (Lubow et al. 1994; Guilet & Ogilvie 2012). However, by modelling the jet-disc coupling in BHB systems, Fender et al. (2004) found that transient jets are launched as the accretion disc transitions from the low/hard to the high/soft state. In the low/hard state, the system launches weak continuous jets. Then the disc luminosity increases, accompanied by the emergence of powerful transient jets, and the disc is in an intermediate state. When the accretion disc finally settles into the high/soft state, there are no jets (Fender et al. 2004). The power of transient jets is argued to correlate with the BH spin, but how they are correlated remains a mystery. Observational evidence of such a transitioning phase is limited to BHBs or microquasars, for which the state transition timescale is short enough. State transitions in AGNs or quasars, if scaled by the orders of magnitude of the central BH masses relative to BHBs, have a period of 10^4−10^7 yr (Tchekhovskoy 2015). As studied in Paper I, the RLAGN may have transient or intermittent activities (Zhong et al. 2023). The synchrotron age, i.e., the lifetime of the radio jets, is of order ~10^5 yr. Therefore, the galaxy merging results in the fueling of the BH, and we are likely observing an RLAGN whose accretion has transitioned from a low/hard state to a high/soft one.

7.3 AGN-driven Outflow

7.3.1 Outflow geometry

To better investigate the high-velocity component (NW−1 in Fig.
1), we have generated the mom0 map of the NW galaxy integrated over −500 to −280 km s⁻¹, shown in Fig. 5. This makes it clear that the high-velocity component is likely to extend along a direction perpendicular to the jet axis. This component also shows lopsidedness, extending only towards the southwest. A similar lopsided outflow cone has been observed in the radio galaxy B2 0258+35 and reproduced in simulations (Murthy et al. 2022). In B2 0258+35, the non-detection of the outflow on the opposite side of the monopolar outflow may be attributed to obscuration by the CO disc or to a much larger clumpiness of the ISM. This is likely the case for the NW galaxy, as evidenced by the diffuse dust continuum, which could be a result of galaxy interactions (see §7.1.2), extending towards the east (panel (c) in Fig. 1). Even more interestingly, the Cycle 6 observations reveal weak but extended CO(6-5) line emission along the jet axis towards the northwest. Lebowitz et al. (2023) also found a broad, high-velocity component at this position based on combined Cycle 2 and 4 observations tapered to a low resolution of 0.19''. Additionally, at this position, Lebowitz et al. (2023) detected a faint blob in the 237 GHz dust continuum (see their Fig. 8), which appears as an extended, diffuse emission just above the centroid of the NW galaxy in our Cycle 4 dust continuum image (see panel (a) in Fig. 1). If these observational features in the Cycle 4 and 6 observations are real, they may be indicative of an interaction between the NW jet and the CO disc on a scale of a few tens of pc, which has been proposed to explain the high-velocity component coincident with the faint blob (Lebowitz et al. 2023).
Because of this jet-ISM interaction, the propagation direction of the NW jet is distorted, resulting in a further misalignment in the inclinations of the approaching and receding jets relative to the LOS, as discussed for the Doppler boosting effect in Paper I (Zhong et al. 2023). Numerical simulations show that an inclined jet can strongly couple with the ISM, leading to sub-relativistic and wide-angled outflows (Mukherjee et al. 2018; Talbot et al. 2022). Such a jet-ISM interaction in the innermost region has been proposed for the geometry of the narrow-line region of NGC 1068, where split CO line emission is observed from the torus to the edge of the circumnuclear disc (García-Burillo et al. 2019). Due to the low signal-to-noise ratio (SNR = 4) of the detection in the current Cycle 6 dataset, this circumnuclear-disc-scale jet-ISM interaction remains a hypothesis and requires future observations with higher sensitivities to confirm.

7.3.2 Outflow energetics

We further investigate the outflow properties and energetics by adopting the time-averaged, thin-shell approximation (Rupke et al. 2005) as a conservative estimate, instead of a scenario in which the outflow occupies a spherical volume with a uniform density and has a mass outflow rate larger than the thin-shell value by a factor of 3 (Maiolino et al. 2012; González-Alfonso et al. 2017). The mass outflow rate is given by

Ṁ_out = M_out × v_out / R_out , (7)

where v_out ~450 km s⁻¹ is the maximum outflow velocity, defined as Δv_broad + FWHM/2, with Δv_broad the line-centre shift between the narrow and broad components. We determine the outflow radius by fitting a Gaussian component to the mom0 map (Fig. 5) integrated from −500 to −280 km s⁻¹, by which R_out ~0.5 kpc based on the Cycle 6 observations.
The corresponding lower and upper limits on the mass outflow rate, given the molecular gas mass MH2 ~(1.3−2.8)×10^9 M⊙ (NW−1 in Table 2), are 1200^{+300}_{-300} ≲ Ṁ_out ≲ 2600^{+600}_{-600} M⊙ yr⁻¹. The associated outflow kinetic power is Ė_out = (1/2) Ṁ_out v_out^2 ~(8−16)×10^43 erg s⁻¹, and the momentum boost factor

Ṗ_out / Ṗ_AGN = Ṁ_out × v_out / (L_AGN,bol / c) (8)

ranges within 11−24 for L_AGN,bol ~0.9×10^46 erg s⁻¹ and 3−7 for L_AGN,bol ~3×10^46 erg s⁻¹, where Ṗ_AGN is the AGN radiation momentum rate. We compare these outflow properties with values from the literature in Fig. 6, including a sample of low-redshift AGNs (Fluetsch et al. 2019), four radio-loud quasar host galaxies at z = 1.4−2.3 (Vayner et al. 2021b), a Type 2 QSO at z = 0.085 (Audibert et al. 2023), and AGNs at z ~0.1−3 (Fiore et al. 2017). The Dragonfly galaxy shows a high molecular outflow rate, comparable with the values found in L_AGN,bol-matched AGN host galaxies for both molecular and ionized outflows. This high rate is a combined result of the compactness (~0.5 kpc) and massiveness (~2×10^9 M⊙) of the outflow. However, this mass represents merely ≲10% of the total molecular gas in the NW galaxy. Additionally, the starburst activity of the Dragonfly galaxy consumes the molecular gas reservoir at a rate (SFR ~2000−3000 M⊙ yr⁻¹) comparable with Ṁ_out. These results suggest that, although the AGN provides instantaneous negative feedback in the Dragonfly galaxy, it is the long-term, combined effects of AGN and star-forming activities that are responsible for the final quenching, in line with ideas proposed in recent simulations (e.g., Bollati et al. 2023).
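The outflow energetics of Eqs. (7)-(8) can be verified numerically; this is a minimal sketch with standard CGS constants, where the input masses, velocity, radius, and luminosities are those adopted in the text:

```python
import numpy as np

M_SUN = 1.989e33   # g
KPC = 3.086e21     # cm
YR = 3.156e7       # s
C = 2.998e10       # cm/s

v_out = 450e5                                 # max outflow velocity [cm/s]
R_out = 0.5 * KPC                             # outflow radius [cm]
M_out = np.array([1.3, 2.8]) * 1e9 * M_SUN    # outflow mass range (NW-1) [g]

mdot = M_out * v_out / R_out                  # Eq. (7) [g/s]
mdot_msun_yr = mdot * YR / M_SUN              # ~ 1200 - 2600 M_sun/yr

e_kin = 0.5 * mdot * v_out**2                 # kinetic power, ~ (8-16)e43 erg/s
eps_kin = e_kin / 0.9e46                      # conversion efficiency, ~ 0.008 - 0.02

boost_low_L = mdot * v_out / (0.9e46 / C)     # Eq. (8): ~ 11 - 24
boost_high_L = mdot * v_out / (3.0e46 / C)    # Eq. (8): ~ 3 - 7
```

All of the derived ranges reproduce the values quoted in the text and used in §7.3.3.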
7.3.3 Radiative-mode

Based on the estimates of the outflow energetics, we first discuss the situation in which the outflow stems from radiative-mode feedback. In this scenario, the black hole wind drives the expansion of a hot shocked wind bubble that interacts with the host galaxy ISM surrounding the central BH (King 2003; King & Pounds 2015). If this energy bubble suffers efficient radiative cooling and only transmits the ram pressure to the ISM, the outflow is momentum-driven; if the bubble expands adiabatically, the outflow is energy-driven. The outflow in the NW galaxy has a kinetic conversion efficiency ϵ_kin = Ė_out / L_bol varying within ~0.008−0.02 (see the middle panel of Fig. 6). In a momentum-driven outflow scenario, ϵ_kin depends on the outflow velocity as ϵ_kin = 6×10⁻⁴ (v_out / 1000 km s⁻¹) (Costa et al. 2014), and ϵ_kin would be ~3×10⁻⁴ with v_out ~450 km s⁻¹, more than an order of magnitude smaller than the observed lower limit of ϵ_kin = 0.008. Therefore, the outflow is unlikely to be momentum-driven. The derived ϵ_kin ~0.008−0.02 is consistent with ϵ_kin = 0.005−0.05 predicted by simulations for an energy-conserving outflow (Costa et al. 2014; King & Pounds 2015).

Figure 5. The mom0 map integrated from −500 to −280 km s⁻¹ based on the Cycle 4 (left) and 6 (right) observations, indicated by black contours. Cycle 4 has contour levels of [3, 5, 7, 9] × σ with σ = 15 mJy beam⁻¹ and Cycle 6 has contour levels of [3, 4, 5, 6] × σ with σ = 8 mJy beam⁻¹. The white contours are the mom0 maps integrated from −280 to +250 km s⁻¹, with contour levels of [3, 5, 10, 15, 20] × σ with σ = 18 mJy/beam · km s⁻¹ for Cycle 4 and of [3, 5, 7, 9, 11] × σ with σ = 11 mJy/beam · km s⁻¹ for the Cycle 6 observations. In these two figures, we manually shift the yellow dashed line representing the jet axis by 0.05'' towards the east horizontally, given that the astrometry of the VLA may lead to a systematic offset in the imaging result.

Figure 6. Molecular outflow properties traced by CO(6-5) of the RLAGN in the NW galaxy, shown by the five-pointed stars. Left: mass outflow rate as a function of L_AGN. Middle: outflow kinetic power as a function of L_AGN. Right: momentum boost factor as a function of outflow velocity; the coral stars indicate the calculation adopting L_AGN,bol ~0.9×10^46 erg s⁻¹, and the blue one L_AGN,bol ~3×10^46 erg s⁻¹. In all panels, dots indicate the molecular outflows of low-redshift AGNs and SFGs from Fluetsch et al. (2019), filled (open) diamonds indicate the molecular (ionized) outflows from radio-loud quasar host galaxies at z = 1.4−2.3 from Vayner et al. (2021b), the triangle indicates the upper limit of the molecular outflow in the Type 2 QSO SDSS J1430+1339 at z = 0.085 (Audibert et al. 2023), where L_AGN,bol ~(0.7−6)×10^45 erg s⁻¹ from Venturi et al. (2023) is used, and crosses indicate the ionized outflows from AGNs at z ~0.1−3 (Fiore et al. 2017), recomputed based on Eqs. (7-8). In the middle panel, the dashed lines indicate thermal-to-kinetic conversion efficiencies of 100, 10, and 1 per cent, while the dotted line indicates 0.06 per cent, the upper limit for a momentum-driven molecular outflow with ϵ_kin = 6×10⁻⁴ (v_out / 1000 km s⁻¹) evaluated at v_out ~1000 km s⁻¹ (Wagner et al. 2012; Costa et al. 2014). In the right panel, the dashed and dotted lines indicate the expected momentum boost factors for energy-driven (Ṗ_out/Ṗ_AGN = 20) and momentum-driven (Ṗ_out/Ṗ_AGN = 1) outflows, respectively.

As shown in the right panel of Fig.
6, the derived Ṗ_out/Ṗ_AGN ~11−24, adopting the lower limit of L_AGN,bol ~0.9×10^46 erg s⁻¹, supports the energy-driven scenario with its theoretical prediction of Ṗ_out/Ṗ_AGN ~20 (Zubovas & King 2012). When L_AGN,bol ~3×10^46 erg s⁻¹ is considered, we get Ṗ_out/Ṗ_AGN ~3−7, congruent with the prediction of Ṗ_out/Ṗ_AGN ~1−10 if the outflow is directly driven by radiation pressure with ϵ_kin = 0.001−0.01 (Ishibashi et al. 2018; Costa et al. 2018). This is compatible with our derived ϵ_kin ~0.008−0.02. In conclusion, radiative-mode AGN feedback is sufficient to drive the massive molecular outflow. These results agree with previous studies of multiphase outflows in L_AGN,bol-matched AGNs (e.g., Fiore et al. 2017; Fluetsch et al. 2019) and QSOs at different redshifts (e.g., Bischetti et al. 2019; Vayner et al. 2021a), showing that AGN radiative activities are sufficient to explain the outflows observed in such systems. They are also in line with the molecular outflows found in L_jet- and ~L_AGN,bol-matched radio-loud quasars at z ~2 (Vayner et al. 2021b), which reveal energy-conserving outflows.

7.3.4 Jet-mode

However, as shown in §6.2, L_jet is comparable with and can even exceed L_AGN,bol; thus we must consider a second situation, in which the outflow stems from jet-mode feedback. Radio jets can strongly couple with the host galaxy ISM if the jet has a small angle relative to the disc plane of the host galaxy (Wagner & Bicknell 2011; Wagner et al. 2012; Mukherjee et al. 2016). The dense molecular clouds can be dispersed away from the jet axis through ram pressure, forming an expanding cocoon of molecular gas along the jet path. The jet flow will channel in various directions because of the porosity of the ISM, resulting in a larger-scale impact on the host galaxy.
However, the outflow component in the NW galaxy extends no more than 0.5 kpc in projection. Along with the enhanced velocity dispersion perpendicular to the jet axis, this compactness, as indicated by the simulation performed by Meenakshi et al. (2022b), may represent the early phase of a jet-driven outflow. In many previous studies of outflows in nearby jetted systems with radio lobes, the structures of multiphase outflows traced by integrated intensity maps are often found to be elongated along the jet axis (Oosterloo et al. 2017; Girdhar et al. 2024). On the other hand, the [O III]λ5007 line velocity widths, which are indicative of enhanced velocity dispersion, are found to extend along the direction perpendicular to the jet axis (see Venturi et al. 2021; Meenakshi et al. 2022b and references therein). In the case of our target, the outflowing molecular gas is perpendicular to the jet axis in both the integrated intensity and velocity dispersion maps. A similar observational example is the local type 2 QSO SDSS J1430 (L_AGN,bol ~(0.7−6)×10^45 erg s⁻¹; Venturi et al. 2023), whose radio jets have a kinetic power of L_jet ~(1−3)×10^43 erg s⁻¹. Its molecular gas outflow is argued to emanate from the lateral outflow/turbulence constituted of warm dense gas excited by the cocoon of shocked gas (Audibert et al. 2023). The jet-medium coupling efficiency is only η = Ė_out / L_jet ~0.03 for the Dragonfly galaxy. Considering the inevitable large uncertainties in the empirical L_jet − P_1.4GHz relation, this value represents an upper limit for η. If the 1.4 GHz flux density is corrected for particle acceleration, a direct estimate of the jet power could be an order of magnitude higher (≳2×10^47 erg s⁻¹), resulting in an extremely low coupling efficiency of η ~0.001.
These results suggest that, if the molecular outflow is the dominant phase, the radio jets are weakly coupled with the ISM. None the less, adopting η = 0.01 and L_jet = 2×10^46 erg s⁻¹, the jets provide a total energy injection of ~2×10^57 erg during their active timescale of ~2×10^5 yr, comparable with the outflow kinetic energy of E_out = (1/2) M_out v_out^2 ~3×10^57 erg. Albeit no more than an order-of-magnitude estimate, this further shows that the jets are capable of expelling the cold gas during their short active phase. Based on the recurrent jet scenario proposed in Zhong et al. (2023), the jets may finally remove all molecular gas from the host galaxy through long-term, episodic bursts. On the other hand, both the outflow mass and kinetic power could be governed by the ionized rather than the molecular phase, as has been observed in RL quasars (Vayner et al. 2021b). (Footnote: the line velocity widths are defined as W70 = v85% − v15% or W80 = v90% − v10%, i.e., the difference between the 85th and 15th (or 90th and 10th) percentile velocities of the line profile.) For the target with log(L_AGN,bol / erg s⁻¹) ~47 shown in Fig. 6, the ionized phase has a mass outflow rate larger than that of the molecular phase by almost an order of magnitude and requires more energy to be powered. Similarly, in HzRGs with both L_jet and L_AGN,bol ≳1×10^47 erg s⁻¹ at z ~2 (Nesvadba et al. 2017), the Ė_out of the ionized outflow can be two orders of magnitude larger than that derived for the Dragonfly galaxy. In addition, as studied by Venturi et al. (2023), although SDSS J1430 is weak in molecular outflow, its ionized outflow may require a hundred per cent jet-ISM coupling efficiency in a jet-mode scenario, while the ionized outflow kinetic power and momentum boost factor agree well with the radiative-mode scenario.
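The order-of-magnitude jet energy budget above can be sketched as follows; the coupling efficiency, jet power, active timescale, and representative outflow mass are the values adopted in the text, and the constants are standard CGS values:

```python
M_SUN = 1.989e33   # g
YR = 3.156e7       # s

# Jet energy injected over the active phase: E = eta * L_jet * t_active
eta = 0.01                            # assumed jet-ISM coupling efficiency
L_jet = 2.0e46                        # jet kinetic power [erg/s]
t_active = 2.0e5 * YR                 # synchrotron-age-based active time [s]
E_injected = eta * L_jet * t_active   # ~ 1e57 erg (text quotes ~2e57)

# Outflow kinetic energy: E_out = 0.5 * M_out * v_out^2
v_out = 450e5                         # [cm/s]
M_out = 2.0e9 * M_SUN                 # representative outflow mass [g]
E_out = 0.5 * M_out * v_out**2        # a few x 1e57 erg (text quotes ~3e57)
```

Both quantities come out at a few ×10^57 erg, consistent with the statement that the jets can plausibly expel the cold gas within their short active phase.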
Indeed, simulations show that when L_jet ~ L_AGN,bol, jets can shock-ionize gas more efficiently than AGN radiation, even when the jets are inclined at 70° relative to the galactic plane (Meenakshi et al. 2022a). Therefore, for the Dragonfly galaxy, where L_jet ≳ L_AGN,bol, observations of ionized gas are essential to investigate whether and how, in outflow mass and/or kinetic power, the ionized outflow dominates over the molecular phase. All in all, jet-mode feedback is capable of driving the massive molecular outflow within a short timescale without an additional energy supply from AGN radiation. On the other hand, the radiative activity of this RLAGN is by itself adequate to drive the outflow without the assistance of powerful jets. This makes HzRGs such as the Dragonfly galaxy a unique AGN population in which jet-mode (characterized by the existence of jets) and radiative-mode (characterized by high L_AGN,bol) feedback should co-exist, while their relative importance in shaping host galaxies remains unsettled. Additionally, unlike the RL quasars studied by Vayner et al. (2021b), which show consistent distributions between jet propagation axes and outflows, the outflow in the NW galaxy is perpendicular to the jet axis, differing from the common picture. Future statistical studies of multiphase outflows in radiatively efficient RLAGNs are required to explore the connections between L_jet, L_AGN,bol, jet ages, and outflow kinematics.

8 CONCLUSION

In this work, we have studied the molecular gas traced by CO(6-5) line emission in the Dragonfly galaxy, a hyper-luminous infrared, starburst major merger at z = 1.92, using ALMA and VLA observations. We have studied the gas kinematics using 3DBarolo and performed SED fitting covering optical-to-radio wavelengths using CIGALE.
Our major findings are as follows:

(i) The SED fitting using optical-to-44 GHz photometric data gives an SFR ~3100 M⊙ yr⁻¹ averaged over 10 Myr and an SFR ~600 M⊙ yr⁻¹ averaged over 100 Myr, implying a recent starburst that contributes thirty per cent of the total stellar mass. The stellar mass of the Dragonfly galaxy computed by the new fitting is ~9×10^10 M⊙, one-fifth of the value computed using an SED template for early-type galaxies in previous studies (De Breuck et al. 2010). The fitting including the radio core flux density returns an AGN fraction of 0.1. If the flux density at 1.4 GHz, originating from the radio hotspots and lobes, is used in the SED fitting, the AGN fraction becomes 0.3. The corresponding AGN bolometric luminosity is L_AGN,bol ~2.9×10^46 erg s⁻¹, which is larger than in the cases without the 1.4 GHz data, where L_AGN,bol ~0.9×10^46 erg s⁻¹, but consistent with the fitting without radio data.

(ii) The bulk of the molecular gas in both the SE and NW galaxies is concentrated within a radius of ~0.8 kpc. The NW galaxy shows a simple, circular distribution of the bulk of its molecular gas, while some extended, diffuse molecular gas exists due to tidal effects. For the SE galaxy, the molecular gas primarily comprises two structures, named #Main and #Tidal (Fig. 2). One shows a double-peaked line profile characteristic of a rotating structure. The other is likely associated with tidal effects and has its line centre significantly blueshifted by ~200 km s⁻¹. In a 'beads-on-a-string' speculation, this structure may embed several supergiant molecular clouds with a Mach number of ~80, in line with the values derived for cold clouds at the 60 pc scale in mergers and starburst galaxies.
(iii) The gas kinematic modelling of the SE galaxy finds a position angle of 130°, consistent with the orientation of the extension of the main structure, #Main. The SE galaxy has a rotation velocity to velocity dispersion ratio of at least V_rot/σ_V ≥ 5, indicative of being rotation-dominated. The molecular gas in the NW galaxy is highly disturbed because of the AGN activities and possible tidal effects, leading to a loosely constrained inclination angle of ~28°, in line with the value of ~24° directly derived from the integrated moment map based on the aspect ratio. The NW galaxy has V_rot/σ_V ≳ 3 and is therefore likely to be rotating, as argued by Emonts et al. (2015b) and Lebowitz et al. (2023).

(iv) Using the M_BH − M⋆ scaling relations for X-ray-selected AGNs at high redshifts, we find M_BH ~(0.2−10)×10^8 M⊙, corresponding to an Eddington ratio of λEdd ~0.07−4. If the SMBH is accreting at a super-Eddington rate, the powerful jets might be launched from an optically- and geometrically-thick super-Eddington accretion disc. Given also the young age and possible transient and intermittent activities of this RLAGN, the jets might be transient and launched from an accretion disc in a state transition, either from low/hard to high/soft or from a super-Eddington disc to the standard thin disc.

(v) The molecular outflow has an outflow rate of ~1200^{+300}_{-300} − 2600^{+600}_{-600} M⊙ yr⁻¹, comparable with powerful QSO systems. If radiative-mode AGN feedback dominates the outflow, the outflow kinetic power of ~(8−16)×10^43 erg s⁻¹ and the momentum boost factor of ~11−24 adopting L_AGN,bol ~0.9×10^46 erg s⁻¹ (or 3−7 adopting L_AGN,bol ~3×10^46 erg s⁻¹) suggest that the outflow could be energy-conserving or driven by direct radiation pressure on dust.
If the outflow is a result of the powerful jets, i.e., the jet-mode feedback, the jet-ISM coupling efficiency is \u22720.03 and could be as low as \u22720.001. During its short active timescale, the jets can provide a total energy injection of \u223c2 \u00d7 1057 erg with a jet-ISM coupling efficiency of 0.01, which is comparable with the outflow kinetic energy of \u223c3 \u00d7 1057 erg. Based on the available observations, which mode dominates the outflow remains unclear. (vi) The outflowing molecular gas in the NW galaxy is compact, with a radius of \u223c500 pc in projection, and appears to be perpendicular to the jet axis and lopsided towards the blueshifted side of the NW galaxy. Although it is very massive (MH2 \u223c(1.1 \u22122.3) \u00d7 109 M\u2299), this mass is only \u223c10% of the total cold gas deposited in the NW galaxy, suggesting that the AGN feedback is negative but not responsible for quenching in the short term. However, considering the possible intermittent nature of the powerful jets, the jet-mode feedback may quench the host galaxy through long-term, episodic bursts of jets that will eventually clear out the cold gas. ACKNOWLEDGMENTS We thank the reviewer for the helpful comments that improved the manuscript. We thank the staff of the ALMA and NRAO helpdesks for their kind help in data calibration and reduction. This paper makes use of the following VLA data: VLA/15A-316 and VLA/17B-444. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2016.1.01417.S and ADS/JAO.ALMA#2018.1.00293.S. Data analysis was carried out on the Multi-wavelength Data Analysis System operated by the Astronomy Data Center (ADC), National Astronomical Observatory of Japan. This research was supported by a grant from the Hayakawa Satio Fund awarded by the Astronomical Society of Japan. AKI, YS, and YF are supported by NAOJ ALMA Scientific Research Grant Numbers 2020-16B.
ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. DATA AVAILABILITY The ALMA data used in this work are publicly available at https://almascience.nao.ac.jp/aq/. The VLA data used in this work are publicly available at https://data.nrao.edu." + }, + { + "url": "http://arxiv.org/abs/2305.03979v1", + "title": "Revisiting the Dragonfly Galaxy I. High-resolution ALMA and VLA Observations of the Radio Hotspots in a Hyper-luminous Infrared Galaxy at $z=1.92$", + "abstract": "Radio-loud active galactic nuclei (RLAGNs) are rare among AGN populations.\nLacking high-resolution and high-frequency observations, their structure and\nevolution stages are not well understood at high redshifts. In this work, we\nreport ALMA 237 GHz continuum observation at $0.023''$ resolution and VLA 44\nGHz continuum observation at $0.08''$ resolution of the radio continuum\nemission from a high-redshift radio and hyper-luminous infrared galaxy at\n$z=1.92$. The new observations confirm the South-East (SE) and North-West (NW)\nhotspots identified by previous low-resolution VLA observations at 4.7 and 8.2\nGHz and identify a radio core undetected in all previous observations. The SE\nhotspot has a higher flux density than the NW one does by a factor of 6,\nsuggesting that there can be a Doppler boosting effect in the SE one. In this\nscenario, we estimate the advance speed of the jet head, ranging from\n$\\sim$0.1c -- 0.3c, which yields a mildly relativistic case. 
The projected\nlinear distance between the two hotspots is $\\sim13$ kpc, yielding a linear\nsize ($\\leq20$ kpc) of a Compact-Steep-Spectrum (CSS) source. Combined with new\n\\black{high-frequency ($\\nu_\\text{obs}\\geq44$ GHz) and archived low-frequency\nobservations ($\\nu_\\text{obs}\\leq8.2$ GHz)}, we find that injection spectra of\nboth NW and SE hotspots can be fitted with a continuous injection (CI) model.\nBased on the CI model, the synchrotron ages of NW and SE hotspots have an order\nof $10^5$ yr, consistent with the order of magnitude $10^3 - 10^5$ yr observed\nin CSS sources associated with radio AGNs at an early evolution stage. The CI\nmodel also favors the scenario in which the double hotspots have experienced a\nquiescent phase, suggesting that this RLAGN may have transient or intermittent\nactivities.", + "authors": "Yuxing Zhong, Akio K. Inoue, Yuma Sugahara, Kana Morokuma-Matsui, Shinya Komugi, Hiroyuki Kaneko, Yoshinobu Fudamoto", + "published": "2023-05-06", + "updated": "2023-05-06", + "primary_cat": "astro-ph.GA", + "cats": [ + "astro-ph.GA" + ], + "main_content": "INTRODUCTION Radio galaxies hosting radio-loud active galactic nuclei (RLAGNs, L6GHz \u22731023.2 erg s\u22121; Kellermann et al. 2016) have their radio emissions dominated by magnetobremsstrahlung, i.e., synchrotron radiation attributed to the central AGN. RLAGNs are rare amongst all AGN populations (15\u221220%; Kellermann et al. 1989; Williams et al. 2018). In the local Universe, RLAGNs are found to preferentially reside in early-type galaxies with high stellar masses, low star formation rates, and radiatively ine\ufb03cient AGNs (Best et al. 2005; Heckman & Best 2014; G\u00fcrkan et al. 2015). At higher redshifts, there is an increasing number of RLAGNs related to active supermassive black holes (SMBHs) showing high accretion rates, and these AGNs are intrinsically radiatively e\ufb03cient (Williams et al. 2018; Hardcastle & Croston 2020). 
This di\ufb00erence provides a good channel to access a more complete picture of the AGN evolution itself, as well as the co-evolution with its host galaxy. To select radio AGNs hosted by high redshift radio galaxies (HzRGs), a feature called Ultra-Steep-Spectrum (USS), with a spectral index \u03b1 \u2264\u22121.3 (S \u03bd \u221d\u03bd\u03b1) in the low-frequency radio spectral energy distribution (SED), has been proven an e\ufb00ective selection technique (De Breuck et al. 2000). Such a steeper slope at high redshifts is considered a result of two possible causes. The \ufb01rst one is the Malmquist e\ufb00ect, due to which the cut-o\ufb00 frequency may migrate towards lower frequencies since it is anti-correlated with the radio luminosity. The second possibility is that HzRGs may reside in denser environments because of the evolution of cosmic neutral hydrogen density, leading to a steeper initial power-law of the energy distribution (Miley & De Breuck 2008). Recent studies have emphasized the importance of cosmic microwave background (CMB) radiation (e.g., Saxena et al. 2017). The inverse Compton scattering that involves CMB photons and synchrotron-radiating electrons could be the dominant energy loss and result in a rapid decrease in the energy of synchrotron-emitting relativistic electrons. Other than USS, many HzRGs show Peaked-Spectrum (PS) or Compact-Steep-Spectrum (CSS) spectra (O\u2019Dea & Baum 1997). \u00a9 2023 The Authors. arXiv:2305.03979v1 [astro-ph.GA] 6 May 2023. PS sources always have a frequency at which the \ufb02ux density peaks. This frequency is known as the peaked or turnover frequency, below which the medium is optically-thick for the synchrotron-emitting source due to the synchrotron self-absorption (SSA) and/or free-free absorption (FFA), showing an inverse spectral index (Condon & Ransom 2016; Snell et al. 2019). These sources are considered the youngest populations of radio AGNs.
The PS sources may evolve into CSS sources with evolving jets, characterized by a typical projected linear size \u226420 kpc and \u03b1 \u2264\u22120.5 at low frequencies (O\u2019Dea & Saikia 2021). Many high redshift CSS sources have been observed at low frequencies over the decades, while high-frequency observations are still lacking (Pentericci et al. 2000; Patil et al. 2020; Sotnikova et al. 2019). The high-frequency behavior of the synchrotron radiation is crucial to investigate the evolution of the radio AGN, especially when the radio jets launched by the AGN have interacted with their ambient medium. Radio hotspots, the working surfaces where the radio jet has interacted with the medium, are essential to answer this question. The synchrotron age of the radio hotspot is determined by the break frequency above which the spectrum steepens, together with the magnetic \ufb01eld in equipartition. This break frequency can be estimated by \ufb01tting the observed radio spectrum, covering a range of frequencies from low to high, to an injection spectrum of the accelerated synchrotron-emitting electrons. In this work, we have studied a HzRG \u2013 MRC 0152-209, named the Dragon\ufb02y galaxy \u2013 which has a star formation rate (SFR) of \u223c3000 M\u2299yr\u22121 and M\u22c6\u223c5 \u00d7 1011 M\u2299 (Emonts et al. 2015b,a). This galaxy consists of three components: the North-West (NW) galaxy, the South-East (SE) galaxy, and a Companion (C) with massive molecular gas (see panels (a), (NW), and (SE) in Fig. 1). The low-frequency observations at 4.7 and 8.2 GHz using the Very Large Array (VLA) were studied by Pentericci et al. (2000), who identi\ufb01ed two radio hotspots, the NW and SE ones, while the radio core remained undetected (see green contours in panel (a) in Fig. 1).
New VLA Band Q observations (Project ID: VLA/15A-316 and VLA/17B-444, PI: Bjorn Emonts), combined with high-angular resolution Atacama Large Millimeter/submillimeter Array (ALMA) Cycle 4 (Project ID: 2016.1.01417.S, PI: Bjorn Emonts) and Cycle 6 (Project ID: 2018.1.00293.S, PI: Bjorn Emonts) observations in Band 6, allow us to have a deeper investigation into the radio properties, including the lifetime of the hotspots and the geometry of the radio jets. Throughout this paper, we assume a \u039bCDM cosmology with \u2126m = 0.309, \u2126\u039b = 0.691, and H0 = 67.7 km s\u22121 Mpc\u22121 (Planck Collaboration et al. 2016). Based on these assumptions, the luminosity distance of the Dragon\ufb02y galaxy is \u223c15200 Mpc, and 1\u2032\u2032 corresponds to a projected physical scale of 8.62 kpc (the calculation has made use of Wright 2006). 2 OBSERVATIONS In this section, we brie\ufb02y describe the observational data used in this work, including observation setups, calibration procedures, and imaging strategies. The observation dates, frequencies, bands, restored beam sizes, and root-mean-squares (RMS, \u03c3) are summarized in Table 1. 2.1 VLA Band Q The VLA observations were conducted in B-, BnA-, and A-con\ufb01gurations. All the observations were centred at 44 GHz with an e\ufb00ective bandwidth of 7.5 GHz. The total on-source time is 42 min for the B- and BnA-con\ufb01guration observations, and 178 min for the A-con\ufb01guration. For all the observations, calibrator J2253+1608 was chosen to calibrate the bandpass response, calibrator J0204-1701, which is separated from the Dragon\ufb02y galaxy by 4.4\u25e6, was chosen to calibrate the complex gain, including amplitude and phase, between the scans of the science target, and 3C147 was chosen to calibrate the \ufb02ux density scale.
The B-con\ufb01guration observation was calibrated by running the VLA calibration script that is provided by the National Radio Astronomy Observatory (NRAO) on the Common Astronomy Software Applications (CASA) package version 6.2.1 embedding the VLA calibration pipeline (McMullin et al. 2007; THE CASA TEAM et al. 2022). The BnA- and A-con\ufb01guration observations were calibrated by requesting pipeline calibrations through the NRAO Science Helpdesk. All the imaging procedures were performed on CASA version 6.4.0. For all the observations, using the \u2019hogbom\u2019 deconvolution algorithm, we imaged the data choosing the \u2019briggs\u2019 weighting with a robustness parameter of +0.5, and put a mask on the strongest signal to avoid artefacts. Due to the low signal-to-noise ratio of each spectral window, self-calibration could not be applied to any dataset, leaving low-level sidelobe contamination on the clean images. 2.2 ALMA Band 6 The ALMA Cycle 4 observations were performed on 9 and 17 August 2017 for 1.2 hours of on-source time with 45 antennas and baselines up to \u223c3.6 km. The ALMA Cycle 6 observations were conducted on 23 June 2019 for 2.2 hours of on-source time with 48 antennas and baselines up to 11.5 km. For both observations, four spectral windows were con\ufb01gured to cover two 4 GHz bands, one of which includes 235.83 \u2212239.58 GHz to observe the redshifted CO(6-5) line emission (\u03bdrest \u2248691.47 GHz) and another includes 251.20\u2212255.95 GHz such that only continuum is observed. The data calibrations were performed via the ALMA pipeline that is included in CASA version 4.7.2 for Cycle 4 and 5.4.0 for Cycle 6, respectively, by running the calibration script supplied with the data by the North American ALMA Science Center (NAASC). The main objective of these observations is to image the CO(6-5) line emission, which is beyond the scope of this paper.
An analysis of CO(6-5) line emission based on Cycle 4 data is discussed in Lebowitz et al. (2023), and another analysis of CO(6-5) using combined data from Cycles 4 and 6 will be presented in a forthcoming paper (Paper II; Zhong et al. in prep.). We adopt a uniform method for the Cycle 4 and 6 observations to image the continuum in this work. Prior to imaging the continuum, we \ufb02agged the channels with CO(6-5) line emission, as well as pseudo-lines due to the severe atmospheric absorption. Next, we created a dirty image without any cleaning to calculate the root-mean-square noise (\u03c3) under the \u2019briggs\u2019 weighting with a robustness parameter of +0.5. Then, we cleaned the image non-interactively by setting 3\u03c3 as the stop threshold of the cleaning. MNRAS 000, 1\u201312 (2023). Table 1. Summary of the Observations. Observation | Date | Frequency | Band | Beam size | \u03c3 (mJy beam\u22121): VLA A-con\ufb01guration | 5 and 13 August 2015 | 44 GHz | Band Q | 0.\u2032\u203207 \u00d7 0.\u2032\u203204 | 0.01; VLA BnA-con\ufb01guration | 29 May 2015 | 44 GHz | Band Q | 0.\u2032\u203214 \u00d7 0.\u2032\u203208 | 0.018; VLA B-con\ufb01guration | 30 December 2017 | 44 GHz | Band Q | 0.\u2032\u203226 \u00d7 0.\u2032\u203213 | 0.024; ALMA Cycle 4 | 9 and 17 August 2017 | 237 GHz | Band 6 | 0.\u2032\u203211 \u00d7 0.\u2032\u203208 | 0.011; ALMA Cycle 6 | 23 June 2019 | 237 GHz | Band 6 | 0.\u2032\u2032026 \u00d7 0.\u2032\u2032023 | 0.0065. 2.3 Alignment The imaging of the ALMA Cycle 4 and 6 observations and the VLA 44 GHz observations in three con\ufb01gurations is the direct \u2019tclean\u2019 product of the measurement sets that are calibrated from raw \ufb01les using the calibration pipeline provided by NRAO or under the assistance of the NRAO and EA ALMA sta\ufb00.
No postprocessing, including self-calibration and manipulations on the WCS of the imaging results, was performed. Then the contours from di\ufb00erent observations are overlaid on false-color images based on their WCS using the Cube Analysis and Rendering Tool for Astronomy (CARTA, Comrie et al. 2021). The VLA 8.2 GHz observation is a reproduction of Pentericci et al. (2000). It was self-calibrated, and thus its WCS cannot be trusted (see the Post-processing section at https://science.nrao.edu/facilities/vla/docs/manuals/oss2015B/performance/positional-accuracy). We, therefore, take the peak pixel of the SE hotspot observed in the VLA A-con\ufb01guration at 44 GHz as the reference to shift the WCS of the VLA 8.2 GHz image such that they match. Taking the SE peak pixel as the reference, the ALMA Cycle 6 image has an o\ufb00set of 0.\u2032\u2032025 from the VLA 44 GHz BnA-con\ufb01guration image along RA, and an o\ufb00set of 0.\u2032\u2032011 from the VLA 44 GHz A-con\ufb01guration image along RA. O\ufb00sets along the DEC are completely negligible. Although these o\ufb00sets can be larger than the typical position uncertainty of one-tenth of the beam size, since there can be low-level systematic uncertainties, the current alignments are su\ufb03cient to suggest that the features originate from the same radio source. 3 RESULTS 3.1 Spatial Distribution In Fig. 1 we show the imaging results of the VLA 44 GHz and ALMA Band 6 observations, supplemented with the previous VLA observation centred at 8.2 GHz (Pentericci et al. 2000). Observed by the VLA at 8.2 GHz (green contours in Fig. 1), the radio emission shows clear double components that potentially correspond to the hotspots where the radio jet launched from the AGN interacts with the ambient medium. The double hotspots are also visible in the VLA observation at 4.7 GHz (Pentericci et al. 2000). Thanks to the high \ufb02ux density, the SE hotspot is clearly detected in all VLA 44 GHz observations, while the NW hotspot falls below a 5\u03c3 detection threshold in the B-con\ufb01guration.
South of the SE hotspot, there is a di\ufb00use emission region that may correspond to the expanding radio lobe, which is clearly shown in the B-con\ufb01guration imaging. We draw a yellow dashed line in Fig. 1 to indicate the direction of the propagation of the radio jets under the assumption that the radio hotspots originate from the well-collimated jets. In addition to the double hotspots, another radio emission is detected at a position very close to the NW galaxy in the BnA-con\ufb01guration (see panel (NW) in Fig. 1), while it remains undetected in the VLA B- and A-con\ufb01gurations. Although slightly spatially o\ufb00set by \u223c0.\u2032\u203206, which is insigni\ufb01cant compared with the beam size of 0.\u2032\u203214 \u00d7 0.\u2032\u203208, this radio source coincides well with the location of the NW galaxy in which the AGN resides. This source also lies on the dashed line that links the double hotspots, suggesting that it can be the radio core \u2013 the launch point of the bipolar radio jets \u2013 of the RLAGN. One serendipitous and interesting \ufb01nding is a sub-component adjacent to the SE galaxy through the ALMA Band 6 observation (see panel (SE) in Fig. 1). This sub-component coincides perfectly in location with the SE hotspot identi\ufb01ed in the VLA observations, which is supposed to be dominated by synchrotron radiation arising from the in situ acceleration due to a jet-ISM interaction. However, the rest-frame frequency of the ALMA observation is \u223c700 GHz, which corresponds to \u223c400 \u00b5m, entering the far-infrared (FIR) regime. A mixture of di\ufb00erent emission mechanisms, including dust thermal emission, synchrotron radiation, and free-free emission, can contribute to the observed \ufb02ux density of this sub-component.
The SFR of the entire system is \u223c3000 M\u2299yr\u22121, adopting a power-law black body radiation model (Drouart et al. 2014). The integrated \ufb02ux density of this sub-component is about 10 per cent of the dust thermal emission from the SE and NW galaxies at 237 GHz. Then, if this sub-component is dominated by jet-induced star formation, it may have an SFR of \u223c300 M\u2299yr\u22121, which corresponds to a synchrotron \ufb02ux density of 3 \u00b5Jy and a free-free \ufb02ux density of 10 \u00b5Jy. Therefore, synchrotron and free-free radiation attributed to the SFR contribute negligible contamination to the sub-component at 237 GHz. However, since jet-induced star formation cannot be con\ufb01rmed and its associated dust thermal emission cannot be evaluated in the current phase, we treat this sub-component as dominated by synchrotron radiation related to AGN activities throughout this work. 3.2 Spectral Energy Distribution (SED) Although the Dragon\ufb02y galaxy has been observed at low frequencies over decades, the low angular resolutions leave the radio emissions unresolved. Hence, there is no available structural information on the radio emissions in observations at \u03bdobs \u22641.4 GHz. We list the available archived data at di\ufb00erent frequencies in Table 2 and plot the integrated \ufb02ux density against frequency in Fig. 2. The speci\ufb01c luminosity has exceeded 1028 W Hz\u22121 at \u03bdobs \u22641.4 GHz, making this object an unambiguous radio-bright AGN (Yuan & Wang 2012; Kellermann et al. 2016; Hardcastle & Croston 2020), though this radio luminosity can be smaller than that of the extreme populations found at z > 5 by two orders of magnitude (van Breugel et al. 1999; Saxena et al. 2018). Based on the imaging results, we divide the radio emissions into three components: the radio core and the SE and NW hotspots; the total is the sum of these components. Figure 1.
(a) ALMA Cycle 6 Band 6 continuum image (\u03c3 = 6.5 \u00b5Jy beam\u22121) with contour levels [3, 5, 10, ..., 30, 35]\u00d7\u03c3. The yellow contours indicate the ALMA Cycle 4 Band 6 continuum of the Companion component with [3, 5] \u00d7 \u03c3 levels, where \u03c3 = 10.7 \u00b5Jy beam\u22121. The white contours indicate the VLA BnA-con\ufb01guration observation, as explained in panel (b). The green contours indicate the VLA 8.2 GHz observation (Pentericci et al. 2000). (b) VLA BnA-con\ufb01guration image (\u03c3 = 0.018 mJy beam\u22121). The white contours start at 4\u03c3 and increase by a factor of 1.5 with a step of 3\u03c3. The contour levels are given by [start, start + step, start + (1 + factor) \u2217 step, start + (1 + factor + factor^2) \u2217 step, ...]. (c) VLA B-con\ufb01guration image (\u03c3 = 0.024 mJy beam\u22121). The white contours start at 5\u03c3 and increase by a factor of 1.5. (d) VLA A-con\ufb01guration image (\u03c3 = 0.01 mJy beam\u22121). The white contours start at 5\u03c3 and increase by a factor of 1.5 with a step of 10\u03c3. (NW) Zoom in on the NW galaxy. The white contours indicate the VLA BnA-con\ufb01guration. (SE) Zoom in on the SE galaxy. The white and green contours indicate the VLA BnA- and A-con\ufb01gurations, respectively. In all images, the black contours indicate the ALMA Cycle 6 Band 6 continuum, the yellow contours indicate the ALMA Cycle 4 Band 6 continuum imaging of the Companion component, and the yellow dashed line indicates the direction of the propagation of the radio jet. The beam size of each false-color image is shown at the lower left by the white ellipse. The new observations have identi\ufb01ed a radio core that in general overlaps with the NW galaxy. The Companion component lies 10 kpc southeast of the SE galaxy in projection. The Companion and SE and NW galaxies are nearly in alignment with the propagation direction of the collimated radio jets.
The ALMA Cycle 6 observation reveals a sub-component, adjacent to the SE galaxy, that perfectly coincides with the SE hotspot identi\ufb01ed by the VLA 4.7 and 8.2 GHz observations. All observations at \u22641.4 GHz have no resolved components; thus the integrated \ufb02ux densities of these observations are simply treated as the sum of all three components. As shown in Fig. 2, the radio spectrum of the sum of the di\ufb00erent components shows no apparent turnover point, below which the spectral index would become positive due to SSA and/or FFA and the medium would be optically-thick to the radio emission (Condon & Ransom 2016). The globally negative spectral index suggests that the medium is likely to remain optically thin to the radio emission. However, we do observe a gradual \ufb02attening of the slope towards lower frequencies, by which we cannot exclude the possibility that there can be a broad peak around 74 MHz at the observed frequency because of SSA/FFA. At 4.7 and 8.2 GHz, we treated the peak \ufb02ux densities of the NW and SE hotspots as the integrated \ufb02ux densities, i.e., S peak = S total, since both NW and SE hotspots are marginally resolved (Prandoni et al. 2001; Bondi et al. 2008). At \u22654.7 GHz, the SE hotspot dominates the observed \ufb02ux densities, indicating that the SE hotspot may be Doppler boosted (see \u00a75.1). The slope of the SE hotspot becomes slightly shallower at 237 GHz than at 44 GHz (\u03b1 = \u22121.87 \u00b1 0.07 \u2192 \u22121.64 \u00b1 0.06) because of a possible mixture of various emission mechanisms other than pure synchrotron radiation at 237 GHz. Otherwise, since contaminations from the radio lobes at 8.2 GHz may result in an overestimation of the spectral index at 44 GHz (see \u00a75.2 and \u00a75.4), this slight \ufb02attening is not robust. The NW hotspot falls below the 3\u03c3 detection limit in the ALMA Cycle 6 observation and has its slope steepened at 4.7 and 8.2 GHz, but it becomes shallower at 44 GHz (see column (4) in Table 2).
Since the radio core is only observed at 44 GHz, we here estimate its contribution to the \ufb02ux densities at other frequencies and investigate its impact on the global shape of the total radio SED. Table 2. Flux Measurements of the Dragon\ufb02y galaxy Frequency Component S \u03bd Spectral Index Lradio P\u03bd0 Aperture Ref. (MHz) (mJy) (W Hz\u22121) (erg s\u22121) (1) (2) (3) (4) (5) (6) (7) (8) 74 Total 3100 \u00b1 620 (2.9 \u00b1 0.6) \u00d7 1028a (2.2 \u00b1 0.4) \u00d7 1043 80\u2032\u2032 1 147 Total 2620 \u00b1 10 \u22120.25 \u00b1 0.29 (3.2 \u00b1 1.0) \u00d7 1028 (4.7 \u00b1 1.5) \u00d7 1043 25\u2032\u2032 2 365 Total 1580 \u00b1 30 \u22120.56 \u00b1 0.02 (2.7 \u00b1 0.1) \u00d7 1028 (9.9 \u00b1 0.3) \u00d7 1043 6\u2032\u2032 1 1.4 \u00d7 103 Total 453 \u00b1 91 \u22120.93 \u00b1 0.15 (1.2 \u00b1 0.3) \u00d7 1028 (1.6 \u00b1 0.4) \u00d7 1044 45\u2032\u2032 1 4.7 \u00d7 103 Total 115 \u00b1 12 \u22121.13 \u00b1 0.19 (3.6 \u00b1 0.8) \u00d7 1027 (1.7 \u00b1 0.4) \u00d7 1044 3 8.2 \u00d7 103 Total 47 \u00b1 4.7 \u22121.61 \u00b1 0.25 (2.5 \u00b1 0.6) \u00d7 1027 (2.0 \u00b1 0.4) \u00d7 1044 3 44 \u00d7 103 Total 2.42 \u00b1 0.32 \u22121.77 \u00b1 0.08 (1.5 \u00b1 0.2) \u00d7 1026 (6.7 \u00b1 1.1) \u00d7 1043 4 4.7 \u00d7 103 NW 7.2 \u00b1 0.72 1. \u2032\u203216 \u00d7 0. \u2032\u203244 3 8.2 \u00d7 103 NW 2.8 \u00b1 0.28 \u22121.7 \u00b1 0.25 1.6 \u00d7 1026 1.3 \u00d7 1043 0. \u2032\u203272 \u00d7 0. \u2032\u203224 3 44 \u00d7 103 NW 0.29 \u00b1 0.07 \u22121.35 \u00b1 0.16 (1.2 \u00b1 0.3) \u00d7 1025 (5.1 \u00b1 1.5) \u00d7 1042 0. \u2032\u20324 \u00d7 0. \u2032\u20322 4 4.7 \u00d7 103 SE 95.9 \u00b1 9.6 1. \u2032\u203216 \u00d7 0. \u2032\u203244 3 8.2 \u00d7 103 SE 44 \u00b1 4.4 \u22121.4 \u00b1 0.25 1.9 \u00d7 1027 1.5 \u00d7 1044 0. \u2032\u203272 \u00d7 0.
\u2032\u203224 3 44 \u00d7 103 SE 1.88 \u00b1 0.09 \u22121.87 \u00b1 0.07 (1.3 \u00b1 0.1) \u00d7 1026 (5.9 \u00b1 0.3) \u00d7 1043 0. \u2032\u20325 \u00d7 0. \u2032\u20324 4 237 \u00d7 103 SE 0.12 \u00b1 0.01 \u22121.64 \u00b1 0.06 (6.6 \u00b1 0.7) \u00d7 1024 (1.6 \u00b1 0.2) \u00d7 1043 0. \u2032\u203207 \u00d7 0. \u2032\u203212 4 44 \u00d7 103 Core 0.23 \u00b1 0.06 0. \u2032\u20324 \u00d7 0. \u2032\u20323 4 Column (1): frequency. Column (2): the measured component; Total = NW + SE + core. See the text for details. Column (3): integrated \ufb02ux densities over a certain region. For \ufb02ux measurements from references 1, 2, and 3 in Column (8), the radio emissions are unresolved, such that the integration region can be considered as the beam size. Column (4): the spectral index (positive convention, S \u03bd \u221d\u03bd\u03b1) between two observed frequencies, one in this line and the other in the line just above. Column (5): the spectral luminosity L\u03bd = 4\u03c0D_L^2 S_obs/(1 + z)^(1+\u03b1), where S_obs is the observed integrated \ufb02ux density and D_L is the luminosity distance. Column (6): the radio power P\u03bd0 = 4\u03c0D_L^2 (1 + z)^(\u2212\u03b1\u22121) S_\u03bd0 \u03bd0 (Cavagnolo et al. 2010). Column (7): the aperture size used to integrate the \ufb02ux density. Column (8): references for the \ufb02ux measurements. 1: Vollmer et al. (2010); 2: de Gasperin et al. (2018); 3: Pentericci et al. (2000); 4: this work. a The radio luminosity is calculated assuming a spectral index \u03b1 = 0. At 44 GHz, the radio core contributes less than 10% of the sum of the di\ufb00erent components; thus the absence of a core has negligible in\ufb02uence on the total radio SED. As we will see in \u00a75.2, the radio core contributes less than 1 per cent of the observed \ufb02ux densities at 4.7 and 8.2 GHz. Therefore, the contamination from the radio core cannot alter the shape of the observed spectral slope.
4 THE CURVED SED 4.1 Compact-Steep-Source To better investigate the spectrum as a curve, we \ufb01t the observed \ufb02ux densities at frequencies \u22648.2 GHz by a curved power-law model given by (Callingham et al. 2017) S_\u03bd = a \u03bd^\u03b1 e^(q(ln \u03bd)^2), (1) where S_\u03bd is the \ufb02ux density at the observed frequency \u03bd in GHz, \u03b1 is the spectral index, a is a factor that determines the amplitude of the non-thermal spectrum, and q is the curvature parameter that optimizes the spectral curvature, by which the peaked frequency (or turnover frequency) is de\ufb01ned as \u03bd_peak = e^(\u2212\u03b1/(2q)). The best-\ufb01tting result for the sum of the integrated \ufb02ux densities is shown in Fig. 2 by the green dashed line with q = \u22120.175 \u00b1 0.01 and \u03b1 = \u22121.05 \u00b1 0.03, showing no evidence of a signi\ufb01cant curvature (|q| \u22650.2, Du\ufb00y & Blundell 2012). The corresponding peaked frequency is \u03bd_peak = e^(\u2212\u03b1/(2q)) \u223c50 \u221260 MHz. The curved power-law deviates strongly from the observational data above 44 GHz due to the energy loss, which will be studied in \u00a74.2. Since the projected linear separation of the two hotspots is only \u223c1.\u2032\u20325 (\u223c13 kpc at z = 1.92), the Dragon\ufb02y galaxy can be classi\ufb01ed as a CSS source (O\u2019Dea & Saikia 2021). By \ufb01tting a single Gaussian component to the image plane, the size of the SE hotspot deconvolved from the beam is (0.\u2032\u2032054 \u00b1 0.\u2032\u2032015) \u00d7 (0.\u2032\u2032033 \u00b1 0.\u2032\u203201), Figure 2. Integrated \ufb02ux density against frequency of the Dragon\ufb02y galaxy based on the measurements given in Table 2. The horizontal axis shows the rest-frame frequency and the observed frequency is shown by the data points. The green dashed line indicates the best \ufb01t of the observed S Total adopting a curved power-law model.
The NW hotspot is not detected at the 3\u03c3 level in ALMA Cycle 6 and we set 3\u03c3 as an upper limit on its \ufb02ux density to proceed with the discussion in \u00a75.1. The blue solid line indicates a spectral index \u03b1 = \u22121.3, and a spectrum steeper than this is classi\ufb01ed as a USS source (De Breuck et al. 2000). Because of the smallness of the uncertainties, some error bars are not clearly visible in the \ufb01gure but can be found in Table 2. corresponding to a radius of r \u223c0.47 kpc at z = 1.92, which yields a typical hotspot size of CSS sources (Jeyakumar & Saikia 2000), providing additional support for a CSS classi\ufb01cation. Using an empirical relation that correlates the projected angular size (l in kpc) and \u03bd_peak (in GHz) for CSS sources (O\u2019Dea & Baum 1997), log \u03bd_peak = \u22120.21(\u00b10.05) \u2212 0.65(\u00b10.05) \u00d7 log l, (2) the estimated turnover frequency is 116 \u00b1 20 MHz for l \u223c13 kpc. This estimate is close to the value given by the curved power-law but may be underestimated, since the observation may not detect the full extent of the synchrotron radiation due to the di\ufb00use emission attributed to the radio lobes (see panel (c) in Fig. 1 and the discussion in \u00a75.2). CSS sources can be young AGNs with evolving radio jets or can be old populations that have their radio jets con\ufb01ned by the dense ISM of their host galaxies (O\u2019Dea & Saikia 2021). A young CSS source may merely have intermittent or transient activities as well, incapable of forming large structures (O\u2019Dea & Saikia 2021). However, con\ufb01ned by the dense ISM of the AGN host galaxy, the radio jet can strongly couple with the surrounding material, leading the jet to slow down and lose kinetic energy.
When the jet is not sufficiently powerful to overcome the confining pressure of the ISM, such strong coupling may result in the disruption of the jet and the formation of radio lobes (Mukherjee et al. 2016; Bambic et al. 2023). In the Dragonfly galaxy, since the jet has clearly left its host galaxy (the NW galaxy) and interacted with the ambient medium to create the hotspots, confinement by the host galaxy is disfavoured. Therefore, the RLAGN residing in the Dragonfly galaxy is likely a CSS source associated with an AGN at an early evolutionary stage (Dicken et al. 2012). A CSS source launching powerful radio jets can escape from the confining medium more quickly than its low-power counterparts and may evolve into a Fanaroff-Riley II (FRII) radio galaxy (O'Dea 1998; Mukherjee et al. 2016; Callingham et al. 2017). Nonetheless, we must also consider the possibility of transient or intermittent radio AGN activity, which requires further investigation of the synchrotron age.

4.2 Synchrotron Ageing

At higher frequencies there is no cut-off, but a steepening of the slope at ν_obs ≥ 4.7 GHz with α < −1.3. Such a steep slope in HzRGs is often found at ~1.4 GHz in the radio SEDs of USS sources (De Breuck et al. 2000). One possible interpretation of this feature is that the stronger cosmic microwave background (CMB) radiation at higher redshifts causes stronger inverse-Compton (IC) losses, with the magnetic field strength equivalent to the CMB (B_CMB) dominating over the source magnetic field (Saxena et al. 2017; van Weeren et al. 2019). However, for the SE hotspot, adopting a lower limit on the equipartition magnetic field B_eq, Pentericci et al. (2000) give B_eq = 160 µG, much higher than the expected B_CMB (see §4.2.1).
Therefore, for the Dragonfly galaxy, the dominant energy loss at high frequency should instead be explained by synchrotron ageing: since the electron energy-loss rate −dE/dt is proportional to E^2, higher-energy electrons deplete their energy faster.

4.2.1 Synchrotron Fitting

The injection of new synchrotron-emitting electrons can be described by a power-law spectrum referred to as the injection spectrum, a well-established tool for investigating high-frequency energy losses. The injection spectrum is the ensemble of all relativistic electron populations, with an injection spectral index α_inj that describes the slope of the synchrotron radiation spectrum. The CI (continuous injection) model describes a source with continuous replenishment of new relativistic particles over its lifetime (Pacholczyk 1970). The JP (Jaffe-Perola) and KP (Kardashev-Pacholczyk) models describe a source that undergoes a single burst of particle acceleration and then ages rapidly, with and without continuous isotropization of the electron pitch angle, respectively (van Weeren et al. 2019; Jaffe & Perola 1973; Pacholczyk 1970; Kardashev 1962).

We fit the radio SEDs of the NW and SE hotspots using synchrofit to estimate the break frequency ν_br, above which the synchrotron radiation ages, and the injection index s, which reflects the initial power law of the electron energy distribution (Turner et al. 2018; Turner 2018). The injection spectral index can be calculated as α_inj = (1 − s)/2. The fitting also returns an estimate of the remnant fraction T, defined as T = t_off/τ, where t_off is the time the source has spent in an inactive phase and τ = t_off + t_s is the total source age, with t_s, the synchrotron age, being the duration of the continuous injection. The synchrotron age is determined by ν_br together with the magnetic field strength B. The uncertainties are estimated using 1000 Monte Carlo iterations. The number of increments used to sample the allowed ranges of the free parameters ν_br, s, and T is set to the default. Three iterations were performed, such that the free parameters fitted in one iteration are fed to the next as initial guesses.

Prior to the fitting, we assume that the integrated flux densities in unresolved observations (ν_obs ≤ 1.4 GHz) emanate only from the NW and SE hotspots, with negligible contamination from the NW and SE galaxies and the radio core, for the reasons stated in §5.2. To divide the flux densities of the unresolved components between the SE and NW hotspots, we adopt a flux ratio following the values found in the 4.7 and 8.2 GHz observations, i.e. S_SE : S_NW = 15 : 1. Further fittings considering different flux ratios and contamination from the radio lobes are presented in Appendix A.

Figure 3. Fitting results for the SE hotspot using synchrofit. The vertical axis is in arbitrary units; all data points are the same but shifted along the vertical axis for display purposes. The shaded regions enclose 2σ of the modelled spectrum and χ²_red denotes the reduced-χ² statistic. Top: the black and blue lines indicate the fits without and with the ALMA data, respectively. Middle: fits with the remnant fraction T fixed to 0. The dashed and solid lines indicate the fits without and with the ALMA data, respectively. Without a remnant fraction T, the fit is almost a straight line and cannot reproduce the curvature of the observed SED. Bottom: fits with the ALMA data and a fixed injection spectral index α_inj, chosen to lie within the range expected for a DSA scenario.

Synchrotron Radiation from the Dragonfly Galaxy
We present only the fitting results using observational data at ≥147 MHz, since a shallower slope towards 74 MHz due to ionization losses may degrade the fits, although including or omitting the 74 MHz data does not affect our conclusions (see Appendix A). Previous studies have shown that the injection spectrum of a radio hotspot is better described by the CI model, which is natural considering that the radio jet continuously accelerates the electrons in situ (Carilli et al. 1991; Murgia et al. 2011; Maccagni et al. 2020). Additionally, the reduced χ² of the JP and KP models is larger than that of the CI model by at least a factor of three. Furthermore, we performed fits with T fixed to 0, assuming no quiescent phase during the jet-medium interaction. However, as clearly shown in Fig. 3, with or without the ALMA 237 GHz data, the observed curvature of the SED of the SE hotspot cannot be reproduced in the T = 0 cases, and this scenario can be rejected. Hence, we discuss only the fitting results of the CI model with non-zero T.

4.2.2 Fitting Results

The synchrotron age, i.e. the time-scale over which the continuous acceleration lasts, is calculated as (Turner et al. 2018; Maccagni et al. 2020)

t_s = 1610 B_eq^{1/2} / (B_eq^2 + B_CMB^2) × [ν_br (1 + z)]^{−1/2},   (3)

where B_CMB = 3.25(1 + z)^2 µG is the magnetic field strength equivalent to the CMB. The equipartition field B_eq can be calculated using (Miley 1980; Maccagni et al. 2020)

B_eq = 9.6 × 10^{−12} [P_1.4GHz (1 + k) / V]^{2/7},   (4)

where P_1.4GHz is the radio power at 1.4 GHz in units of W Hz^{−1}, k is the proton-to-electron energy ratio in the hotspot, set to 1, and V is the hotspot volume in kpc^3. We corrected the observed flux density to its rest frame, accounting for the Doppler boosting effect (see §5.1 and §5.3), to calculate P_1.4GHz.
The hotspot is assumed to occupy an elliptical cylindrical volume, whose cross-section corresponds to the beam size and whose height is the transverse size of the hotspot. The estimated B_eq, taking a Doppler boosting factor δ = 2−4 (see the discussion in §5.3), is 175−213 µG for the SE hotspot and 214−261 µG for the NW hotspot. These estimates lie within B_eq = 160−700 µG given by Pentericci et al. (2000) for a sample of HzRGs that includes the Dragonfly galaxy.

The NW hotspot is fitted without the 237 GHz data, and the fit gives ν_br = 980 ± 180 MHz, α_inj = −0.59 ± 0.01, and T = 0.08 ± 0.02. Allowing for a quiescent phase during the continuous acceleration, this fit gives t_s ~ 2−3 × 10^5 yr. Without the ALMA 237 GHz data, the SE hotspot has ν_br = 760 ± 140 MHz, α_inj = −0.51 ± 0.01, T = 0.13 ± 0.03, and t_s ~ 3−4 × 10^5 yr. With the ALMA 237 GHz data, i.e. under the assumption that the sub-component observed at 237 GHz is synchrotron-dominated, the SE hotspot instead has ν_br = 2.95 ± 0.5 GHz, α_inj = −0.82 ± 0.02, T = 0.08 ± 0.02, and t_s ~ 2 × 10^5 yr. Although the model flux density reproduces the observed value at 237 GHz well in this case, it slightly exceeds the observation at 44 GHz (blue curve in the top panel of Fig. 3). Naturally, one would expect a further steepening of the spectral slope from both synchrotron ageing and IC losses, so that at 237 GHz the synchrotron radiation should have a much lower flux density than the currently observed value. Considering that the 237 GHz data lie in the rest-frame FIR regime, a reasonable explanation is that the sub-component coinciding with the SE hotspot observed in ALMA Cycle 6 is contaminated by thermal bremsstrahlung and/or the Rayleigh-Jeans tail of thermal dust emission.
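As a consistency check, the quoted ages follow from equation (3) with the fitted break frequencies and equipartition fields above; a sketch assuming B in µG, ν_br in GHz, and t_s in Myr (the helper name is ours):

```python
def t_syn_myr(B_eq_uG, nu_br_GHz, z):
    """Synchrotron age (Myr) from eq. (3); B in microgauss, nu_br in GHz."""
    B_cmb = 3.25 * (1.0 + z) ** 2          # CMB-equivalent field, microgauss
    return (1610.0 * B_eq_uG ** 0.5 / (B_eq_uG ** 2 + B_cmb ** 2)
            * (nu_br_GHz * (1.0 + z)) ** -0.5)

z = 1.92
# NW hotspot: nu_br = 0.98 GHz, B_eq = 214-261 microgauss
print([round(t_syn_myr(B, 0.98, z), 2) for B in (214, 261)])  # ~0.22-0.30 Myr, i.e. 2-3e5 yr
# SE hotspot without ALMA data: nu_br = 0.76 GHz, B_eq = 175-213 microgauss
print([round(t_syn_myr(B, 0.76, z), 2) for B in (175, 213)])  # ~0.34-0.46 Myr, i.e. 3-4e5 yr
```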
Another explanation, as we will see in §5.2 and §5.4, is that undetected diffuse radio lobes have contaminated the low-resolution observations, such that the observed flux densities of the SE hotspot at ≤8.2 GHz used for the fitting are overestimated. In that case, at 44 GHz the true spectral slope should be shallower than the current one and steepen towards 237 GHz.

First-order Fermi acceleration, or diffusive shock acceleration (DSA), is one important mechanism for particle acceleration in jet-medium interactions (Carilli et al. 1991; van Weeren et al. 2019). In the non-relativistic shock-acceleration case, the injection index can be expressed through the compression ratio χ of downstream to upstream proper particle densities (Dermer & Menon 2009). For a strong shock with velocity u ≫ c_s, where c_s is the sound speed, in a uniform ISM, χ = 4, corresponding to α_inj = −0.5. For the SE hotspot, without the 237 GHz data, the calculated α_inj = −0.51 agrees well with this scenario, while the α_inj = −0.59 found for the NW hotspot may indicate that the shock dynamics are slightly altered by the relativistic particles (Carilli et al. 1991). When the ALMA 237 GHz data are considered, the SE hotspot has α_inj = −0.82, significantly different from the values expected in the case above. Based on the current observations, we cannot decompose the 237 GHz data into the different emission mechanisms that may contaminate the observed flux density of the pure synchrotron radiation. As a result, the modelled flux density at 237 GHz and its corresponding α_inj = −0.82 may be overestimated. We further fitted the spectrum with fixed α_inj and found that as α_inj increases, T and ν_br decrease (see Fig. 3).
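The strong-shock numbers can be checked from the DSA relations α_inj = (1 − s)/2 and s = (2 + χ)/(χ − 1); a minimal sketch (the helper names are ours):

```python
def injection_index(chi):
    """alpha_inj for a compression ratio chi (non-relativistic DSA)."""
    s = (2.0 + chi) / (chi - 1.0)   # power-law index of the electron energy distribution
    return (1.0 - s) / 2.0

def compression_ratio(alpha_inj):
    """Invert the relations above to recover chi from alpha_inj."""
    s = 1.0 - 2.0 * alpha_inj
    return (s + 2.0) / (s - 1.0)

print(injection_index(4.0))         # -0.5 for a strong shock in a uniform ISM
print(compression_ratio(-0.82))     # ~2.8: weaker compression than chi = 4
```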
Therefore, although there may be a mixture of different emission mechanisms in this case, we treat the current fit including the 237 GHz data, with α_inj = −0.82, as giving a lower limit on the synchrotron age. Moreover, if α_inj = −0.82 is robust, it may reflect significant environmental differences between the SE and NW hotspots. The SE hotspot might reside in a denser environment, such that the fluid velocity of the upstream relativistic particles v_up becomes smaller, resulting in an increase in s, which is given by s = (2 + χ)/(χ − 1) = (v_up + 2v_down)/(v_up − v_down), where v_down is the downstream fluid velocity (Dermer & Menon 2009). Accordingly, the injection spectral index steepens.

The synchrotron ages of both the NW and SE hotspots are of order 10^5 yr, lying within the range 10^3−10^5 yr observed in typical CSS sources, which are argued to be associated with AGNs at an early evolutionary stage (O'Dea 1998; Murgia 2003). Since a remnant fraction is essential to fit the observed curved SED with a CI model, the RLAGN hosted by the Dragonfly galaxy is not only young but may also have intermittent or transient activity. This may indicate that the central SMBH is in a transition phase towards an active state, as the gas inflow arising from the merging events fuels the SMBH's growth.

5 DISCUSSIONS

In this section, we first discuss the Doppler boosting effect, motivated by the highly imbalanced flux densities of the SE and NW hotspots. However, the flux ratio of SE to NW changes from S_SE/S_NW ≈ 15 : 1 at 4.7 and 8.2 GHz to S_SE/S_NW ≈ 6 : 1 at 44 GHz, suggesting contamination in the low-resolution observations and/or intrinsically different flux densities for the SE and NW hotspots; we then discuss the possible scenarios.
Next, considering the proximity of the sub-component to the SE galaxy, we discuss the energy input of the radio jet into the ISM. Finally, we discuss the caveats to the results and conclusions of this work.

5.1 Doppler Boosting?

The radio core and the SE and NW hotspots are well aligned in a straight line, yet the flux densities of the SE and NW hotspots are highly imbalanced. This feature may result from significantly different environments around the SE and NW hotspots, as implied by the fitting results for the SE hotspot when the ALMA 237 GHz data are included. If this is the case, adopting equation (9) of Araudo et al. (2019), the upstream energy density of relativistic electrons at the SE hotspot can be larger than that at the NW one by an order of magnitude. However, a detailed investigation of the environments is impossible with the current data set. Here we consider the Doppler boosting effect as another scenario to explain the flux-density imbalance. In this case, the SE hotspot is associated with the approaching jet and the NW hotspot, as its counterpart, with the receding jet. The observed flux density of the SE hotspot is therefore Doppler boosted by relativistic beaming, while the NW hotspot is correspondingly dimmed (Condon & Ransom 2016). On the other hand, the global structure is not perfectly symmetric, since the projected distance between hotspot and radio core is l_SE = 0″.472 ~ 4 kpc for the SE and l_NW = 0″.678 ~ 5.8 kpc for the NW hotspot. This can be explained by a misalignment angle between the two jets, because an initially inclined jet may interact with the gas in the circumnuclear disk, resulting in a further bending of the jet (Mukherjee et al. 2018; García-Burillo et al. 2019; Talbot et al. 2022).
It can also be explained by the SE jet having encountered the ISM of the SE galaxy, resulting in a shorter jet-travelled distance than for the NW jet, which may instead interact with the intergalactic medium. The enhancement or dimming of the intrinsic flux density depends on the spectral index and on the Doppler factor δ (see the discussion in §5.3), which is determined by the inclination angle and the advance speed of the jet head. With observations of the NW and SE hotspots at multiple frequencies, the flux-density ratio of the approaching and receding components constrains the parameter space of the jet geometry. Denoting the flux-density ratio by R, one expects (Kappes et al. 2019; An et al. 2012)

R = S_adv/S_rec = S_SE/S_NW = [(1 + β cos θ)/(1 − β cos θ)]^{3−α},   (5)

where θ is the inclination angle of the approaching jet relative to the LOS, β = v/c is the advance speed of the jet head in units of the speed of light, and α is the spectral index, defined as S_ν ∝ ν^α. Since the ratio takes two values depending on frequency, R ~ 15 at ν_obs = 4.7 and 8.2 GHz and R ~ 6 at ν_obs = 44 and 237 GHz (see footnote 3), we find two ranges for β: ~0.2c−0.3c for R ~ 15 and ~0.1c−0.2c for R ~ 6. Both ranges are in agreement with powerful FRII radio galaxies, which have β varying within 0.1c−0.5c, with mildly relativistic jets and giant radio lobes (Bondi et al. 2008; O'Dea et al. 2009). Therefore, a mildly relativistic scenario is valid regardless of the exact value of the flux-density ratio.

5.2 Bi-modal Flux Ratios

Since the low-resolution VLA 4.7 and 8.2 GHz observations show much larger flux ratios (S_SE/S_NW ~ 15 : 1) than the high-resolution, high-frequency VLA 44 GHz observation (S_SE/S_NW ~ 6 : 1), we consider several possibilities to explain this difference.
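Equation (5) can be inverted for β at a given inclination; a sketch assuming cos θ = 1 (jet along the LOS) and the α = −1.13 adopted at 4.7 GHz (the helper name is ours), which lands near the upper ends of the quoted β ranges:

```python
def beta_from_ratio(R, alpha, cos_theta=1.0):
    """Invert eq. (5): R = [(1 + b*ct)/(1 - b*ct)]**(3 - alpha) for b."""
    x = R ** (1.0 / (3.0 - alpha))      # x = (1 + b*ct)/(1 - b*ct)
    return (x - 1.0) / ((x + 1.0) * cos_theta)

print(round(beta_from_ratio(15.0, -1.13), 2))  # ~0.32 for R ~ 15
print(round(beta_from_ratio(6.0, -1.13), 2))   # ~0.21 for R ~ 6
```

Smaller cos θ (a more inclined jet) requires a larger β for the same ratio, which is why the text quotes ranges rather than single values.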
First, the low-frequency radio emission of the SE hotspot could be contaminated by emission from the SE galaxy, located ~1 kpc away: synchrotron radiation from cosmic rays accelerated in supernova remnants, and free-free emission originating from H II regions. Although the Dragonfly galaxy is a starburst galaxy, its observed radio luminosities at 4.7 and 8.2 GHz are well beyond the star-formation-powered synchrotron regime (Kellermann et al. 2016). Following Condon (1992), we estimated the thermal and non-thermal flux densities associated with star formation at 8.2 GHz, with spectral indices α_non-thermal = −0.75 and α_thermal = −0.1, and SFR ~ 3000 M☉ yr^{−1} (Paper II; Zhong et al. in prep.). The resulting S_ν,non-thermal ~ 0.07 and S_ν,thermal ~ 0.11 mJy are smaller than the observed flux density by more than two orders of magnitude, so this scenario can be confidently rejected.

The radio core is the second possibility, but it can be immediately rejected because core components generally show flatter spectral indices than radio hotspots and lobes (Kellermann et al. 1994; Zajaček et al. 2019). Assuming α_core = −0.5, the observed flux density of 0.23 mJy at 44 GHz corresponds to ~0.5 mJy at 8.2 GHz and ~0.7 mJy at 4.7 GHz. This explains why the core is identified only in the high-sensitivity VLA 44 GHz observation but remains undetected in the VLA 4.7 and 8.2 GHz observations.

The third possibility is reflected in the spectral index between 8.2 and 44 GHz. Adopting α_NW = −1.35 instead of α_SE = −1.87, the expected flux density of the SE hotspot at 44 GHz would be ~4.5 mJy, and the corresponding flux ratio becomes S_SE/S_NW ≈ 15. Hence, it is clear that the steeper slope of the SE hotspot drives the change in flux ratios.
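The core extrapolation in the second possibility is a plain power-law scaling S_ν ∝ ν^α; a minimal sketch (the helper name is ours):

```python
def extrapolate(S_ref_mJy, nu_ref_GHz, nu_GHz, alpha):
    """Scale a flux density along S_nu proportional to nu**alpha."""
    return S_ref_mJy * (nu_GHz / nu_ref_GHz) ** alpha

# Radio core: 0.23 mJy observed at 44 GHz, assumed alpha_core = -0.5
print(round(extrapolate(0.23, 44.0, 8.2, -0.5), 2))  # ~0.53 mJy at 8.2 GHz
print(round(extrapolate(0.23, 44.0, 4.7, -0.5), 2))  # ~0.70 mJy at 4.7 GHz
```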
A plausible explanation for this steepening is that, in the VLA 4.7 and 8.2 GHz observations, the unresolved SE component contains flux density not only from the SE hotspot but also from the expanding radio lobes associated with it. At 44 GHz, the diffuse radio lobes are not fully imaged, leading to a significant decrease in the observed flux density attributed to the SE hotspot. This is supported by the low-resolution Australia Telescope Compact Array continuum observation at 40 GHz, which has a total flux density S_ν = 5.1 ± 1.5 mJy (Lebowitz et al. 2023; Emonts et al. 2011), about twice the sum of the flux densities of the radio core and the SE and NW hotspots observed with the VLA at 44 GHz.

Last but not least, the intrinsic flux densities of the SE and NW hotspots may simply differ. In this scenario, the more rapid steepening can be explained by the SE hotspot having lost more energy at higher frequencies than the NW one. As discussed in §4.2.2, this might be a consequence of the SE jet having interacted with a medium of higher density than the NW one, resulting in a steeper initial power-law energy distribution of the injected electrons. An additional explanation is that, owing to the 3D geometry of the bipolar jets, the jet-travelled distance has a projection onto the LOS. The total projection is then l_LOS,total = l_LOS,NW + l_LOS,SE, which is an additional light-travel distance for the synchrotron radiation from the NW hotspot.

Footnote 3: The values of α for each hotspot at 8.2, 44, and 237 GHz follow those listed in Table 2, and α = −1.13 is assumed for both hotspots observed at 4.7 GHz. The NW hotspot is not detected in ALMA Band 6; we assumed an upper limit for the NW component based on the 3σ level to estimate the flux ratio at 237 GHz, giving S_NW,237GHz = 0.0194 mJy.
Accordingly, the NW hotspot is intrinsically younger than the SE one by l_LOS,total/c yr at the time of observation, and thus has a higher magnetic field strength than the SE hotspot.

5.3 The Possible Jet-ISM Interaction

A serendipitous discovery of the ALMA Band 6 continuum imaging is the sub-component adjacent to the SE galaxy, coincident with the location of the SE hotspot. The radio continuum to the east of the SE galaxy is consistent with the distribution of the CO(6-5) line-emitting regions. However, to the west of the SE galaxy, no line emission has been detected around the SE hotspot; only the continuum is detected, as the sub-component, in the high-resolution observations, while extended CO(6-5) line emission is found at this location in the low-resolution observation (Lebowitz et al. 2023; Paper II; Zhong et al. in prep.). Additionally, there is only a faint rest-frame UV continuum at this location. Furthermore, Emonts et al. (2015b) found a giant molecular cloud lying close to the propagation direction of the SE jet, offset from the jet propagation axis by ~0″.3 and indicated as the Companion in panel (a) of Fig. 1. Considering also the small offset (~1 kpc) of the sub-component from the centroid of the SE galaxy, this sub-component may indicate an interaction between the jet and the ISM, through which the radio jet has driven a massive molecular outflow.

To examine this scenario, we estimate the spatial separation of the SE and NW galaxies, which is ~7 kpc in the projected plane. Assuming the Companion is molecular gas that may reach an outflow velocity of ~1000 km s^{−1} (Wagner et al. 2012), it would take ~7 Myr to travel such a distance. However, the upper limit on the lifetime of the SE hotspot is of order 10^5 yr, suggesting that the bulk of the massive molecular gas cannot be an outflow driven by the jet-ISM interaction.
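The ~7 Myr crossing time is simple kinematics over the quoted separation and outflow speed; a minimal sketch (the helper name is ours; constants rounded):

```python
KM_PER_KPC = 3.0857e16   # kilometres per kiloparsec
SEC_PER_YR = 3.156e7     # seconds per year

def travel_time_myr(dist_kpc, v_km_s):
    """Time in Myr for gas moving at v_km_s to cross dist_kpc."""
    return dist_kpc * KM_PER_KPC / v_km_s / SEC_PER_YR / 1e6

# ~7 kpc at ~1000 km/s: ~7 Myr, vs a hotspot lifetime of order 1e5 yr
print(round(travel_time_myr(7.0, 1000.0), 1))
```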
Nonetheless, we can still estimate the intrinsic flux density of the SE hotspot through δ^{2+α} < S/S_0 < δ^{3+α} (Condon & Ransom 2016), where S is the observed flux density, S_0 is the intrinsic flux density, δ ≡ [γ(1 − β cos θ)]^{−1} is the Doppler factor, and γ ≡ (1 − β^2)^{−1/2} is the Lorentz factor. Using equation (5), we find the observed flux amplified by δ ~ 2 and δ ~ 3−4 for S_SE/S_NW ~ 6 and S_SE/S_NW ~ 15, respectively. Conservatively, at ν_obs ~ 1.4 GHz, assuming that the SE hotspot contributes ~90% of the total observed flux density and is Doppler boosted by a factor of 2 as an upper limit, the corresponding intrinsic radio power is P_1.4GHz = (7.3 ± 1.9) × 10^43 erg s^{−1}. Adopting an empirical correlation between the radio power at 1.4 GHz and the kinetic power of the jet P_jet (Cavagnolo et al. 2010), the approaching jet associated with the SE hotspot has a total jet kinetic power P_jet = (7 ± 9) × 10^46 erg s^{−1}. The total energy injection required for the ambient gas to be accelerated by the radio jet can be estimated via an energy-conservation argument (Nesvadba et al. 2006):

M_out = 2 × 10^11 E_out,60 v_esc,500^{−2} M☉,   (6)

where E_out,60 is the energy of the outflow in units of 10^60 erg and v_esc,500 is the outflow velocity in units of 500 km s^{−1}. Even taking a lower limit on the SE jet lifetime, the total energy injection is E ~ 1 × 10^59 erg, suggesting that the jet kinetic energy can drive a significant molecular gas outflow of M_out ~ 7 × 10^9 M☉, even at an outflow velocity of ~1000 km s^{−1} for the most powerful jets (Wagner et al. 2012).
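Plugging the quoted numbers into equation (6) gives the order of magnitude of the outflow mass; a minimal sketch (the helper name is ours; E ~ 1 × 10^59 erg and v ~ 1000 km/s as in the text):

```python
def outflow_mass_msun(E_erg, v_km_s):
    """Eq. (6): M_out = 2e11 * E_60 * v_500**-2 in solar masses."""
    E60 = E_erg / 1e60        # outflow energy in units of 1e60 erg
    v500 = v_km_s / 500.0     # outflow velocity in units of 500 km/s
    return 2e11 * E60 / v500 ** 2

# E ~ 1e59 erg at v ~ 1000 km/s gives ~5e9 Msun, the same order of
# magnitude as the ~7e9 Msun quoted in the text (which reflects the
# rounding of the energy estimate).
print(f"{outflow_mass_msun(1e59, 1000.0):.1e}")
```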
This may explain why there is only diffuse CO(6-5) line emission to the west of the SE galaxy: the molecular gas has been blown away by the jet-ISM interaction through the jet's kinetic power, and the gas has been excited to higher excitation levels through the jet's thermal power. Jet-driven large-scale molecular outflows (M_H2 ~ 10^{9−10} M☉), which indicate the removal of a significant fraction of the ISM from the AGN host galaxy, have been observed in some massive galaxies (M⋆ ~ 10^11 M☉) at high redshifts (e.g., Nesvadba et al. 2006, 2007, 2017; Emonts et al. 2023). However, the jet-ISM interaction in the Dragonfly galaxy discussed here occurs between the jet and the merging pair (the SE galaxy) of its host galaxy (the NW galaxy), making it a unique system at high redshifts without parallels. A detailed investigation of the AGN feedback and its associated outflow, traced by CO(6-5) line emission through the interaction of the jet with the ISM of the NW galaxy, will be presented in Paper II.

5.4 Caveats

The interpretation of the remnant fraction requires care. It may simply indicate that the jet has been switched off and the jet-medium interaction had ceased at the time of observation, which is the general understanding of a non-zero remnant fraction in the CI model (Murgia et al. 2011; Turner 2018; Maccagni et al. 2020). We speculate further on the interpretation of the remnant fraction in our work, considering the complexity of particle acceleration in realistic models. The radio source may have experienced a quiescent period after a first phase of continuous acceleration, followed by a second phase in which restarted radio jets interact with the medium again and replenish a new population of relativistic electrons.
In this restarting scenario, if the restarted jet re-accelerates the particles universally, then the entire radio spectrum will be altered and the break frequency will no longer trace the time-scale since the initial acceleration began (Carilli et al. 1991). In that case, a flattening of the radio SED at high frequencies would be observed because of the re-accelerated electrons. However, this feature is not found in our target. The restarted jet could therefore have begun shortly before the time of observation, or it is not sufficiently powerful to alter the ensemble of synchrotron-emitting electrons. Therefore, compared with dying radio sources, which have been inactive for a long period with t_s ~ 10^6−10^7 yr, B_eq of tens of µG, and remnant fractions T ≥ 0.2, the small remnant fraction found in the Dragonfly galaxy indicates that this CSS source may have experienced past radio activity, and our conclusion that the RLAGN may have transient or intermittent activity remains unchanged.

We note that the age estimates of the radio hotspots can be uncertain because of the magnetic field strengths, ambient medium densities, and volumes of the hotspots. Additionally, the contribution of the diffuse radio lobes to the low-resolution flux densities cannot be measured with the current observations. Furthermore, the exact flux ratio between the two hotspots cannot be determined unambiguously because of the limited resolution and sensitivity of all the observations. As a consequence, the true spectral shapes of the NW and SE hotspots will change with the flux densities of the lobes and the flux-density ratios at ν ≤ 8.2 GHz. Hence, the obtained synchrotron age is only an order-of-magnitude estimate.
We show the distributions of the best-fitting parameters in Fig. A4, under the assumptions that, at low frequencies, the fractional contamination of diffuse emission to the observed flux densities ranges from 1 to 55 per cent in steps of 1, and the flux ratios vary from 4 to 15 in steps of 1. Based on the current estimate of B_eq, unless log ν_br is smaller than 8.1 (corresponding to ~150 MHz), the age cannot exceed 10^5 yr. A break frequency below 150 MHz is not expected in any of the fits, suggesting that the conclusion of a young radio source is not sensitive to the limited number of data points or to the uncertain contamination and flux ratios. Although a non-zero remnant fraction is favoured in most fits, when the SE hotspot is fitted only up to 44 GHz a zero remnant fraction does occur in some cases. These are unavoidable uncertainties in this work due to the limited resolved low-frequency observations and the possibly contaminated 237 GHz data, with which we cannot constrain the real spectral shapes. Future observations are essential to decompose the ALMA 237 GHz data and to fill the gap between 44 and 237 GHz, to confirm whether the curvature is robust and requires a non-zero remnant fraction to interpret. We therefore emphasize the importance of high-frequency observations, which can help us investigate the ageing of radio sources. We also note the importance of multi-frequency data, since a limited number of data points can dilute the curvature of the radio SED, leading to further uncertainty in the estimate of the remnant fraction.

6 CONCLUSION

In this work, we have studied the synchrotron radiation from the radio hotspots in the Dragonfly galaxy, a hyper-luminous infrared galaxy at z = 1.92, using joint VLA and ALMA observations.
Our major findings are as follows:

(i) The NW hotspot, SE hotspot, and radio core constitute the synchrotron radiation from the Dragonfly galaxy, with the SE hotspot dominating the observed flux density. The synchrotron source has a projected linear size of ~13 kpc and α < −0.5 at ν_obs ≥ 365 MHz, and is therefore classified as a CSS source.

(ii) The ALMA Band 6 observation catches a sub-component offset from the SE galaxy by ~1 kpc and coinciding perfectly with the location of the SE hotspot. This sub-component may be a mixture of different emission mechanisms, including synchrotron radiation, thermal bremsstrahlung, and the Rayleigh-Jeans tail of thermal dust emission.

(iii) The NW hotspot has a synchrotron age t_s ~ 2−3 × 10^5 yr, while that of the SE hotspot can vary within t_s ~ 2−4 × 10^5 yr. These estimates agree with the orders of magnitude typically observed in CSS sources associated with robustly young radio AGNs. Furthermore, the fits to both hotspots may indicate that the radio jets had been switched off at the time of observation, or that the RLAGN had past radio activity and re-launched radio jets that are not powerful enough to alter the energy density of the high-frequency synchrotron-emitting electrons. This suggests that this RLAGN may have transient or intermittent activity, because the central SMBH of the AGN residing in the NW galaxy is possibly in a fast transition phase.

(iv) The NW hotspot has α_inj = −0.59 and the SE hotspot has α_inj = −0.51 in fits without the ALMA 237 GHz data, both consistent with the values expected for first-order Fermi acceleration. When the 237 GHz data are considered, the SE hotspot has α_inj = −0.82.
If this value is correct, it may indicate that the particle acceleration in the SE hotspot cannot simply be explained by first-order Fermi acceleration at a non-relativistic strong shock in a uniform-density ISM, reflecting an environmental difference between the NW and SE hotspots. However, because of the limited resolution, this difference could also result from contamination of the 237 GHz and low-frequency data. (v) The observed flux ratios between the SE and NW hotspots indicate that the intrinsic flux density of the SE hotspot has been Doppler boosted as a result of the smaller inclination angle of the radio jet relative to the LOS, while the NW one is dimmed; the SE (NW) hotspot then corresponds to the approaching (receding) jet. The advance speed of the jet head ranges from ~0.1c to 0.3c, in agreement with the mildly relativistic jets observed in typical FRII galaxies. (vi) If the sub-component identified in the ALMA Cycle 6 observation indicates an in situ jet-ISM interaction, the jet can drive a massive molecular gas outflow within its lifetime and excite the CO gas to higher rotational transition levels, providing an explanation for the extended and diffuse CO(6-5) line emission on the west side of the SE galaxy, as well as for the steep αinj found in the SE hotspot fitted by the CI model. The remnant fraction is robust for reproducing the curvature of the observed radio SED, suggesting that this RLAGN may have transient or intermittent activity. Considering also its young age, this suggests that high-redshift RLAGNs may have short duty cycles because of stochastic accretion flows onto their SMBHs. Still, a larger sample is required to confirm this.
We note the importance of high-frequency observations to investigate the behavior of radio-loud AGNs at high redshifts and to understand when they interact with their ambient medium, so that we can gain a more complete view of AGN-host galaxy co-evolution. The sub-component adjacent to the SE galaxy may be a chance alignment. Otherwise, this HzRG is a merging galaxy in which the radio jet launched from the AGN interacts with the merging pair of the AGN host galaxy. There are only a few cases of such systems observed at lower redshifts (e.g., Hota et al. 2022), and this galaxy may be the first one identified at high redshift. Therefore, subsequent observations, including both radio and optical observations, are required to confirm this scenario, as well as to decompose the different emission mechanisms at 237 GHz.

ACKNOWLEDGEMENTS

We thank the staff in the ALMA and NRAO helpdesks for their kind help in data calibration and reduction. We thank Bjorn Emonts and Sophie Lebowitz for sharing their paper before it became public. We also thank Niinuma Kotaro for his comments. This paper makes use of the following VLA data: VLA/15A-316 and VLA/17B-444. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2016.1.01417.S and ADS/JAO.ALMA#2018.1.00293.S. Data analysis was carried out on the Multi-wavelength Data Analysis System operated by the Astronomy Data Center (ADC), National Astronomical Observatory of Japan. AKI, YS, and YF are supported by NAOJ ALMA Scientific Research Grant Number 2020-16B. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. Software: python (Van Rossum & Drake 2009), matplotlib (Hunter 2007), astropy (Astropy Collaboration et al. 2013, 2018, 2022), synchrofit (Turner 2018; Turner et al. 2018), carta (Cube Analysis and Rendering Tool for Astronomy, Comrie et al. 2021), numpy (Harris et al. 2020), and scipy (Virtanen et al. 2020). DATA AVAILABILITY The ALMA data used in this work are publicly available at https://almascience.nao.ac.jp/aq/. The VLA data used in this work are publicly available at https://data.nrao.edu." + }, + { + "url": "http://arxiv.org/abs/2111.15289v1", + "title": "A Morphological Study on Galaxies Hosting Optical Variability-Selected AGNs in the COSMOS Field", + "abstract": "The morphological study is crucial to investigate the connections between\nactive galactic nuclei (AGN) activities and the evolution of galaxies.\nSubstantial studies have found that radiative-mode AGNs primarily reside in\ndisk galaxies, questioning the merger-driven mechanism of AGN activities. In\nthis study, through S{\\'e}rsic profile fitting and non-parametric morphological\nparameter measurements, we investigated the morphology of host galaxies of 485\noptical variability-selected low luminosity AGNs at $z\\lesssim4.26$ in the\nCOSMOS field. We analyzed high-resolution images of the Hubble Space Telescope\nto measure these morphological parameters. We only successfully measured the\nmorphological parameters for 76 objects and most AGN hosts ($\\sim70\\%$) were\nvisually compact point-like sources. We examined the obtained morphological\ninformation as a function of redshift and compared them with literature data.\nWe found that these AGN host galaxies showed no clear morphological preference.\nHowever, the merger rate increased with the higher hosts' SFRs and AGN\nluminosity. 
Interestingly, we found ongoing star formation consistent with the\ntypical star forming populations in both elliptical and spiral galaxies, while\nthese two types of galaxies were more symmetric than normal star forming\ngalaxies. These results suggested that optical variability-selected AGNs have\nhigher probabilities to reside in elliptical galaxies than infrared-selected\nAGNs (IR-AGNs), whose host galaxies had a strong disk-dominance, and supported\nrecent studies that the AGN feedback could enhance star forming activities in\nhost galaxies.", + "authors": "Yuxing Zhong, Akio K. Inoue, Satoshi Yamanaka, Toru Yamada", + "published": "2021-11-30", + "updated": "2021-11-30", + "primary_cat": "astro-ph.GA", + "cats": [ + "astro-ph.GA" + ], + "main_content": "INTRODUCTION Galaxy morphology is a direct means that reveals the interaction of galaxies with their environments and the impact of internal perturbations. Based on galaxies' visual appearance, the morphological study provides a unique method to investigate the evolution of galaxies. In 1926, Edwin Hubble proposed a galaxy morphology classification scheme, the so-called Hubble sequence, which divided galaxies into four classes: ellipticals, lenticulars, spirals, and irregulars (Hubble 1926). In addition, in 1959, de Vaucouleurs extended Hubble's scheme, taking rings into consideration (de Vaucouleurs 1959). Recently, a well-known project, Galaxy Zoo, proposed more detailed morphological classifications, including disturbed features and cigar-shape classifications for edge-on galaxies (Lintott et al. 2008; Willett et al. 2013). Along with purely visual investigation, parametric (Sérsic index n) and non-parametric methods (Gini (G), M20, Concentration (C), Asymmetry (A), Smoothness (S), and Ellipticity) have been proposed to quantitatively describe the light distribution within a galaxy.
Galaxies difficult to visually classify at high redshifts, or because of edge-on structures, can be more accurately distinguished through these parameters. For the Sérsic index, Ravindranath et al. (2004) found n = 2 efficient enough to separate early- and late-type galaxies, and Cassata et al. (2011) successfully applied this value to high-z HST galaxies. Conselice (2003) presented the values of G, A, and S for nearby galaxies: G decreased as the galaxies varied from ellipticals to disks and irregulars, whereas A and S increased. Black holes (BHs) are ubiquitously found in the centers of galaxies. The co-evolution of a galaxy and its central BH is one of the most attractive issues in modern astronomy. Unfortunately, owing to the limitation of observational techniques, we can only investigate such connections via a proxy, which is known as the active galactic nucleus (AGN). In most cases, an AGN resides in the center of its host galaxy on a very small scale and is fueled by gas inflow, i.e., accretion onto the BH. Generally, AGNs that have dust-obscured structures, along with broad/narrow line regions (BLRs/NLRs), are rich in emission lines. These AGNs are called radiative-mode. They are more likely to be held by moderately massive (M⋆ ~ 10^10 to a few × 10^11 M⊙) disk systems undergoing star forming activities with SFRs corresponding to those of the typical star forming populations at epochs up to z ~ 2 (Bournaud et al. 2012; Harrison et al. 2012). On the other hand, jet-mode AGNs, which are radiatively inefficient with advection-dominated accretion flows (ADAFs), tend to reside in massive ellipticals or spheroid systems (Heckman & Best 2014). Such co-evolution studies help us understand the growth mechanism of BHs, how AGNs influence the star formation rate (SFR), and the global structure of the host galaxy.
Before any detailed study comes the selection of AGNs from among astronomical objects such as normal galaxies. One primary approach to select AGNs is based on X-ray observations. Mainieri et al. (2011) selected 142 Type-2 QSOs in the COSMOS field via the XMM-Newton observatory at LX[0.5-10 keV] = 10^44-10^45 erg s^-1 and ~0.8 < z < 2.0. They found the majority of their objects residing in early-type galaxies, while the minority belong to prominent disk systems or mergers. Meanwhile, at z ~ 1, about 62 ± 7% of Type-2 QSO hosts are actively forming stars, and this tendency becomes more apparent at higher redshifts, suggesting an evolutionary effect. The evolution of the specific SFR (sSFR) of these QSO hosts with redshift agrees excellently with that of normal star forming galaxies (SFGs) at 1 < z < 3. In addition, using Herschel PACS observations, Santini et al. (2012) found that the hosts of their X-ray-selected radiative-mode AGNs in the GOODS-S and -N fields at ~0.5 < z < 2.5 exhibited enhanced SFRs and sSFRs in comparison with mass-matched inactive galaxies, indicating that SFGs are more likely to host AGNs. Infrared (IR) observations are paramount for exploration of obscured AGNs and act as complements to X-ray data. This method can unveil AGNs that even the deepest X-ray observations cannot detect. Schawinski et al. (2012) studied the nature of quasar host galaxies based on mid-IR (MIR, 24 µm) selected dust-obscured galaxies (DOGs) at z ~ 2 using Hubble Space Telescope WFC3/IR imaging data. Contrary to the X-ray-selected Type-2 AGN hosts analyzed by Mainieri et al. (2011), most of their DOGs are disk systems. Further, they argued that only a minority of these objects are mergers. This merger rate of AGN hosts is also supported by the study of IR-AGN hosts up to z ~ 2.5 (Chang et al. 2017); they suggested that the merger rate of the most luminous AGNs with log(LIR/L⊙) ~ 12.5 could be as large as 50%.
Depending on the sample selections, redshifts, and techniques used, the relations between host galaxy morphology and AGN activity are still under debate. Pierce et al. (2007) selected 94 AGNs based on X-ray observations at 0.2 < z < 1.2 and found most of their host galaxies classified as E/S0/Sa; non-parametric techniques revealed no evident morphological preference in the hosts of IR-AGNs. Gabor et al. (2009) performed a two-component decomposition for ~400 X-ray-selected AGN hosts at 0.3 < z < 1.0 and found that the morphology of the host galaxies spanned a wide range from bulge-dominated (1.5 < Sérsic index n < 10) to disk-dominated (n ~ 1) systems, peaking between the two. Villforth et al. (2014) simulated AGN components and co-added them with stellar-mass-matched control samples at 0.2 < z < 0.8, finding similar distributions of asymmetries, Sérsic indices, and ellipticities between AGN hosts and control-sample galaxies. At 1.25 < z < 2.67, Schawinski et al. (2011) considered point-source components and suggested that over half of their 57 X-ray AGN hosts resided in disk-dominated systems. Low luminosity AGNs (LLAGNs) suffer contamination by light from their host galaxies, where the usual color selection techniques do not work. Therefore, the flux variability in multi-epoch observations, possibly arising from the instability of the accretion disk, can be employed as a new selection technique. By studying the host galaxies of MIR variability-selected AGNs, Villforth et al. (2012) found common disturbed morphological features, and concluded that it was secular processes (tidal processes and minor mergers in particular) that triggered LLAGN activity rather than major mergers. In this study, we explore the morphology of host galaxies whose central AGNs are selected based on their optical variability, up to z ~ 3.
Such a method provides a more efficient selection of unobscured AGNs with low bolometric luminosities compared with X-ray or IR observations. Accompanying this low obscuration, however, is high contamination of the AGN light to the host galaxies. Further, high redshift begets cosmological surface brightness dimming of the galaxies, making it more difficult to visually determine the morphological classification. Considering these, we employ a 2D surface brightness fitting (Sérsic index), which includes a correction for the central brightness excess, and non-parametric methods that are less affected by an AGN component but are also corrected. To obtain detailed information about the hosts' SFRs, masses, and AGN fractions to investigate AGN-host connections, spectral energy distribution (SED) fitting is performed using IR to X-ray photometric data. The rest of this article is organized as follows. In Section 2, we describe the samples as well as the imaging and photometric data. Section 3 introduces both parametric and non-parametric methods, along with how we perform the SED fitting. The results of the morphological measurements are presented in Section 4. Indirect results derived from different methods, as well as implications, are presented in Section 5. Finally, we summarize in Section 6. We use the following ΛCDM cosmological parameters to calculate radii in units of kpc: H0 = 70 km s^-1 Mpc^-1, ΩM = 0.27, ΩΛ = 0.73. All magnitudes in this article follow the AB system (Oke & Gunn 1983).

2. SAMPLE AND DATA

2.1. Optical Variability-Selected AGNs

The parent catalog of the AGNs examined in this study is based on optical variability selection (Subaru Hyper-Suprime-Cam g, r, i, z; Furusawa et al. 2018; Kawanomoto et al. 2018; Komiyama et al. 2018; Miyazaki et al. 2018) by Kimura et al. (2020), consisting of 491 variability-selected AGNs in the Cosmic Evolution Survey (COSMOS, Scoville et al. 2007) field.
The brightness of the AGNs in this catalog ranges from 17.56 to 25.87 in i-band magnitude, and the redshifts span up to z = 4.26. The redshift distribution of these variability-selected AGNs is plotted in Fig. 1. Of all these objects, 441 (~90%) were detected in X-ray observations with Chandra, and 337 objects (~69%) had spectroscopic redshifts. The available spectroscopic redshifts are provided by zspec in the Subaru Hyper-Suprime-Cam (HSC) PDR2 catalog [1], which are collected from zCOSMOS DR3 (Lilly et al. 2009), PRIMUS DR1 (Coil et al. 2011; Cool et al. 2013), VVDS (Le Fèvre et al. 2013), SDSS DR12 (Alam et al. 2015), FMOS-COSMOS (Silverman et al. 2015), 3D-HST (Momcheva et al. 2016), and the DEIMOS 10 Spectroscopic Survey Catalog (DEIMOS catalog, Hasinger et al. 2018). If no spectroscopic redshift is available, z_best (best photometric redshift) in the Chandra catalog (Marchesi et al. 2016) is used for X-ray-detected objects, and z_PDF (photometric redshift measured using the galaxy templates) in the COSMOS2015 catalog (Laigle et al. 2016) is used for X-ray-undetected objects.

2.2. Imaging Data for Morphological Classification

To visually inspect the morphology of the host galaxies of the optical variability-selected AGNs, we chose the I-band (F814W) imaging data obtained with the Hubble Space Telescope/Advanced Camera for Surveys (HST/ACS) from the COSMOS-HST Treasury project (Koekemoer et al. 2007; Massey et al. 2010). We searched around the coordinate of each object recorded in the Subaru HSC catalog offered by Kimura et al. (2020) within a radius of 1.5″ in the COSMOS-HST catalog to obtain the HST cutout for each AGN host galaxy, and 485 of the 491 objects were found in the COSMOS-HST Treasury project (Koekemoer et al. 2007; Massey et al. 2010). The cutout of each object had 202 pixels in height and width, with a resolution of 0.03″ per pixel, corresponding to 6″ × 6″ in size.
Checking the imaging data, we found that many host galaxies were contaminated by their central AGNs, leading to bright point spread function (PSF)-like components in their centers, or were completely dominated by the PSF.

[1] https://hsc-release.mtk.nao.ac.jp/doc/index.php/database-2/

Figure 1. Redshift distribution of the optical variability-selected AGNs. The blue and orange histograms represent X-ray detected and undetected AGNs, respectively. The filled and open histograms display the number of objects with spectroscopic redshift and photometric redshift in each bin, respectively.

Therefore, we considered the PSF component a necessary term in fitting the 2D Sérsic brightness profile. We used the Tiny Tim HST PSF Modeling Tool (Krist et al. 2011) to generate the PSF of the I-band (F814W) images. The final modeled PSF was the average of the stacking of 17 PSFs modeled with 17 types (O, B, A, F, G, K, and M, including intermediate types) of stars, which were provided by Tiny Tim. The PSFs modeled with some types of stars showed unexpected gaps or bumps in their radial light profiles. We performed the stacking to avoid the possible influences of such gaps on the subsequent measurements. A real point source convolved with the real PSF should always be larger than the size of the PSF. To ensure the reliability of the modeled PSF, its full-width at half-maximum (FWHM) was measured via the IRAF psfmeasure command and compared with a few PSF-dominated objects (where the entire object is visually in the shape of the PSF), which could be considered real PSFs. The modeled PSF had an FWHM of 2.853 pixels, whereas two objects at z > 4 both had FWHMs larger than 3 pixels, and an object at z = 3.599 had FWHM = 2.75 pixels. Although the FWHM of this object at z = 3.599 was smaller than that of the modeled PSF, we considered such a 0.1-pixel difference insignificant.
Thus, we believe this modeling is sufficient for our purpose of involving it as a necessary term in the 2D Sérsic fitting. Fig. 2 depicts our modeled PSF and the z = 3.599 object as an example of PSF-dominated host galaxies.

Figure 2. Left panel: PSF modeled by the Tiny Tim Modeling Tool (Krist et al. 2011) with FWHM = 2.853 pixels. Right panel: ID002, an object at z = 3.599 with FWHM = 2.75 pixels, completely convolved to be a PSF-dominated object due to its extremely compact size. The small difference of 0.1 pixels between the two FWHMs is believed to be acceptable and sufficient for the subsequent morphological analysis.

2.3. Photometric Data

We obtained multi-wavelength photometry from the COSMOS2015 catalog (Laigle et al. 2016). The data include near-ultraviolet (NUV, 200-300 nm) and far-UV (FUV, 100-200 nm) observations from GALEX (the Galaxy Evolution Explorer satellite; Zamojski et al. 2007; Capak et al. 2007b), UV observations (300-400 nm) from the Canada-France-Hawaii Telescope (u-band, CFHT/MegaCam), and optical data from the COSMOS-20 survey taken with Subaru Suprime-Cam in five broad bands (B, V, i, r, z), six intermediate bands (IB427, IB464, IB505, IB574, IB709, and IB827), and two narrow bands (NB711 and NB816) (Taniguchi et al. 2007, 2015). In the near-IR (NIR) range, Y-, J-, H-, and Ks-band data taken with WIRCam and Ultra-VISTA (McCracken et al. 2010, 2012) were used. In the MIR range, the SPLASH-COSMOS survey (Spitzer Large Area Survey with HSC; PI: P. Capak), S-COSMOS (Sanders et al. 2007), and the Spitzer Extended Mission Deep Survey (Ashby et al. 2013) provide data in the [3.6], [4.5], [5.8], and [8.0] µm bands. The units of these photometric data were converted from AB magnitudes to flux densities (Fν) in mJy (Fν = 10^((8.90 − m)/2.5) × 1000). The fractional errors in AB mag were used as the uncertainties on Fν.
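The AB magnitude to flux density conversion quoted above is a one-liner; a minimal sketch (the function name is illustrative, not from the paper):

```python
def ab_mag_to_mjy(m_ab: float) -> float:
    """Convert an AB magnitude to a flux density in mJy.

    In the AB system m = 8.90 corresponds to 1 Jy, so
    F_nu[Jy] = 10**((8.90 - m)/2.5); multiplying by 1000 gives mJy,
    matching the formula in the text.
    """
    return 10 ** ((8.90 - m_ab) / 2.5) * 1000.0
```

For example, m = 8.90 maps to 1000 mJy (1 Jy), and m = 23.9 maps to 0.001 mJy (1 µJy), a handy sanity check of the zero point.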
Many objects were detected in X-ray observations, and we selected the Chandra 0.5-10 keV band data, originally from the "Chandra COSMOS Legacy" survey (Civano et al. 2016), for our X-ray-detected objects and converted their flux unit of erg s^-1 cm^-2 to the flux density unit of mJy (corresponding to a frequency of 2.297 × 10^15 Hz). We assumed a 10% fractional difference of the observed flux as the uncertainty, which was typical for our objects based on the catalog by Civano et al. (2016).

3. METHODS

3.1. Parametric Method

The basic parametric method includes measurement of the Sérsic index (n) and the effective radius (re, the radius that encloses 50% of the galaxy's total flux). We performed 2D Sérsic+PSF profile fitting by employing Astropy (Astropy Collaboration et al. 2013, 2018) and its affiliated package statmorph (Rodriguez-Gomez et al. 2019). This package was also used for the non-parametric measurements in the next subsection. To perform the fitting, a segmentation map (a 2D array of the same size as the cutout that labels the pixels belonging to the source) must be created in advance. We used photutils (Bradley et al. 2020) to create this image segmentation. As a first step, a smoothing box, defined as a 2D circular Gaussian kernel with an FWHM of 3 pixels, was used to smooth the cutout image and increase the object's detectability. Next, we estimated the background noise of the cutout and detected the target object according to a threshold defined as 2σ per pixel above the smoothed background noise. The segmentation maps were generated based on this threshold and the FWHM size of the smoothing box. For several objects that were too faint or showed irregular structures, the basic morphological measurements or the 2D Sérsic fitting could fail.
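The radial form underlying the parametric fit can be written down directly. A minimal numpy sketch of the Sérsic profile, using the common b_n approximation (Capaccioli 1989) rather than the exact half-light normalization that fitting codes such as statmorph solve internally:

```python
import numpy as np

def sersic_profile(r, n, r_e, I_e):
    """Sérsic surface-brightness profile:
        I(r) = I_e * exp(-b_n * ((r/r_e)**(1/n) - 1)),
    where I_e is the intensity at the effective radius r_e and b_n is set
    so that r_e encloses half the total light. Here b_n uses the common
    approximation b_n ~ 1.9992*n - 0.3271, valid for roughly 0.5 < n < 10.
    """
    b_n = 1.9992 * n - 0.3271
    r = np.asarray(r, dtype=float)
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))
```

By construction I(r_e) = I_e for any n; an n ~ 1 profile falls off exponentially (disk-like), while n ~ 4 gives the centrally peaked de Vaucouleurs-like profile, which is why a bright AGN biases the fitted n upward.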
To perform the fitting correctly, we adjusted the threshold (within the 1.5-3σ range) and the FWHM size of the smoothing box. We compared the values measured before and after adjusting these parameters for some successfully measured objects and found only insignificant changes in both the parametric and non-parametric measurements. As described in §2.2, an AGN and its host galaxy form a composite of two components, which means a decomposition is needed to perform a more accurate fitting. Otherwise, the bright AGN in the center will enhance the central brightness, resulting in biased Sérsic indices and non-parametric parameters (see §3.3). Although statmorph can perform a de-convolution of the galaxy profile convolved with the PSF, it cannot decompose the host galaxy and its central AGN. We describe how we correct this systematic error in §3.3.

3.2. Non-parametric Method

statmorph also provides a set of non-parametric parameter measurements (Lotz et al. 2004; Conselice 2003). This method is used to measure the light distribution of a galaxy. The non-parametric measurements performed in this study include the five parameters described below. The first is Gini (G), defined by the Lorentz curve of the galaxy's light distribution. This parameter describes whether the light is distributed evenly among all pixels or concentrated in a few pixels. The value of G ranges from 0 to 1; 1 indicates that a single pixel encloses all the brightness, and 0 indicates that all pixels have the same brightness. Notably, G does not consider spatial positions.

Figure 3. Output plot of the object ID190, a spiral galaxy at z = 0.184. Original Image shows the cutout of the original galaxy image. The blue dot represents the center of the galaxy, whereas the blue dashed and solid lines indicate the orientation and extent of the elliptical aperture that encloses half of the total flux of the galaxy.
Sérsic model + Noise shows the fitted Sérsic model with background noise added. In the left-hand box, flag_sersic == 0 indicates a successful 2D Sérsic fitting, and n is the value of the measured Sérsic index. The basic shape and size measurements derived from the Sérsic fitting are indicated by the red point and lines, respectively. Sérsic Residual shows the residual of the original galaxy after subtracting the Sérsic fit. Asymmetry Residual shows the residual of the original galaxy after subtracting half of the image rotated by 180 degrees. Original Segmap gives the segmentation map created by photutils and the values of the background noise. The detection thresholds and the sizes of the smoothing boxes were adjusted to ensure an enclosure of the entire galaxy, including substructures, as much as possible. The non-parametric parameters (G, M20, C, A, S) are shown in Gini Segmap, wherein the region enclosed by the black solid line is the Gini segmentation defined by the Petrosian radius (η = 0.2).

The second parameter is M20, the second-order moment of the galaxy's brightest region enclosing 20% of the total flux of the galaxy (Lotz et al. 2004). It is a tracer of any bright nucleus, bar, spiral arm, or off-center star cluster within a galaxy. The value of M20 is always negative; the lower it is, the higher the concentration anywhere within the galaxy. The third parameter is Concentration (C), simply defined as follows (Conselice 2003):

C = 5 × log(r80 / r20), (1)

where r80 and r20 are the radii enclosing 80% and 20% of the galaxy's light, respectively. Concentration is directly related to the galaxy's central brightness; a larger value indicates a brighter central region and a larger Sérsic index. The fourth parameter is Asymmetry (A), which indicates whether there are non-symmetric components within the galaxy.
Spiral galaxies usually have large Asymmetry due to their spiral arms. Besides, inhomogeneous star forming activities also result in asymmetric structures. Although the value of A is usually positive, since the measurement depends on the background noise, negative values may appear in low signal-to-noise ratio (SNR) cases. The last parameter is Smoothness (S, or Clumpiness), which describes whether there are any clumpy structures within the galaxy. Elliptical galaxies are typically smooth systems with small values of Smoothness, whereas star forming galaxies tend to have more clumpy regions. To calculate these five parameters, statmorph creates a Gini segmentation based on the Petrosian radius (rpetro, Petrosian 1976), which is given by

η(R) = I(R) / ⟨I(< R)⟩, (2)

where I(R) is the surface brightness at a radius R, and ⟨I(< R)⟩ is the mean surface brightness within the radius R. We choose η(R = rpetro) = 0.2 (Lotz et al. 2004) to perform the calculations. In fact, we tested measurements with different values of η and found that the Gini segmentation would ignore substructures, such as spiral arms, as η increased, while the measured values varied only slightly. Fig. 3 gives an example output plot of the measurement, including the Sérsic fitting and basic parameter measurements. After checking the results, we found that over 70% of our objects had S = 0. We investigated the outputs and found that all of these objects were PSF-like, indicating a significant optically-unobscured AGN fraction. For any of these objects, there was no reliable Gini segmentation either, because rpetro always merely enclosed the bright core (of almost the FWHM size) of the PSF-like object, making it impossible to measure the non-parametric morphological parameters of the host galaxies.
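Two of the statistics above are simple enough to sketch in a few lines of numpy. This is only an illustration of the definitions: statmorph evaluates them over the Gini segmentation and with background handling that is not reproduced here.

```python
import numpy as np

def gini(pixel_values):
    """Gini coefficient of a light distribution (Lotz et al. 2004):
        G = sum_i (2i - n - 1) |x_i| / (|x|_mean * n * (n - 1)),
    over the sorted absolute pixel values. 0 means perfectly uniform
    light; 1 means all flux in a single pixel.
    """
    x = np.sort(np.abs(np.ravel(pixel_values)).astype(float))
    n = x.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (x.mean() * n * (n - 1))

def concentration(r80, r20):
    """Concentration index C = 5 * log10(r80 / r20), Eq. (1) in the text."""
    return 5.0 * np.log10(r80 / r20)
```

Sanity checks follow directly from the definitions: a uniform patch gives G = 0, a single lit pixel gives G = 1, and r80/r20 = 10 gives C = 5.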
We then further checked the Gini segmentation of the remaining objects, so that the non-parametric parameters could be measured on the basis of a reliable enclosure of the galaxy's light. As a result, we found many cases where the objects had no reliable Gini segmentation, with non-parametric parameters concentrating around the combination of G = 0.5 ± 0.02, M20 ~ −1.74, and C = 3.0 ± 0.2. In the G-M20 diagnostics, these objects were all classified as Sb/Sc/Irr, leading to an overestimation of the fraction of disk systems (see §5.1). Real objects with this combination of G, M20, and C were also excluded.

3.3. Correction for Effects of the Central PSF Components

For the remaining objects, there were central bright PSF-like components in many cases. To correct any systematic effect due to the PSF-like components, and in particular to investigate how the Sérsic index was affected, we modeled AGN host galaxies. The modeling includes several preset parameters: the Sérsic index (n), the effective radius (re), the light intensity Ie at the effective radius, the ellipticity, and the ratio of the total fluxes of the host galaxy and the AGN (fhost : fAGN). As a first step, we modeled 300 galaxies whose intrinsic Sérsic indices ranged from 0.1 to 5.0 (50 samples) and whose ellipticities ranged from 0 to 0.5 (six groups), both with an increase of 0.1 between two consecutive setups. Then, we considered three different effective radii (re = 5, 10, 15 pixels) and obtained 300 × 3 = 900 galaxies. The intensity at the effective radius was assumed to be Ie = 10 electrons s^-1 pix^-1, which was arbitrarily chosen but scaled by an SNR, as described below. The unit is the same as in the actual HST cutout images. To model an AGN component, the brightest pixel of the modeled galaxy made in the first step was co-added with the value of the total flux multiplied by a factor.
This factor was assumed to be fAGN, given by the following combinations: fhost : fAGN = 1:0.01, 1:0.1, 1:0.2, ..., 1:0.9, 1:1.0, 1:1.5, 1:2, 1:2.5, 1:3, 1:4, 1:5. Hence, we had 900 × 17 = 15300 galaxies harbouring AGNs modeled in total. These model galaxies, including the AGN component, were convolved with the modeled F814W PSF. Then, we added noise to each pixel of the images following an error function whose standard deviation was σ = 0.4 electrons s^-1 pix^-1, corresponding to S/N = 25 at the effective radius. The SNR at the effective radius was chosen as a typical value observed in our actual AGN host galaxies [2]. By comparing the assumed values of the Sérsic index and re with the values measured by statmorph, we found many cases where the differences were unacceptably large (measured n > 10) and the host galaxy was heavily contaminated by the AGN. To make these models good matches to real objects, we adopted the following criteria to exclude invalid results:

1. Either flag_sersic == 0 or flag == 0 (any error flag returned by statmorph).

2. Either re or sersic_rhalf [3] is smaller than that of the F814W PSF (2.19 and 1.52 pixels, respectively).

3. The size of the Gini segmentation is smaller than the effective radius as well as 50% of the size of the original segmentation.

4. The measured Smoothness is equal to 0.

5. Cases without reliable Gini segmentation, for which G = 0.5 ± 0.02, M20 ~ −1.74, and C = 3.0 ± 0.2.

6. Objects with failure in the calculation of the total flux, because of which an AGN component cannot be modeled.

By adopting these criteria, 7,697 modeled AGN hosts were excluded, leaving 7,603 successfully modeled AGN host galaxies. Except for the last one, these criteria were also applied to the real objects to exclude unsuccessful measurements, ensuring a fair comparison between real and simulated AGN+host galaxies.
As mentioned in §3.2, over 70% of the objects had S = 0, so the fourth criterion produced the greatest rejection of valid objects. These host galaxies suffered both cosmological surface brightness dimming and heavy contamination from their central AGNs, resulting in a completely PSF-like morphology, of which object ID002 in the right panel of Fig. 2 is representative. The correction method for the Sérsic index is described in detail in Appendix A. In addition, if the ratio of re to sersic_rhalf of a real object is out of the range of the modeled hosts within the corresponding effective radius, there is no available correction for the object, and it was also excluded (see Appendix A). Corrections for G, M20, and C were also made through simulations. Unlike the Sérsic index, which depends on a certain mathematical form, the non-parametric parameters are much less affected by an AGN component as long as the Gini segmentation reliably encloses the host galaxy's light, and the fractional differences of the values ((value_galaxy+AGN − value_galaxy)/value_galaxy+AGN) for a galaxy with and without an AGN can be applied for the correction.

2 In this procedure, the total flux of model galaxies without the AGN component, fhost, was calculated as follows. We convolved the model galaxies with the F814W PSF and co-added pixel noise following an error function with σ = 0.1 electrons s−1 pix−1. These high-S/N (= 100) model galaxies were used only to make a reliable segmentation map, from which we calculated the total flux fhost as the sum of the pixel counts.
3 re indicates the effective radius of the object in the original image, and sersic_rhalf indicates the effective radius of the model fitted by 2D Sérsic fitting.
The basic parameters of the simulation are the same as those for the Sérsic-index correction, other than the amplitude of the point source (or the total flux of the AGN) and the definition of fhost : fAGN. Apart from the simulation parameters, the non-parametric parameters were measured for the host galaxies without AGN components and then measured again after the two components were co-added, so that the fractional differences could be calculated. In this simulation, the total flux is set to 10, and fhost and fAGN are the pixel counts of the central (brightest) pixel of the galaxy and the AGN, respectively. Before co-adding the two components, both of them were convolved with the PSF and then normalized so that the brightest pixels had pixel counts equal to 1; thus, fhost : fAGN could be well controlled. There were four groups for the flux ratio between the host and the AGN: fhost : fAGN = 3 : 1, 1 : 1, 1 : 4, 1 : 6. For fhost : fAGN = 3 : 1 there was no visible PSF, whereas the other three had visible PSF components. Thus, the modeled host galaxies were further divided into two groups: with or without visible AGN components. Modeled host galaxies without visible AGN components corresponded to real objects whose central AGNs could not be visually confirmed or those with re > 16 pixels, whereas modeled hosts with visible AGN components were matched to real objects that had apparent AGN components. The fractional differences are listed in Table 1. To correct the measurement biases, the following equation is adopted:

NP_corrected = NP_measured × (1 − f_NP),    (3)

where NP represents the non-parametric parameter and f_NP is the fractional difference of the corresponding parameter. Through the modeling, it is found that galaxies with small re and a large Sérsic index (n ≥ 3) are convolved into a PSF shape even if they do not have AGN components.
By applying these criteria, together with objects failing the measurements, 349 objects were excluded before they could be further discussed, including many spheroid systems (n > 2). This might bias the dominant morphology of the whole sample of 485 AGN host galaxies.

Table 1. Fractional differences of the G, M20, and C measurements due to the presence of an AGN.

                               re^a
  Criterion            5        10       15
  Gini: G
    PSF visible        0.059    0.087    0.067
    PSF not visible    0.016    0.013    0.008
  M20
    PSF visible        0.078    0.111    0.124
    PSF not visible    0.026    0.025    0.020
  Concentration: C
    PSF visible        0.115    0.183    0.124
    PSF not visible    0.040    0.024    0.019

Note — a The effective radii are in units of pixels with a resolution of 0.03″/pixel.

3.4. SED Fitting

Our samples are recorded in the Laigle et al. (2016) catalog, which provides MASS_BEST (the stellar mass of the entire galaxy) and SFR_BEST computed by SED fitting. However, the SED fitting performed for this catalog does not include the IR data of the Spitzer MIR channels, leading to large uncertainties in the SFRs. Further, it does not take AGN contributions into consideration. Thus, to obtain results not only for the host galaxies but also for the AGNs, such as the AGN luminosity, and to update the catalog, we decided to perform our own SED fitting. We chose X-CIGALE (Yang et al. 2020), the X-ray version of CIGALE (Code Investigating GALaxy Emission; Burgarella et al. 2005; Noll et al. 2009; Boquien et al. 2019), to perform the SED fitting of the photometric data retrieved from the COSMOS2015 catalog (see §2.3). As the first step, a high-dimensional parameter grid of SED models consisting of all components that contribute to the emission was populated by X-CIGALE. Then, the goodness of fit for each model was computed, and the best-fit SED model for each sample galaxy was identified via the reduced-χ² statistic (Ramos Padilla et al. 2020).
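The correction of Equation (3) with the Table 1 fractions can be sketched as follows (a minimal illustration with our own function and dictionary names, not the authors' code):

```python
# Fractional differences f_NP from Table 1, keyed by
# (parameter, psf_visible, effective radius in pixels).
F_NP = {
    ("G",   True,  5): 0.059, ("G",   True,  10): 0.087, ("G",   True,  15): 0.067,
    ("G",   False, 5): 0.016, ("G",   False, 10): 0.013, ("G",   False, 15): 0.008,
    ("M20", True,  5): 0.078, ("M20", True,  10): 0.111, ("M20", True,  15): 0.124,
    ("M20", False, 5): 0.026, ("M20", False, 10): 0.025, ("M20", False, 15): 0.020,
    ("C",   True,  5): 0.115, ("C",   True,  10): 0.183, ("C",   True,  15): 0.124,
    ("C",   False, 5): 0.040, ("C",   False, 10): 0.024, ("C",   False, 15): 0.019,
}

def correct_np(value, parameter, psf_visible, r_eff):
    """Apply Eq. (3): NP_corrected = NP_measured * (1 - f_NP)."""
    return value * (1.0 - F_NP[(parameter, psf_visible, r_eff)])
```

For example, a measured Gini of 0.5 for a host with a visible PSF and re = 5 pixels would be corrected down to 0.5 × (1 − 0.059) = 0.4705.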
The components used in the first step must be specified; we list these parameters and values in Table 2 to define the X-CIGALE grid of AGN host SEDs.

Table 2. X-CIGALE user-specified components for the SED fitting described in §3.4

Parameter            Values                                Description

Delayed star formation history: delayedSFH
  τmain              300, 500, 800, 1000, 1500, 2000       e-folding time of the main stellar population model, in Myr
  t0                 500, 1000, 1500, 2000, 4000, 5000     Age of the main stellar population in the galaxy, in Myr

Single-age stellar population (SSP): BC03 (Bruzual & Charlot 2003)
  Z                  0.05                                  Metallicity
  separation_age     10                                    Age in Myr that separates the young and the old stellar populations

Nebular emission (Inoue 2011)
  log U              −2.0                                  Ionisation parameter

Dust attenuation: Calzetti et al. (2000)
  E(B − V)young      0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 0.9     Color excess of the stellar continuum light for the young population
  E(B − V)old factor 0.44                                  Reduction factor for the E(B − V) of the old population compared to the young one
  δ                  0.0                                   Additional power-law index modifying the attenuation curve defined in Boquien et al. (2019)

Clumpy two-phase torus model: SKIRTOR2016 (Stalevski et al. 2012, 2016)
  τ                  5, 7                                  The average edge-on optical depth at 9.7 micron
  θ                  20, 40, 60                            The angle measured between the equatorial plane and the edge of the torus
  i                  0, 20, 50, 70                         The inclination of the line of sight
  Rout/Rin           20                                    The ratio of the outer to the inner radii
  fracAGN            0.01, 0.1, 0.2, ..., 0.99             The fraction of the AGN IR luminosity in the total IR luminosity

For any other components not mentioned in the table, the default settings provided by Yang et al. (2020) for AKARI and COSMOS AGN SEDs were used. The star formation history (SFH) of the host was treated with a delayed SFH model (Ciesla et al. 2015): SFR(t) ∝ (t/τ²main) exp(−t/τmain).
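This delayed SFH form, which rises roughly linearly, peaks at t = τmain, and then decays exponentially, can be sketched in Python (an illustrative, unnormalized implementation of the functional form, not X-CIGALE's code):

```python
import math

def delayed_sfh(t_myr, tau_main_myr):
    """Unnormalized delayed SFH: SFR(t) proportional to (t / tau^2) * exp(-t / tau).

    Rises ~linearly for t << tau, peaks at t = tau, then decays exponentially.
    """
    return (t_myr / tau_main_myr**2) * math.exp(-t_myr / tau_main_myr)

# The SFR peaks at t = tau_main; tau = 800 Myr is one of the grid values in Table 2.
tau = 800.0
peak = delayed_sfh(tau, tau)
```

Evaluating the function on either side of t = τmain confirms the single-peaked shape described in the text.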
Since the onset of the star-forming activity, the increase in SFR is almost linear rather than a sudden burst; after a peak, the SFR decreases exponentially. Two parameters control this model: the age from the onset of star formation (t0) and the e-folding time (τmain), which also determines the timing of the SFR peak. By adjusting these two parameters, ongoing or recent starburst events as well as a quenched phase can be simulated. To model the emission from the stars, we adopted the Bruzual & Charlot (2003) stellar population synthesis library. The default metallicity is 0.02. However, considering that we were handling AGN samples, we increased the value to 0.05, because AGNs would enrich their environments, resulting in a higher chemical abundance (Zubovas et al. 2013; Taylor & Kobayashi 2015). Owing to the existence of the obscuring structure, emission from the AGN will be re-emitted at longer wavelengths. To simulate this reddening effect, we adopted SKIRTOR, a clumpy two-phase torus model derived from a modern radiative-transfer method (Stalevski et al. 2012, 2016). This model depends on several parameters, such as the average edge-on optical depth at 9.7 micron, the ratio of the outer to the inner radii, and the inclination. Because Kimura et al. (2020) suggested a significant fraction of optically unobscured Type 1 AGNs among our AGN host galaxies, we allowed these parameters to vary within wider ranges. With the settings in Table 2, we computed over 500 million models. In Table 3, we compare our median stellar masses and SFRs of the G − M20 + V classification samples (see §5.1) with those in COSMOS2015 in three redshift bins, and Fig. 4 shows the corresponding values in the COSMOS2015 catalog as a function of our computed values.

Figure 4. Star formation rates (SFRs, left panel) and stellar masses (right panel) in the COSMOS2015 catalog as a function of the values computed in this work.

Table 3. Comparison of stellar masses and SFRs computed by X-CIGALE and recorded in COSMOS2015.a

                   log10(SFR/M⊙ yr−1)med        log10(M∗/M⊙)med
  Redshift         This work   COSMOS2015       This work   COSMOS2015
  z ≤ 1.0          0.67        1.51             10.33       10.57
  1.0 < z ≤ 1.5    0.79        1.73             10.83       10.96
  1.5 < z          0.95        1.95             10.85       10.75

Note — a Three objects among our G − M20 + V classification samples are not included in the comparison because of the lack of SFRs and stellar masses in the COSMOS2015 catalog.

The SFRs in COSMOS2015 show large scattering and are up to 2 dex higher than ours in the extreme case, which can be a consequence of the absence of IR data and the neglect of AGN components. The medians of the stellar masses are closer, and most of the new estimates are 0.05–0.5 dex smaller than those in COSMOS2015. Fig. 5 depicts the SED fitting results of the object ID003 at z = 0.977 with a visually merger-like morphology. The host galaxy is rich in emission lines, suggesting ongoing star formation. The solid orange line indicates the emission from the AGN, which contributes a significant fraction of the observed fluxes, from which we may infer that this is a Type 1 AGN with a visible nucleus. The computed SFR and inclination are 22.97 M⊙ yr−1 and 21.11° ± 21.96°, respectively.

4. RESULTS

In this section, we present the results of our morphological measurements together with comparisons to previous studies on AGN hosts and normal galaxies in the literature. First, we describe our final sample to be discussed further. There were 491 objects in Kimura et al. (2020), and 485 of these were found in the HST-COSMOS database. By applying the selection criteria described in §3.3, only 76 objects remained.
Another 26 objects failed to pass the criteria because of an invalid Gini segmentation or an outlying re/sersic_rhalf, but their morphology could be confidently determined by visual classification. Therefore, these 26 objects were also kept, although their measurements of both parametric and non-parametric parameters are not discussed other than the SED fitting results. Unfortunately, none of the AGNs at z > 3 passed the criteria, because the PSF-like component dominated the entire structure of their host galaxies. Finally, there were only five valid objects at 1.5 < z < 3.0.

Figure 5. An example of the best-fit models obtained from our SED fitting performed by X-CIGALE. This is the SED of the object ID003, a galaxy with a visually merger-like morphology at z = 0.977. The reduced χ² indicates the goodness of the fitting, and 0.56 ensures a high-quality and realistic fit. The nebular emission contributes several emission lines to the model spectrum, including the Lyα and Hα lines at λ ∼ 0.24 μm and 1.3 μm, respectively, suggesting that the galaxy is undergoing star-forming activities.

For the 76 remaining objects found in the COSMOS2015 catalog, we computed the host galaxies' stellar masses and SFRs using X-CIGALE, as described in the previous section. The redshift–mass and redshift–SFR distributions are plotted in the upper and lower panels of Fig. 6, respectively. There are 72 (∼94.7%) host galaxies that lie within the star formation main sequence (see Fig. 15) and can be considered star-forming galaxies (SFGs), with a median mass of 2.81 × 10^10 M⊙. The central AGNs of these 76 objects have L_AGN,median ∼ 1.11 × 10^37 W, which is more than 1 dex fainter than the typical bolometric luminosity of the samples in SDSS Quasar DR12 (Kozłowski 2017). Therefore, we believe that these valid objects are good samples of low-luminosity populations.
In the following subsections on morphological measurements, we focus only on these 76 valid objects.

4.1. Visual Classifications

Visual classification is the most original and explicit method to determine the morphology of a galaxy. However, owing to the difficulties in the classification of edge-on galaxies and of those at high redshifts, where galaxies are more compact, we decided to follow the Galaxy Zoo 2 (GZ2; Willett et al. 2013; Lintott et al. 2011) field guide to classify the total of 102 objects into four classes: spirals, ellipticals, irregulars/mergers, and point sources.

Figure 6. Upper: Stellar mass of the entire galaxy computed by X-CIGALE as a function of redshift. The dashed lines divide the sample into three bins along the stellar mass. Lower: SFR averaged over the recent 100 Myr computed by X-CIGALE as a function of redshift. The solid and dashed lines divide the sample into three bins along the SFR and redshift, respectively. The blue crosses represent host galaxies with AGNs detected by Chandra, while the coral triangles are AGNs undetected in X-ray.

Even after the rejection of point-like sources for which we could not obtain reliable morphological measurements, as discussed in §3.3, we still found many small and compact sources among the valid objects, forcing us to classify them as point sources. We show examples of the four classes of galaxies in Fig. 7. The result of the visual classification is listed in Table 4. Notably, there exist some objects that could not be confidently classified via a pure visual inspection. These objects include point sources with spatial extensions of several pixels that are classified as ellipticals, and galaxies with asymmetric arms that cannot be distinguished between spiral galaxies and late-stage mergers. Tiny arm-like substructures are also difficult to distinguish between spiral arms and streams.

Figure 7. Examples of visual classifications. From left to right and upper to lower, the galaxies are spiral, elliptical, irregular/merger, and point source, respectively.

Figure 8. The distribution of the Sérsic index before (coral) and after (steelblue) the correction. Without the correction, there is a clear bias toward higher values. This would result in an underestimation of the fraction of disk systems, thus leading to a conclusion of spheroid dominance of the AGN hosts. After the correction, the valid sample shows no clearly preferred morphological classification.

By visual investigation, many spiral galaxies showed some disturbed features, such as asymmetric spiral arms and streams, which suggested that these galaxies were undergoing strong star formation, merger activities, or interactions.

Table 4. A summary of the visual classification for 76 + 26 = 102 objects

  Classification       Numbera
  Spiral               32
  Elliptical           30
  Irregular/merger     17
  Point source         23

Note — a The pure visual inspection has uncertainties: spirals with patchy star-forming regions may be classified as irregulars/mergers, and compact ellipticals lie on the boundary between ellipticals and point sources.

Owing to such difficulties, some visual classifications might be suspect; we discuss morphological classifications based on the Sérsic index, Gini–M20, and log(Gini)–log(Asymmetry) diagnostics in §5.1. This visual method was also used as a criterion to decide the correction factors for the Sérsic index and the non-parametric parameters, as introduced in §3.3. Among the 76 valid objects, 26 had visible PSF components in their centers, corresponding to the modeled host galaxies whose central regions were not three times brighter than their AGNs, and 50 did not have such visible PSF-like components.

4.2.
Sérsic Index and Effective Radius

In this section we compare the results of the 2D Sérsic fitting with some other AGN host galaxy samples in the literature. First, we investigated the influence of the correction on the distributions of the Sérsic index. The values of the Sérsic index were corrected for the bias due to the central bright AGN (§3.3). To clearly show this effect, we compare the values before and after the correction in Fig. 8. The corrected histogram shifted significantly to smaller Sérsic index values than the histogram before the correction. Indeed, the Sérsic index measurements without the correction were biased toward higher n values, and the correction significantly reduced the number fraction around 3 < n < 6. Therefore, such a correction is mandatory in morphological studies of AGN hosts to reject a potentially misleading dominant morphological classification. Nevertheless, we discuss how this correction affects our conclusion about the dominant morphology. If we take n = 2 as the standard to separate spheroid and disk systems, we find that 23 (∼30.3%) objects can be classified as disk systems before the correction and 36 (∼47.4%) after the correction. The conclusion changes from an apparent spheroid dominance to a mild disk dominance. In the upper panel of Fig. 9, our results are compared with the host galaxies of IR-selected AGNs within 0.5 < z < 1.5 with log M⋆/M⊙ > 10.5 (Chang et al. 2017), whereas ours range from ∼10^9 M⊙ to ∼10^11 M⊙.

Figure 9. Comparison of the Sérsic index and re with the literature. The redshift ranges of our samples match those of the comparison samples. Top: comparison with IR-selected AGNs at 0.5 < z < 1.5 (Chang et al. 2017). Middle: comparison with quasars observed with Subaru HSC at 0.2 < z < 1.0 (Li et al. 2021).
Bottom: comparison with normal galaxies in the COSMOS field at up to z ∼ 1 (Sargent et al. 2007; Scarlata et al. 2007). n̄ and r̄e are the average values of the Sérsic index and the effective radius, respectively.

Their AGNs were significantly obscured, so the host galaxies could be considered as suffering degradation, not contamination, from the AGNs. Considering n = 2 (Ravindranath et al. 2004; Cassata et al. 2011) as the boundary dividing disk and spheroid systems, most host galaxies of the IR-AGNs were disk systems in the top-left panel of Fig. 9. However, the Sérsic indices of the host galaxies of our optical variability-selected AGNs showed a concentration at n ∼ 3, along with a higher cumulative probability (∼60%) at n > 2. This led to a significant difference in the n distributions of these two samples in a Kolmogorov–Smirnov test (KS test, P-value ≪ 0.01). The distributions of re of these two samples were similar, but our objects were absent from the most compact (re ≤ 0.42 kpc) and most extended (re > 10 kpc) cases present in the distributions of the IR-AGN hosts. This led to a less significant (P-value ∼ 0.04) but still possible difference between the two samples.

Figure 10. Upper: GMCA distributions compared with 16,358 normal galaxies (∼12,000 disk galaxies) at z < 1 (Sargent et al. 2007). Lower: GMCA distributions compared with 8,146 normal SFGs at z ∼ 0.7 (Zamojski et al. 2007). Together with GMC, the host galaxies of our optical variability-selected AGNs had bright cores even compared with normal SFGs and were almost as symmetric as normal galaxies.

We explain this as follows: our most compact hosts were dominated by their AGNs because of the lack of obscuration, and they failed to pass the criteria. In addition, the average re of their host galaxies was about 3.56 kpc (r̄e = 10^0.552 kpc, measurement error considered), larger than ours (r̄e = 10^0.438 = 2.74 kpc).
However, considering the significant fraction of extremely extended IR-AGN host galaxies, we found no evidence of more compact host sizes for the optical variability-selected AGNs. We also compared with the host galaxies of ∼5000 quasars from SDSS DR14 at 0 < z < 1.0, with Lbol = 10^44.0−46.5 erg s−1 and 9.5 < log(M⋆/M⊙) < 11.5 (Li et al. 2021), in the middle panel of Fig. 9. The morphology of these quasar host galaxies was studied using HSC five-band (grizy) optical imaging data. Considering the light contributions of the quasars to their hosts, Li et al. (2021) also made corrections to the Sérsic index by adding AGNs of different magnitudes to the centers of galaxies. Apparently, the difference in n between the two samples was highly significant (P-value ≪ 0.01); in this comparison, our host galaxies showed a spheroid dominance, while the majority of the quasar host galaxies were still disk systems. Besides, although Li et al. (2021) performed corrections for n as well, the intrinsic parameters of some quasar-dominated hosts fainter than HSC i-band ∼23 could not be recovered. Such difficulty in the measurements of low-luminosity hosts was also seen in our results before the valid-sample selection; over ∼60% of the host galaxies were fainter than i_mag ∼ 22.5 and dominated by their AGN components, and these objects had both S = 0 and flag_sersic == 1. We removed all such objects, and many of them even had a Sérsic index of n > 10. Significantly different (P-value < 0.01) distributions of re could also be seen: our sample was much more compact than the SDSS quasars. We attribute this difference to the fact that Li et al. (2021) used HSC imaging data. Because of the atmospheric seeing (0.7″), objects in HSC images are more extended, resulting in larger re.
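The two-sample KS comparisons used throughout this section can be illustrated with a minimal pure-Python implementation of the KS statistic (the maximum distance between the two empirical CDFs); in practice one would use a statistics library such as scipy.stats.ks_2samp, which also returns the P-value:

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample KS statistic: max |ECDF_a(x) - ECDF_b(x)| over all observed x."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_vals, x):
        # Fraction of values <= x (simple linear scan; fine for a sketch).
        return sum(v <= x for v in sorted_vals) / len(sorted_vals)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

# Identical samples give 0; fully separated samples give 1.
```

A large statistic (equivalently, a small P-value, as in the P ≪ 0.01 cases above) indicates that the two distributions, e.g. the n distributions of two samples, are unlikely to be drawn from the same parent distribution.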
It is also interesting to mention the comparison with the 16,538 normal galaxies selected by HST I-band magnitude (IF814W < 22.5) from the Zurich Structure & Morphology catalog (Sargent et al. 2007; Scarlata et al. 2007) in the bottom panel of Fig. 9. Among these normal galaxies, ∼12,000 are disk systems. The distributions of n differed significantly (P-value ≪ 0.01), which is attributed both to the spheroid dominance among the AGN host galaxies and to the fact that the disk systems hosting AGNs have higher n than normal disk systems. For re, our host galaxies were smaller in size (P-value ≪ 0.01) and showed clearly separated concentrated regions. As indicated by the average values, the size of our AGN hosts is roughly 70% of that of the normal galaxies.

Figure 11. Non-parametric parameters Gini, M20, Concentration, and Asymmetry of the variability-selected AGN host galaxies in three bins of redshift, total stellar mass, and SFR.

The Gini segmentation of the PSF only encloses the central core (smaller than 5×5 pixels), and the pixel counts within this small core are even (brightest ∼0.6, surrounding 8 pixels 0.3−0.5), leading to both low Gini and low Concentration. As suggested in Kimura et al. (2020), optical variability selection is more efficient in selecting LLAGNs. This is supported by the result of the SED fitting: our AGNs have a median luminosity of Lbol = 1.11 × 10^44.0 erg s−1, while the SDSS quasars span Lbol = 10^44.0−46.5 erg s−1 with a median at Lbol ∼ 10^45.2 erg s−1, larger than ours by 1 dex (Li et al. 2021). LLAGNs are likely to be hosted by SMBHs that have lower accretion rates, which are more likely to reside in the centers of elliptical galaxies (Heckman & Best 2014). This explains the different morphological dominance seen in the three comparisons, as well as the more compact sizes when compared with the sample of normal galaxies, 70% of which are spiral galaxies (Sargent et al. 2007).
The smaller sizes may also imply that our AGN-host SFGs are undergoing a process of dynamical compaction, probably arising from gas inflow. Cosmological hydrodynamical simulations of galaxy formation suggest that highly perturbed wet disks fed by cold streams may experience a dissipative contraction phase (Dekel & Burkert 2014). In this scenario, the gas inflow toward the central region of the galaxy, which is often associated with disk instability, triggers the initial contraction and acts as the energy provider that maintains substantial high-level turbulence, leading to a massive core and enhanced SFRs, along with the triggering of accretion onto the SMBH and induced AGN activity (Zolotov et al. 2015; Bournaud et al. 2011).

4.3. Non-parametric Parameters

The non-parametric parameters provide more detailed investigations of the light distributions of our host galaxies. Four parameters are presented in the results: Gini (G), M20, Concentration (C), and Asymmetry (A). In the upper panel of Fig. 10, we plot the distributions of the GMCAs of our AGN host galaxies and of the normal galaxies from Sargent et al. (2007). The correction described in §3.3 was also applied to our measurements, although the AGN effects were small. Apparently, the normal galaxies have more even light distributions, as seen from their smaller G values. Combined with M20 and C, this implies brighter central regions in the host galaxies of our AGN sample. Given that we corrected the bias due to the central AGN component in the measurements, the brighter central parts might serve as evidence of central star formation induced by the AGN. As shown by the Asymmetry, there is only a minute increase compared with normal galaxies, suggesting insignificant impacts of the AGNs on the global structures of the host galaxies. In the bottom panel of Fig. 10, we compare the results with the COSMOS Zamojski Morphology catalog (Zamojski et al. 2007).
This catalog includes 8,146 SFGs selected based on the HST ACS I band with IF814W ≤ 23 at 0.2 < z < 1.0 and 9.5 < log(M⋆/M⊙) < 11.6. The distributions of G, M20, and C show the same pattern as for the normal galaxies of Sargent et al. (2007). Meanwhile, the distributions of A differ. Their SFGs showed an extended tail to high A values in the distributions, indicating intense and inhomogeneous star formation in their disks. Our AGN host galaxies are comparatively much more symmetric, which is also shown by the comparison in the upper panel. The significant fraction of spheroid systems implied by the Sérsic index might be an explanation for the difference in the distributions of A. We also plot the GMCA measurements of our AGN samples within three redshift, stellar mass, and SFR bins in Fig. 11. The average values of G and A in the different redshift bins are close, whereas M20 and C show a slight evolutionary pattern. This can be explained as the result of the size evolution along cosmic time, i.e., at higher redshifts the galaxies become more compact, and spheroid systems show a steeper decrease than disk systems (Conselice 2014), as well as of the surface brightness dimming with redshift (Lotz et al. 2006). Therefore, with a smaller spatial extension, the contribution of the brightest pixel does not influence the measurement for high-z objects as significantly as it does for low-z objects. The rest-frame UV light would be redshifted to the wavelengths at which we observe high-z objects. Therefore, we expect more UV light for higher-z galaxies. Because the star-forming regions are the main contributors in the UV, more disturbed and asymmetric structures can be expected, as in Zamojski et al. (2007). However, such an asymmetry evolution along the redshift is not clearly seen in A. Comparisons of the different stellar mass and SFR bins are shown in the middle and lower panels of Fig. 11, respectively.
We could barely see any differences in G, M20, and C for either comparison, indicating that the stellar masses and star formation have no clear dependencies on the global morphology. Besides, as seen in the distribution of A for the different SFR bins, although there is an increase in A toward higher SFRs, the evolution is quite insignificant compared with normal SFGs, even though these hosts of optical variability-selected AGNs are also SFGs. Such results again require a morphological classification to be explained and suggest that the star-forming activities in these hosts might be more closely related to the galaxies' central regions.

Figure 12. The effective radius, re, and Sérsic index, n, as functions of the AGN fraction calculated from the IR luminosity. The coral lines are linear fits and the indigo lines indicate the standard deviations within different bins of the AGN fraction.

4.4. AGN Fraction Contribution

Through the SED fitting, we also computed the AGN fraction, defined as the fraction of the AGN luminosity in the total (AGN+galaxy) IR luminosity (Boquien et al. 2019; Yang et al. 2020). In this section, we investigate the relations of the AGN fraction with the Sérsic index, n, and the effective radius, re, to discover whether these parameters correlate with the AGN fraction. Fig. 12 shows these relations. Both the effective radius re and the Sérsic index n show negative correlations with the AGN fraction. Their linear fits against the AGN fraction are given by r = −(1.61 ± 0.91)f + (3.38 ± 0.37) and n = −(0.41 ± 0.85)f + (2.52 ± 0.35), where r is the effective radius, n is the Sérsic index, and f is the AGN fraction.
According to these equations, a 20% AGN contribution leads to a 3.3% decrease in the Sérsic index and a 9.5% decrease in re compared with those at a zero AGN fraction, respectively. A 50% AGN contribution corresponds to a Sérsic index decreased by 8.1% and an re decreased by 23.8% compared with the case without any AGN effect. To measure the significance of these correlations, we calculated Pearson's correlation coefficients. For re, the P-value is 0.08 > 0.05 and the correlation coefficient is r = −0.2; for the Sérsic index, the P-value and correlation coefficient are 0.63 ≫ 0.05 and r = −0.06, respectively. This suggests that re is weakly correlated with the AGN fraction, although the correlation is not statistically significant. This weak correlation between re and the AGN fraction agrees with the process of dynamical contraction arising from the central gas inflow. Meanwhile, the Sérsic index, in principle, does not correlate with the AGN fraction at all, which might be because we corrected for the influence of the AGN components and removed extremely compact objects. We also calculated the P-values of the correlations between the AGN fraction and the non-parametric parameters. However, all of them have P-values ≫ 0.05, indicating no arguable correlations between the AGN fraction and the non-parametric parameters.
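The percentage changes quoted above follow directly from the central values of the fitted lines; a quick check (our own helper names, using only the best-fit slopes and intercepts from the text):

```python
def linear_fit(f, slope, intercept):
    """Evaluate a fitted linear relation, e.g. r(f) = slope * f + intercept."""
    return slope * f + intercept

def percent_decrease(f, slope, intercept):
    """Decrease at AGN fraction f relative to f = 0, in percent."""
    base = linear_fit(0.0, slope, intercept)
    return 100.0 * (base - linear_fit(f, slope, intercept)) / base

# Best-fit central values from the text: r = -1.61 f + 3.38, n = -0.41 f + 2.52.
dec_re_20 = percent_decrease(0.2, -1.61, 3.38)   # ~9.5% decrease in r_e
dec_n_20 = percent_decrease(0.2, -0.41, 2.52)    # ~3.3% decrease in n
```

Evaluating the same helpers at f = 0.5 reproduces the 23.8% (re) and 8.1% (n) decreases.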
(2008) with the following definitions for Extended Groth Strip (EGS) galaxies at 0.2 < z < 1.2: Mergers: G > −0.14 M20 + 0.33; E/S0/Sa: G ≤ −0.14 M20 + 0.33 and G > 0.14 M20 + 0.80; and Sb/Sc/Irr: G ≤ −0.14 M20 + 0.33 and G ≤ 0.14 M20 + 0.80. We show the fractions of the different identifications at different redshifts in Table 5. Clearly, most AGN host galaxies at low redshifts resided in the E/S0/Sa region, whereas at higher redshifts, the probability of them being found in the Sb/Sc/Irr region increased. Besides, the fraction of mergers was small within the full redshift range, suggesting that mergers were not the major mechanism triggering AGN activities, consistent with the findings of Chang et al. (2017). However, as the redshift increased, the merger fraction increased as well; a merger might cause a sudden strong gas inflow toward the galaxy center and eventually trigger the AGN activity. In our visual classification, many spiral galaxies showed disturbed features, which might support this implication. The log(G)-log(A) diagnostics is plotted in the right panel of Fig. 13. The total sample had a size of 71 objects, since five objects had negative A values due to high background noise. The division lines were defined by Capak et al. (2007a), who studied the morphology of the galaxies in the COSMOS field at 0 < z < 1.2: the division line between the irregular and spiral classes is log10 A = 2.353 · log10 G + 0.353, and the division line between the spiral and elliptical classes is log10 A = 5.50 · log10 G + 0.825. In the entire redshift range, 65 objects (∼91.5%) were classified as elliptical galaxies, and only six objects (∼8.5%) were spiral galaxies. Further, there were no irregular or merging galaxies. The number of spiral galaxies in this diagnostics is even smaller than that of visually confirmed spirals.
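The division lines quoted above translate directly into classifiers. The sketch below implements both diagnostics exactly as the inequalities are written in the text (the function names are ours, and the M20/A ranges in the comments are just typical values, not from the paper).

```python
import numpy as np

def classify_gini_m20(G, M20):
    """Class from the Gini-M20 plane, using the EGS division lines of
    Lotz et al. (2008) quoted in the text (valid for 0.2 < z < 1.2)."""
    if G > -0.14 * M20 + 0.33:
        return "Merger"
    if G > 0.14 * M20 + 0.80:      # and G <= -0.14*M20 + 0.33 at this point
        return "E/S0/Sa"
    return "Sb/Sc/Irr"

def classify_gini_asym(G, A):
    """Class from the log(Gini)-log(Asymmetry) plane, using the COSMOS
    division lines of Capak et al. (2007a) quoted in the text.
    Requires A > 0 (objects with negative A must be discarded, as in the text)."""
    lg, la = np.log10(G), np.log10(A)
    if la > 2.353 * lg + 0.353:    # above the irregular/spiral line
        return "Irregular"
    if la > 5.50 * lg + 0.825:     # between the two lines
        return "Spiral"
    return "Elliptical"

# Example: a concentrated, symmetric object (G ~ 0.55, M20 ~ -2, A ~ 0.05).
print(classify_gini_m20(0.55, -2.0), classify_gini_asym(0.55, 0.05))
```

This also makes the guard in the text concrete: the five objects with negative A cannot enter the log(G)-log(A) diagnostics at all, since log10 of a negative number is undefined.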
We infer that such distributions could be attributed to the low A and larger G at the same time. The A of these AGN hosts showed only a slight increase compared with normal galaxies, whereas they had much more uneven light distributions. Although in the G − M20 diagnostics 46 objects (∼60.5%) were E/S0/Sa, since this class includes S0 and Sa, we expected a smaller fraction of ellipticals. Further, the 2D Sérsic fitting depends on the underlying mathematical form, which means it cannot be used as an indicator of merging/irregular galaxies. In addition, when we handled these systems, errors might occur and reduce the valid sample size. Therefore, we combined the G − M20 diagnostics and visual classifications (G − M20 + V) to reach the final morphology result for valid objects, which included objects that were successfully measured and objects with visually recognizable morphological structures that failed to pass the selection criteria. In the G − M20 diagnostics, 17 of 22 visually classified ellipticals were classified as E/S0/Sa, whereas only 10 of 20 visually classified spirals were classified as Sb/Sc/Irr. Among the other 10 visual spiral galaxies, six had two visible spiral arms, by which they were classified as Sa. We took the results of the G − M20 diagnostics as the priority and visually reinvestigated the objects with inconsistent classifications between the two methods to avoid misclassifications of the G − M20 diagnostics caused by unenclosed faint substructures. The visually confirmed S0/Sa galaxies were all classified as disk systems, whereas the remaining ones were classified as spheroid systems. The final classification of G − M20 + V is listed in Table 5. As seen in the G − M20 diagnostics, the dominant systems at z < 1.5 were spheroids. Although at z > 1.5 disk galaxies had a higher fraction, this dominance was not conclusive because of the limited sample size.
At z ≤ 1, the ellipticals occupied the absolute majority, and the number of ellipticals at z ≤ 1 was even over half of all ellipticals within the entire redshift range. One possible explanation for this result is that less UV light, which primarily arises from star-forming activities, is received by the F814W filter at lower redshifts. In addition, spheroid systems experienced a size evolution that dropped more steeply along cosmic time than disk systems, resulting in a smaller fraction of detectable ellipticals at higher redshift. This was clearly shown in the valid sample selection, in which ∼70% of the host galaxies were excluded because of PSF-dominance, caused by small re, large n, and bright AGN components.

Figure 13. Left panel: Gini-M20 diagnostics that puts galaxies into three classification regions: merger, E/S0/Sa, and Sb/Sc/Irr. The division lines are obtained from Lotz et al. (2008). Right panel: log(Gini)-log(Asymmetry) diagnostics that roughly classifies the morphology into three basic types: irregular, spiral, and elliptical. We adapted the formulae in Capak et al. (2007a) to divide the regions. Gray histograms in the distribution simply show the overlapped objects.

Table 5. Summary of the classification of AGN hosts.

G − M20 diagnostics (a):
  redshift              E/S0/Sa   Sb/Sc/Irr   Mergers
  0 < z ≤ 1.0 (59)      66.1%     20.0%       0%
  1.0 < z ≤ 1.5 (12)    50.0%     41.7%       8.3%
  1.5 < z ≤ 3.0 (5)     20.0%     60.0%       20.0%
  0 < z < 3.0 (76)      60.5%     36.8%       2.6%

G − M20 diagnostics + visual classification (b):
  redshift              spheroid  disk        merger
  0 < z < 3.0 (102)     46.1%     44.1%       9.8%

Note: (a) Numbers of objects within each redshift bin are shown in parentheses; (b) twenty-six objects with visually confident morphology that did not pass the selection criteria are included in the G − M20 + V classification.

5.2. Implications for Galaxy-AGN Co-evolution

As shown in the upper panel of Fig.
14, the majority of the hosts of our variability-selected AGNs were undergoing star-forming activities. To investigate whether they had stronger SFRs than normal SFGs that lay around the star formation main sequence at the corresponding epochs, we plot the sSFR against the stellar mass in Fig. 15. These star-forming AGN hosts lie within the main sequence if ±1.0 dex is considered. This suggested that there was no intensively enhanced SFR in the spiral galaxies that were intrinsically SFGs. Interestingly, however, for the ellipticals, which are representative of quiescent galaxies with low SFRs, we found that they were around the star formation main sequence, which suggested that their star-forming activities might be triggered by AGNs. Such star-forming elliptical galaxies hosting AGNs might explain a strange pattern shown in the Asymmetry comparison in the lower panel of Fig. 10, i.e., star-forming AGN host galaxies showed more symmetric structures than normal SFGs. The Asymmetry within different SFR bins in the lower panel of Fig. 11 is a complement to this pattern; higher SFRs have only negligible impacts on the global structures of the host galaxies. However, there was an almost equal fraction of disk systems, which should be more asymmetric than ellipticals and show a long tail in Asymmetry, but contribute only slightly to the Asymmetry evolution. Similarly, Reichard et al. (2009) studied ∼25,000 nearby galaxies (z < 0.06) and compared the lopsidedness of non-AGN and AGN galaxy pairs matched in redshift, mass, mass density, and stellar age, but found no significant difference in the lopsidedness of matched galaxy pairs. We interpret the above results as evidence that AGN feedback affects the SFR primarily in the central regions of the host galaxies and that its influence on the entire system is less significant.

Figure 14. Upper: AGN luminosity as a function of the SFR. Lower: AGN luminosity as a function of redshift. The meanings of the symbols are indicated in the panels. For both panels the classifications are based on the combination of the G − M20 diagnostics and visual classification. The dashed lines in the upper and lower panels indicate the median values of log(SFR) and log(LAGN), respectively.

This interpretation is also supported by LaMassa et al. (2013), who studied 28,000 Type 2 Seyfert galaxies at z ≲ 0.3 and found positively correlated AGN luminosity and centrally concentrated star formation. Other than the simple classification of the G − M20 diagnostics, our visual classification revealed that a relatively large fraction of spiral galaxies had disturbed features. This indicated the existence of possible interactions or merger activities and conflicted with Cisternas et al. (2011), who studied the hosts of X-ray-selected Type 1 AGNs at z ∼ 0.3−1.0 and found that 85% of their objects showed normal undisturbed morphological patterns.

Figure 15. Specific star formation rate (sSFR) as a function of galaxy mass. Different types of galaxies, represented by different shapes, are based on the G − M20 diagnostics + visual classification. The different lines show the SFR main sequences at different redshifts, adapted from Chang et al. (2017).

Owing to the limitation of observations, we cannot say with certainty whether these features were attributed to disk instabilities or whether the systems were late-type mergers. Except for the disturbed galaxies, we also found 10 mergers or irregular galaxies in the imaging data. The basic conclusion of a small merger rate at 0 < z < 3.0 agreed with many previous studies arguing that mergers were not the primary triggering mechanism of AGN activities (Elmegreen et al. 2008; Bournaud et al. 2012). However, in the lower panel of Fig.
14, we did find that mergers were more likely to be found at higher redshifts, which agreed with the predictions of cosmological hydrodynamic simulations (Rosas-Guevara et al. 2016). We also checked the AGN fractions, AGN luminosities, SFRs, and stellar masses of the host galaxies, and we found that a galaxy with a higher SFR and AGN luminosity was more likely to be a merger-like object, as shown in the bottom panel of Fig. 14. This finding agreed with the X-ray-selected and HST/WFC3-imaged heavily obscured AGNs at z ∼ 1 (Kocevski et al. 2015). This can be understood as an aspect of the evolution of massive galaxies, i.e., the most massive systems grow mainly through hierarchical merging activities, which are more ubiquitous at younger cosmic times relative to the local universe. The merging galaxies then induce a sudden gas inflow into their central regions that feeds strong AGN activities, compared with a steady gas inflow. Intensive SFRs and luminous AGNs are expected as the products of these major mergers.

6. CONCLUSIONS

In this study, we studied the morphology of the host galaxies of optical variability-selected AGNs at 0 < z < 3 in the COSMOS field. The host morphology was evaluated using parametric (n) and non-parametric (G, M20, C, A) morphological parameters and investigated together with SFRs using HST imaging (∼0.03′′) and SED fitting. Our main conclusions are as follows.

1. The sizes of the host galaxies of the optical variability-selected AGNs up to z ∼ 1 are more compact than normal galaxies at the same redshift and stellar mass range by 35.7%.

2. The host galaxies of these optical variability-selected AGNs have no clear morphological preference, as seen in the number fractions of the disk (∼44.1%) and spheroid (∼46.1%) systems.

3.
Almost all AGNs (∼94.6%) reside in SFGs (median log(SFR) ∼ 0.7, with SFR in M⊙ yr−1) that have a very similar Asymmetry distribution compared with that of normal galaxies, and much more symmetric structures compared with normal SFGs. This can be explained by the fraction of elliptical galaxies (44.9%), suggesting that AGN feedback enhances the star formation of spheroid systems and that the star-forming activities influenced by AGN feedback only vary within a small central scale rather than the entire system.

4. The fraction of major mergers in the variability-selected AGNs is as small as ∼9.8%, which suggests that major mergers are not the main triggering mechanism of AGN activities; however, the merger rate increases with the redshift, the AGN luminosity, and the SFR.

ACKNOWLEDGMENTS

We thank Yu-yen Chang, Mark Sargent, Michel Zamojski, and Junyao Li for providing their data in the literature, and the referee for the useful comments. This work would not have been possible without Vicente Rodriguez-Gomez's technical support. This work used the NASA/IPAC Infrared Science Archive (IRSA) at the NASA Infrared Processing and Analysis Center (IPAC), located on the campus of the California Institute of Technology (Caltech). We gratefully acknowledge the contributions of the entire COSMOS collaboration. The COSMOS team in France acknowledges support from the Centre National d'Études Spatiales.

Software: statmorph (Rodriguez-Gomez et al. 2019); Photutils (Bradley et al. 2020); Tiny Tim (Krist et al. 2011); Astropy (Astropy Collaboration et al. 2013, 2018); X-CIGALE (Boquien et al. 2019; Yang et al. 2020)." + } + ], + "Huiwen Yang": [ + { + "url": "http://arxiv.org/abs/2306.14503v1", + "title": "Sensor Selection for Remote State Estimation with QoS Requirement Constraints", + "abstract": "In this paper, we study the sensor selection problem for remote state\nestimation under the Quality-of-Service (QoS) requirement constraints.
Multiple\nsensors are employed to observe a linear time-invariant system, and their\nmeasurements should be transmitted to a remote estimator for state estimation.\nHowever, due to the limited communication resources and the QoS requirement\nconstraints, only some of the sensors can be allowed to transmit their\nmeasurements. To estimate the system state as accurately as possible, it is\nessential to select sensors for transmission appropriately. We formulate the\nsensor selection problem as a non-convex optimization problem. It is difficult\nto solve such a problem and even to find a feasible solution. To obtain a\nsolution which can achieve good estimation performance, we first reformulate\nand relax the formulated problem. Then, we propose an algorithm based on\nsuccessive convex approximation (SCA) to solve the relaxed problem. By\nutilizing the solution of the relaxed problem, we propose a heuristic sensor\nselection algorithm which can provide a good suboptimal solution. Simulation\nresults are presented to show the effectiveness of the proposed heuristic.", + "authors": "Huiwen Yang, Lingying Huang, Chao Yang, Yilin Mo, Ling Shi", + "published": "2023-06-26", + "updated": "2023-06-26", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.SY" + ], + "main_content": "Introduction Recently, many promising real-life applications, such as smart city, agriculture, and transportation, have benefited from the development of cyber-physical systems (CPS) and the Internet-of-Things (IoT). Remote state estimation plays a very important role in wireless CPS, where sensors can transmit their measurements to a remote estimator via wireless channels and the remote estimator utilizes the received measurement data to estimate the states of a system monitored by the sensors.

⋆ This paper was not presented at any IFAC meeting. Corresponding author: Lingying Huang. ⋆⋆ The work by H. Yang and L. Shi is supported by a Hong Kong RGC General Research Fund 16206620. The work by Y. Mo is supported by the National Natural Science Foundation of China under grant no. 62273196. Email addresses: hyangbr@connect.ust.hk (Huiwen Yang), lingying.huang@ntu.edu.sg (Lingying Huang), yangchao@ecust.edu.cn (Chao Yang), ylmo@tsinghua.edu.cn (Yilin Mo), eesling@ust.hk (Ling Shi). Preprint submitted to Automatica, 27 June 2023. arXiv:2306.14503v1 [eess.SY], 26 Jun 2023.

As the manufacturing cost of sensors has been reduced, an increasing number of sensors have been deployed to expand surveillance coverage and thus provide access to richer data. However, communication resources (spectrum, transmission power, etc.) are usually limited. As a result, it is impossible for a wireless communication system to permit the transmission of all the sensors at the same time, especially when the quantity of sensors is massive. Existing works in the control community have extensively investigated sensor scheduling and resource allocation for remote state estimation with limited communication resources [1, 2]. Wang et al. [1] studied the multichannel allocation problem when the number of available channels is less than the number of sensors. Wu et al. [2] studied the optimal sensor scheduling policy when the communication channels have limited bandwidth. These works formulated their sensor scheduling problems as a Markov decision process (MDP), which is computationally inefficient in multisensor scenarios due to the curse of dimensionality. To handle this problem, they proposed an index-based heuristic to provide an asymptotically optimal policy instead of solving the MDP using numerical algorithms such as value iteration and policy iteration. Some other sensor scheduling problems are formulated as deterministic sensor selection problems [3–8], which are usually non-convex due to the inevitable introduction of integer constraints.
As a result, they either relaxed the non-convex problems to convex programming problems [3] or proposed periodic schedule heuristics, e.g., a branch-and-bound-based algorithm [4], a dynamic-programming-based approach [5], a greedy-based algorithm [7], etc. The works mentioned above all considered scenarios where sensors only have two choices: whether or not to transmit their measurements. However, in real communication systems, sensors can realize more flexible transmission by adjusting their transmission power. Meanwhile, multiple access techniques make it possible to share a limited amount of radio spectrum among multiple users [9], which facilitates the full utilization of communication resources and the promotion of transmission efficiency. One of the fundamental issues for the implementation of multiple access is interference management [10]. Since users are allowed to access the same resource block at the same time, the transmission power of a sensor can cause interference with the transmission of other sensors. Li et al. [11] established a simple game framework for multi-sensor remote state estimation with an energy constraint. Li et al. [12] formulated the power control problem for multi-sensor remote state estimation as a stochastic programming problem, where only non-negative transmission power constraints were considered. Ding et al. [13] investigated interference management for a CPS with primary sensors and potential sensors, whose interaction was formulated as a non-cooperative game. The works mentioned above adopted the signal-to-interference-and-noise ratio (SINR) model to characterize the interference among sensors, and they all considered the packet dropout rate to be a non-increasing function of the SINR. The packet dropout rate was used to calculate the expected estimation error covariance, which appeared in the objectives of the studied problems.
However, by utilizing modulation and coding, error-free decoding can be achieved if and only if the SINR at the receiver side is above a threshold, which is determined by the adopted modulation and coding schemes [14]. Such a predefined threshold is called the Quality-of-Service (QoS) requirement [15]. In this scenario, the reception of the signals transmitted by the sensors becomes a deterministic process rather than a random process, which is significantly different from the previous problem formulations. In the communication community, researchers have made great efforts to investigate interference management problems with QoS requirement constraints [15–19]. As the mutual interference among users degrades the SINR performance, a communication system is probably unable to ensure that all users can transmit their data successfully, i.e., not all users' QoS requirements can be satisfied, which is reflected in the fact that there may be no feasible solution for the optimization problem considering the satisfaction of all users' QoS requirements. Therefore, it is necessary to select a subset of users, whose QoS requirements can be satisfied simultaneously, to transmit their data. Tang and Feng [17] proposed a minimum-mean-square-error (MMSE) based user selection algorithm whose core idea is to minimize the gaps between users' achievable SINRs and their QoS requirements. Later, Xia et al. [19] jointly selected users and designed the transmitter parameters based on a similar idea to [17]. However, these methods can only maximize the number of users whose QoS requirements can be satisfied, which is not necessary for remote state estimation with multiple sensors. The reason is that the number of selected sensors is not the determining factor of the estimation performance. Intuitively, it can be more worthwhile to select one sensor that has accurate measurements rather than several sensors whose measurements are extremely inaccurate.
Therefore, the existing user selection algorithms in the communication community are inapplicable to sensor selection for remote state estimation. In this paper, we consider remote state estimation with real-time QoS requirement constraints. The objective of the remote estimator is to estimate the system state as accurately as possible by collecting measurement data from different sensors. However, due to the QoS constraints, the communication system may be unable to guarantee the transmission of all the sensors at the same time. Therefore, at each time slot, the sensors whose transmission is allowed should be selected according to the current estimation error covariance, the channel states, and the real-time QoS requirements. The main contributions of this paper are summarized as follows:

1) We study the sensor selection problem for remote state estimation under the constraints of sensors' QoS requirements. This problem exists in some practical communication systems, but has not yet been considered in the existing works. Due to the QoS requirement constraints, both continuous variables (transmission power) and discrete variables (decisions of selection) should be optimized in the sensor selection problem. Moreover, the QoS requirements introduce more non-convexity to the problem. Compared with the existing sensor selection problems where only discrete (binary) decision variables were optimized, our problem is more complicated and computationally expensive. Compared with [17–19], which only maximize the number of users, we take the accuracy of sensors' measurements into account. As a result, a matrix optimization variable, which has a coupling relationship with the discrete decision variables, is introduced to characterize the estimation error. Hence, it becomes challenging to solve the sensor selection problem.

Fig. 1. An instance of sensors' transmission (legend: remote estimator, selected sensor, unselected sensor, signal, interference).
2) We formulate the sensor selection problem as a non-convex optimization problem with an equality of matrices and integer (binary) constraints, and provide a heuristic to solve the formulated problem. Specifically, we first adopt linear relaxation to relax the binary constraints and transform the equality of matrices into a linear matrix inequality. Then, we adopt successive convex approximation to transform the relaxed problem into a series of convex optimization problems. By utilizing the solution of the relaxed problem obtained by the SCA-based algorithm, we propose a heuristic sensor selection algorithm based on the concept of the assimilated sensing precision matrix. Simulation results show that the proposed heuristic outperforms the existing methods.

The remainder of this paper is organized as follows. Section II presents the mathematical setup and description of the problem. In Section III, an algorithm for solving the formulated problem is provided and analyzed. In Section IV, the simulation results are presented to verify the effectiveness of the algorithm proposed in Section III. Section V concludes this paper and presents some future work.

Notations: R is the set of real numbers, R^n is the n-dimensional Euclidean space, and R^{n×m} is the set of real matrices of size n × m. For a matrix X, X > 0 (X ≥ 0) denotes that X is a positive definite (positive semidefinite) matrix. Tr{·} denotes the trace of a matrix. E[·] is the expectation of a random variable. E[·|·] refers to conditional expectation.

Fig. 2.
System model: each sensor i (i = 1, 2, . . . , N), characterized by h_i, p_i, θ_i, C_i, and R_i, observes the process and transmits to the estimator under the decision γ_i.

2 Problem Setup

2.1 System Model

We consider a discrete-time linear time-invariant (LTI) system monitored by N sensors:

x_{k+1} = A x_k + w_k, (1)
y_k^i = C_i x_k + v_k^i, i = 1, 2, . . . , N, (2)

where x_k ∈ R^n is the system state at time k and y_k^i ∈ R^{m_i} is the measurement taken by sensor i. w_k ∈ R^n and v_k^i ∈ R^{m_i}, i = 1, 2, . . . , N, are independent zero-mean Gaussian noises with E[w_k w_l^T] = δ_{kl} Q (Q ≥ 0), E[v_k^i (v_l^j)^T] = δ_{ij} δ_{kl} R_i (R_i > 0) ∀i, j ∈ {1, 2, . . . , N}, and E[w_k (v_l^i)^T] = 0 ∀i ∈ {1, 2, . . . , N}, k, l ∈ N. The initial state x_0 is Gaussian with mean x̄_0 and covariance P_0. The pairs (A, C_i), i = 1, 2, . . . , N, are assumed to be observable and (A, Q^{1/2}) is controllable.

2.2 Transmission Model

In this system, the remote estimator and the sensors are equipped with a single antenna. Sensor i can send its measurement y_k^i to the remote estimator with transmission power p_k^i ∈ [0, p_max^i] over the fading channel denoted by h_k^i, which is modeled as

h_k^i = (T_k^i)^{1/2} g_k^i, (3)

where p_max^i is the maximum transmission power of sensor i, g_k^i denotes the small-scale fading at time k, and T_k^i denotes the large-scale fading, which is modeled as

T_k^i = c_i f_k^i l_i, (4)

where c_i is the path gain constant, f_k^i is the shadow fading at time k, and l_i is the pathloss. For sensor i, the signals transmitted by other sensors at time k will be regarded as interference. Consequently, the signal-to-interference-and-noise ratio (SINR) of the signal transmitted by sensor i at time k is defined as

SINR_k^i = h_k^i p_k^i / (Σ_{j≠i} h_k^j p_k^j + σ²), (5)

where σ² is the channel noise. To successfully decode the signal transmitted by sensor i, the following condition should be satisfied:

SINR_k^i ≥ θ_i, (6)

where θ_i is the SINR quality-of-service (QoS) requirement for sensor i.
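The system and measurement models (1)-(2) can be simulated directly. The sketch below is a minimal example; all numbers (A, Q, the C_i and R_i matrices) are our own illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative 2-state system with N = 3 sensors (numbers are assumptions).
A = np.array([[1.0, 0.1], [0.0, 0.95]])
Q = 0.01 * np.eye(2)                          # process noise covariance
C = [np.array([[1.0, 0.0]]),                  # C_i for each sensor i
     np.array([[0.0, 1.0]]),
     np.array([[1.0, 1.0]])]
R = [np.array([[0.04]]) for _ in range(3)]    # measurement noise covariances R_i

def step(x):
    # x_{k+1} = A x_k + w_k  with  w_k ~ N(0, Q), eq. (1)
    w = rng.multivariate_normal(np.zeros(2), Q)
    return A @ x + w

def measure(x, i):
    # y_k^i = C_i x_k + v_k^i  with  v_k^i ~ N(0, R_i), eq. (2)
    v = rng.multivariate_normal(np.zeros(1), R[i])
    return C[i] @ x + v

x = np.array([1.0, -0.5])
for k in range(5):
    x = step(x)
ys = [measure(x, i) for i in range(3)]
print([float(y[0]) for y in ys])
```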
Note that the condition (6) is widely used to guarantee successful data decoding in the wireless communication field [16]. Due to the limited communication resources and the interference among the sensors, the condition (6) may not be satisfied for every sensor. Therefore, at time k, it should be decided which sensors send their measurements. Define the transmission variable γ_k^i as

γ_k^i = 1 if y_k^i is sent by sensor i at time k, and γ_k^i = 0 otherwise. (7)

Furthermore, if sensor i is selected, it should satisfy the condition (6) to guarantee a successful transmission; otherwise, there will be no constraint on the SINR of sensor i, i.e., the following condition should be satisfied:

SINR_k^i ≥ γ_k^i θ_i. (8)

Assumption 1 At each time step k, there exists at least one sensor i such that h_k^i p_max^i / σ² ≥ θ_i.

Remark 1 Assumption 1 means that there is at least one sensor whose QoS requirement can be satisfied. Otherwise, none of the sensors can be selected, i.e., γ_k^i = 0, ∀i.

2.3 Estimation Process

After receiving the measurements from the sensors, the remote estimator runs a Kalman filter to obtain the minimum mean-squared error (MMSE) estimate of the system state x_k. Define y_k ≜ (γ_k^1 (y_k^1)^T, . . . , γ_k^N (y_k^N)^T)^T and let L_k ≜ {y_1, . . . , y_k}. Then, define the priori estimate of x_k and its corresponding error covariance as follows:

x̂_{k|k−1} ≜ E[x_k | L_{k−1}], (9)
P_{k|k−1} ≜ E[(x_k − x̂_{k|k−1})(x_k − x̂_{k|k−1})^T | L_{k−1}], (10)

and define the posteriori estimate of x_k and its corresponding error covariance as follows:

x̂_{k|k} ≜ E[x_k | L_k], (11)
P_{k|k} ≜ E[(x_k − x̂_{k|k})(x_k − x̂_{k|k})^T | L_k].
(12)

The priori and posteriori estimates follow the update steps below:

x̂_{k|k−1} = A x̂_{k−1|k−1}, (13)
P_{k|k−1} = A P_{k−1|k−1} A^T + Q, (14)
x̂_{k|k} = x̂_{k|k−1} + K_k (y_k − C̃_k x̂_{k|k−1}), (15)
P_{k|k} = P_{k|k−1} − K_k C̃_k P_{k|k−1}, (16)
K_k = P_{k|k−1} C̃_k^T (C̃_k P_{k|k−1} C̃_k^T + R̃_k)†, (17)

where C̃_k ≜ (γ_k^1 C_1^T, . . . , γ_k^N C_N^T)^T, R̃_k ≜ diag{γ_k^1 R_1, . . . , γ_k^N R_N}, and † represents the Moore-Penrose pseudo-inverse. In the subsequent analysis, we will write P_{k|k} as P_k for simplicity whenever there is no confusion.

Remark 2 Note that the matrix C̃_k P_{k|k−1} C̃_k^T + R̃_k is invertible if and only if γ_k^i = 1, ∀i. If not all sensors are selected at time k, i.e., ∃i s.t. γ_k^i = 0, the matrix C̃_k P_{k|k−1} C̃_k^T + R̃_k becomes singular. Since the Moore-Penrose pseudo-inverse of an invertible matrix is its inverse, we can use the Moore-Penrose pseudo-inverse here to cover all cases [8].

2.4 Problem Description

In the following, we will formulate an optimization problem, in which the transmission power of the sensors and the selection variables are jointly optimized, to decide which sensors should transmit their measurements at time k. The set of candidate sensors S is initialized as {1, 2, . . . , N}, but some sensors should be removed from S due to the restriction of communication resources. In practice, sensors usually have limited transmission power, i.e.,

p_k^i ∈ [0, p_max^i], ∀i ∈ S. (18)

The selection variables γ_k^i (i = 1, 2, . . . , N) are 0-1 binary decision variables, i.e.,

γ_k^i ∈ {0, 1}, ∀i ∈ S.
(19)

If a sensor is selected to transmit its measurement at time k, the corresponding SINR QoS requirement must be satisfied so that the transmitted measurement can be received by the remote estimator successfully, i.e., the following constraints should be satisfied:

h_k^i p_k^i / (Σ_{j≠i} h_k^j p_k^j + σ²) ≥ γ_k^i θ_i, ∀i ∈ S. (20)

Remark 3 By utilizing modulation and coding, error-free decoding can be achieved if and only if the SINR at the receiver side is above a threshold, which is determined by the adopted modulation and coding schemes [14]. Such a predefined threshold is called the QoS requirement [15].

At time k, the system objective is to minimize the estimation error, which can be characterized as minimizing the trace of the error covariance of the state estimate P_k. Therefore, we have the following optimization problem:

Problem 1

min_{γ_k, p_k} Tr{P_k}
s.t. h_k^i p_k^i / (Σ_{j≠i} h_k^j p_k^j + σ²) ≥ γ_k^i θ_i, ∀i ∈ S,
     p_k^i ∈ [0, p_max^i], ∀i ∈ S,
     γ_k^i ∈ {0, 1}, ∀i ∈ S,
     P_k = P_{k|k−1} − K_k C̃_k P_{k|k−1},
     K_k = P_{k|k−1} C̃_k^T (C̃_k P_{k|k−1} C̃_k^T + R̃_k)†,

where γ_k ≜ [γ_k^1, γ_k^2, . . . , γ_k^N], p_k ≜ [p_k^1, p_k^2, . . . , p_k^N], and P_{k|k−1} = A P_{k−1} A^T + Q. Moreover, P_{k−1} and h_k^i, ∀i, are known by the estimator at time k.

Remark 4 In practice, the channel state information (CSI) h_k^i, ∀i, can be obtained by the estimator through channel estimation [20], which can be accomplished by transmitting pilot sequences from each sensor.

3 Main Results

Due to the binary and non-convex constraints, it is extremely difficult to derive the optimal solution of Problem 1. In this section, we will propose an efficient algorithm which can obtain a suboptimal solution of Problem 1. Fig. 3 illustrates the relationship among the main results derived in this section.
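Given a candidate pair (γ_k, p_k), feasibility with respect to the SINR constraints (5), (8), and (20) is a simple deterministic check. A minimal sketch (the function names and all numbers are our own illustrative assumptions):

```python
import numpy as np

def sinr(h, p, i, noise_var):
    # SINR_k^i = h_i p_i / (sum_{j != i} h_j p_j + sigma^2), eq. (5)
    interference = sum(h[j] * p[j] for j in range(len(h)) if j != i)
    return h[i] * p[i] / (interference + noise_var)

def qos_satisfied(h, p, gamma, theta, noise_var):
    # Constraint (8)/(20): SINR_k^i >= gamma_i * theta_i for every sensor i;
    # unselected sensors (gamma_i = 0) impose no SINR requirement.
    return all(sinr(h, p, i, noise_var) >= gamma[i] * theta[i]
               for i in range(len(h)))

# Toy numbers (assumptions): 3 sensors, sensors 0 and 2 selected, sensor 1 silent.
h = [0.9, 0.4, 0.7]            # channel gains h_k^i
p = [1.0, 0.0, 0.8]            # transmission powers p_k^i
gamma = [1, 0, 1]
theta = [0.5, 0.5, 0.5]        # QoS thresholds theta_i
print(qos_satisfied(h, p, gamma, theta, noise_var=0.1))
```

Note the mutual-interference effect the section describes: with these same powers, also requiring sensor 1's QoS (gamma = [1, 1, 1]) makes the check fail, since sensor 1 transmits nothing yet would need a positive SINR.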
Fig. 3. Relationship among the main results: Problem 1 --(explicit LMI, Proposition 1)--> Problem 2 --(linear relaxation)--> Problem 3 --(SCA)--> Problem 4 --> KKT point (Algorithm 1) --> suboptimal solution (Algorithm 2)

3.1 Problem Reformulation and Relaxation

We first eliminate the non-convexity of P_k = P_{k|k-1} - K_k \tilde{C}_k P_{k|k-1} in the following Proposition 1:

Proposition 1: Problem 1 is equivalent to the following problem:

Problem 2:
\min_{\gamma_k, p_k, P_k} \mathrm{Tr}\{P_k\}
s.t. \begin{bmatrix} (A P_{k-1} A^T + Q)^{-1} + \sum_{i=1}^N \gamma_k^i C_i^T R_i^{-1} C_i & I \\ I & P_k \end{bmatrix} \geq 0,
\frac{h_k^i p_k^i}{\sum_{j \neq i} h_k^j p_k^j + \sigma^2} \geq \gamma_k^i \theta_i, \forall i \in S,
p_k^i \in [0, p_{\max}^i], \forall i \in S,
\gamma_k^i \in \{0, 1\}, \forall i \in S.

Proof 1: First of all, we introduce two lemmas:

Lemma 1 (Matrix Inversion Lemma): Let A and D be square, invertible matrices of size n_A \times n_A and n_D \times n_D, and let B and C be matrices of size n_A \times n_D and n_D \times n_A. Then the following equation holds:
(A + B D^{-1} C)^{-1} = A^{-1} - A^{-1} B (D + C A^{-1} B)^{-1} C A^{-1}.

Lemma 2 (Schur Complement): Let S be a symmetric matrix of the form
S = \begin{bmatrix} A & B \\ B^T & C \end{bmatrix}.
Then, if A is positive definite, we have S \geq 0 \Leftrightarrow C - B^T A^{-1} B \geq 0.

Define \hat{C}_k = (C_{i_1}^T, C_{i_2}^T, \ldots, C_{i_{m_k}}^T)^T and \hat{R}_k = \mathrm{diag}\{R_{i_1}, R_{i_2}, \ldots, R_{i_{m_k}}\}, where i_m \in \{i \in S \mid \gamma_k^i = 1\}, \forall m \in \{1, 2, \ldots, m_k\}.
According to the definition of Moore-Penrose pseudoinverse, we have \u02dc CT k \u0010 \u02dc CkPk|k\u22121 \u02dc CT k + \u02dc Rk \u0011\u2020 \u02dc Ck = \u02c6 CT k \u0010 \u02c6 CkPk|k\u22121 \u02c6 CT k + \u02c6 Rk \u0011\u22121 \u02c6 Ck, (21) 5 \fand hence we have Pk = Pk|k\u22121 \u2212Kk \u02dc CkPk|k\u22121 = Pk|k\u22121 \u2212Pk|k\u22121 \u02dc CT k \u0010 \u02dc CkPk|k\u22121 \u02dc CT k + \u02dc Rk \u0011\u2020 \u02dc CkPk|k\u22121 = Pk|k\u22121 \u2212Pk|k\u22121 \u02c6 CT k \u0010 \u02c6 CkPk|k\u22121 \u02c6 CT k + \u02c6 Rk \u0011\u22121 \u02c6 CkPk|k\u22121 = \u0010 P \u22121 k|k\u22121 + \u02c6 CT k \u02c6 R\u22121 k \u02c6 Ck \u0011\u22121 = \u0000APk\u22121AT + Q \u0001\u22121 + N X i=1 \u03b3i kCT i R\u22121 i Ci !\u22121 , (22) where the fourth equality follows from Matrix Inversion Lemma. Then, Problem 1 can be rewritten as min \u03b3k,pk Tr{Pk} s.t. Pk = (APk\u22121AT + Q)\u22121 + N X i=1 \u03b3i kCT i R\u22121 i Ci !\u22121 , hi kpi k P j\u0338=i hj kpj k + \u03c32 \u2265\u03b3i k\u03b8i, \u2200i \u2208S, pi k \u2208[0, pi max], \u2200i \u2208S, \u03b3i k \u2208{0, 1}, \u2200i \u2208S. which is equivalent to the following problem: Problem 2.1: min \u03b3k,pk,PkTr{Pk} s.t. Pk \u2265 (APk\u22121AT + Q)\u22121 + N X i=1 \u03b3i kCT i R\u22121 i Ci !\u22121 , hi kpi k P j\u0338=i hj kpj k + \u03c32 \u2265\u03b3i k\u03b8i, \u2200i \u2208S, pi k \u2208[0, pi max], \u2200i \u2208S, \u03b3i k \u2208{0, 1}, \u2200i \u2208S. The equivalence is derived from the fact that the constraint Pk \u2265 \u0010 (APk\u22121AT + Q)\u22121 + PN i=1 \u03b3i kCT i R\u22121 i Ci \u0011\u22121 in Problem 2.1 must be satisfied with equality. Otherwise, we can always decrease the objective value by replacing Pk with \u0010 (APk\u22121AT + Q)\u22121 + PN i=1 \u03b3i kCT i R\u22121 i Ci \u0011\u22121 . Due to Q \u22650, Ri > 0, \u2200i, and Assumption 1, we have (APk\u22121AT + Q)\u22121 + N X i=1 \u03b3i kCT i R\u22121 i Ci > 0. 
(23) Similar to [21], by Lemma 2, we have Pk \u2265 \u0000(APk\u22121AT + Q)\u22121 + PN i=1 \u03b3i kCT i R\u22121 i Ci \u0011\u22121 is equivalent to \" (APk\u22121AT + Q)\u22121 + PN i=1 \u03b3i kCT i R\u22121 i Ci I I Pk # \u22650. (24) The proof is thus completed. To deal with the integer (binary) constraints, we adopt the linear relaxation, which is often considered in the literature [4,8,22]. Specifically, we replace the constraints \u03b3i k \u2208{0, 1}, \u2200i with \u03b3i k \u2208[0, 1], \u2200i. Then, we have the following relaxed problem: Problem 3 min \u03b3k,pk,PkTr{Pk} s.t. \" (APk\u22121AT + Q)\u22121 + PN i=1 \u03b3i kCT i R\u22121 i Ci I I Pk # \u22650, hi kpi k P j\u0338=i hj kpj k + \u03c32 \u2265\u03b3i k\u03b8i, \u2200i \u2208S, pi k \u2208[0, pi max], \u2200i \u2208S, \u03b3i k \u2208[0, 1], \u2200i \u2208S. Remark 5 Note that after the linear relaxation for variables \u03b3i k, the constraint (24) is a linear matrix inequality (LMI), which is convex. However, the non-convexity of the constraint (20) has not been eliminated yet. Therefore, further efforts should be made to deal with this issue. 3.2 Algorithm for Solving Problem 3 As discussed above, Problem 3 is still non-convex due to the constraint (20). In this subsection, we propose an iterative algorithm based on SCA technique [23\u201325], where a series of convex problems will be solved to approximate the optimal solution of the non-convex Problem 3. In each iteration, a convex surrogate problem will be constructed based on the optimal solution obtained in the last iteration. Then, the new surrogate problem will be solved in the subsequent iteration. Eventually, the solution of the surrogate problem will converge to a solution satisfying the KKT conditions of Problem 3. In the following, we will show how to construct such surrogate problems. 
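The LMI constraint in Problem 3 can be checked numerically: by Lemma 2 (Schur complement), the block matrix is positive semidefinite exactly when P_k dominates the inverse of the information matrix. A minimal sketch, with an arbitrary illustrative SPD matrix standing in for the information term:

```python
import numpy as np

# Schur-complement check of the LMI in Problems 2-4: the block matrix
# [[M, I], [I, P]] is PSD iff P - M^{-1} is PSD (for M positive definite).
# M stands for (A P_{k-1} A^T + Q)^{-1} + sum_i gamma_i C_i^T R_i^{-1} C_i;
# the value below is an arbitrary SPD matrix for illustration.
M = np.array([[2.0, 0.3],
              [0.3, 1.5]])
I2 = np.eye(2)

def lmi_min_eig(P):
    block = np.block([[M, I2], [I2, P]])
    return np.linalg.eigvalsh(block).min()

P_star = np.linalg.inv(M)                        # boundary case: P = M^{-1}
print(lmi_min_eig(P_star + 0.1 * I2) >= -1e-9)   # True  (LMI feasible)
print(lmi_min_eig(P_star - 0.1 * I2) < 0)        # True  (LMI infeasible)
```

This is why the optimum of Problem 2.1 attains P_k = M^{-1}: any feasible P_k with a larger trace can be shrunk down to the boundary.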
By introducing a set of auxiliary variables \{\eta_k^i, \forall i\}, the constraint (20) can be equivalently rewritten as the following two constraints:

\sum_{j \neq i} h_k^j p_k^j + \sigma^2 \leq \eta_k^i, \forall i, (25)
\eta_k^i \gamma_k^i \theta_i \leq h_k^i p_k^i, \forall i. (26)

The constraint (25) is a convex (linear) constraint, but the constraint (26) is still non-convex. From the fact that 4 \eta_k^i \gamma_k^i \leq 4 \eta_k^i \gamma_k^i + (\eta_k^i - \gamma_k^i - (b_k^i)^{(t)})^2, we can see that the non-convex term \eta_k^i \gamma_k^i has the following convex upper bound:

\eta_k^i \gamma_k^i \leq \frac{(\eta_k^i + \gamma_k^i)^2 - 2 (b_k^i)^{(t)} (\eta_k^i - \gamma_k^i) + ((b_k^i)^{(t)})^2}{4}, (27)

where (b_k^i)^{(t)} = (\eta_k^i)^{(t)} - (\gamma_k^i)^{(t)}, and (\eta_k^i)^{(t)} and (\gamma_k^i)^{(t)} are the feasible points in the t-th iteration. Then Problem 3 can be approximated by the following convex surrogate problem in the (t+1)-th iteration:

Problem 4:
\min_{\{\gamma_k, \eta_k, p_k, P_k\}^{(t+1)}} \mathrm{Tr}\{P_k\}
s.t. \begin{bmatrix} (A P_{k-1} A^T + Q)^{-1} + \sum_{i=1}^N \gamma_k^i C_i^T R_i^{-1} C_i & I \\ I & P_k \end{bmatrix} \geq 0,
\sum_{j \neq i} h_k^j p_k^j + \sigma^2 \leq \eta_k^i, \forall i \in S,
(\eta_k^i + \gamma_k^i)^2 - 2 (b_k^i)^{(t)} (\eta_k^i - \gamma_k^i) + ((b_k^i)^{(t)})^2 \leq \frac{4}{\theta_i} h_k^i p_k^i, \forall i \in S,
p_k^i \in [0, p_{\max}^i], \forall i \in S,
\gamma_k^i \in [0, 1], \forall i \in S.

The SCA-based algorithm for solving Problem 3 is summarized in Algorithm 1:

Algorithm 1 SCA-based algorithm for solving Problem 3
Set the initial values for \{\gamma_k, \eta_k, p_k, P_k\}^{(0)} and set t = 0;
repeat
  Update (b_k^i)^{(t)} = (\eta_k^i)^{(t)} - (\gamma_k^i)^{(t)};
  Solve Problem 4 to obtain \{\gamma_k, \eta_k, p_k, P_k\}^{(t+1)};
  t = t + 1;
until convergence

Proposition 2: Monotonic convergence of Algorithm 1 is guaranteed. Moreover, the converged solution satisfies all the constraints as well as the KKT conditions of Problem 3.

Proof 2: See Appendix.
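The convex majorization (27) that drives the surrogate — eta*gamma bounded by ((eta+gamma)^2 - 2b(eta-gamma) + b^2)/4, with equality at b = eta - gamma — can be verified on random points. This is a pure sanity check, not part of the paper's algorithm:

```python
import random

# Verify the convex upper bound (Eq. 27) used by the SCA surrogate:
#   eta*gamma <= ((eta+gamma)^2 - 2*b*(eta-gamma) + b^2) / 4,
# with equality when b = eta - gamma (the previous iterate's value).
def surrogate_bound(eta, gamma, b):
    return ((eta + gamma) ** 2 - 2 * b * (eta - gamma) + b ** 2) / 4

random.seed(0)
for _ in range(10000):
    eta = random.uniform(0.0, 5.0)     # auxiliary interference bound
    gamma = random.uniform(0.0, 1.0)   # relaxed selection variable
    b = random.uniform(-5.0, 5.0)      # linearization point eta^(t)-gamma^(t)
    # the gap is (eta - gamma - b)^2 / 4 >= 0
    assert surrogate_bound(eta, gamma, b) >= eta * gamma - 1e-9
    # tightness at the previous iterate: bound equals eta*gamma exactly
    assert abs(surrogate_bound(eta, gamma, eta - gamma) - eta * gamma) < 1e-9
print("bound verified")
```

The tightness at the previous iterate is exactly what makes conditions (A.2), (A.3) in the convergence proof hold.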
Remark 6 The initial values of {\u03b3k, \u03b7k, pk, Pk}(0) must be feasible for Problem 3. We can simply set \u03b3i k = 0, pi k = 0, \u03b7i k = \u03c32, \u2200i and Pk = APk\u22121AT + Q. Note that the initial values will influence the number of iterations. Remark 7 Generally, a KKT point of a non-convex problem can be a global minimum, a local minimum, a saddle point, or even a maximum. Usually, obtaining the global minimum of a non-convex problem is considered NP-hard. Therefore, the majority of efforts in solving non-convex problems aim to obtain a local minimum [17, 18]. According to [26, Corollary 1], the solution obtained by Algorithm 1 is not only a KKT point of Problem 3, but also a local minimum to Problem 3. The local minimum obtained by Algorithm 1 may vary with the initial point. In Section 4, the results were derived with the initial values mentioned in Remark 6. 3.3 Heuristic for Sensor Selection So far, Problem 3 has been solved by Algorithm 1. However, the values of the transmission variables \u03b3i k, \u2200i obtained by solving Problem 3 are continuous rather than binary due to the linear relaxation. That is to say, there is still a gap before we obtain the solution to the original problem, i.e., Problem 1. Based on the above analysis, there is no doubt that it is difficult to find a feasible solution to Problem 1, let alone to find the global optimal solution. Therefore, in this subsection, we will provide a heuristic to give a suboptimal solution to Problem 1. Define two functions h : S+ n \u2192S+ n and g(X; S) : S+ n \u00d7 S+ n \u2192S+ n : h(X) \u225cAXAT + Q, (28) g(X; S) \u225c([h(X)]\u22121 + S)\u22121. (29) It is easy to see that g(X; S) is matrix monotonically decreasing with respect to S in S+ n . 
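The claimed matrix monotonicity of g(X; S) in S — adding sensing precision can only shrink the posterior covariance — can be checked numerically. The matrices below are illustrative only:

```python
import numpy as np

# Check that g(X; S) = ([h(X)]^{-1} + S)^{-1}, with h(X) = A X A^T + Q,
# is matrix-monotonically decreasing in S: if S2 >= S1 (PSD order),
# then g(X; S1) - g(X; S2) is PSD. Matrices below are illustrative.
A = np.array([[1.1, 0.0],
              [0.0, 0.9]])
Q = np.eye(2)

def h(X):
    return A @ X @ A.T + Q

def g(X, S):
    return np.linalg.inv(np.linalg.inv(h(X)) + S)

X = np.eye(2)
S1 = 0.5 * np.eye(2)
S2 = S1 + np.array([[1.0, 0.2],
                    [0.2, 0.5]])   # S2 - S1 is positive definite

diff = g(X, S1) - g(X, S2)
print(np.linalg.eigvalsh(diff).min() >= -1e-9)  # True: g decreased
```

This monotonicity is what justifies ranking sensors by how much precision they add to the information term.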
By defining \gamma_k^i C_i^T R_i^{-1} C_i as the assimilated sensing precision matrix of sensor i, and based on the fact that P_k = g(P_{k-1}, \sum_{i=1}^N \gamma_k^i C_i^T R_i^{-1} C_i), we can heuristically select the sensors with "larger" assimilated sensing precision matrices. In the following, we take the trace of the assimilated sensing precision matrix, i.e., \mathrm{Tr}\{\gamma_k^i C_i^T R_i^{-1} C_i\}, as the selection metric. We say that the assimilated sensing precision matrix of sensor i is "larger" than that of sensor j if \mathrm{Tr}\{\gamma_k^i C_i^T R_i^{-1} C_i\} > \mathrm{Tr}\{\gamma_k^j C_j^T R_j^{-1} C_j\}.

The intuition behind using \mathrm{Tr}\{\gamma_k^i C_i^T R_i^{-1} C_i\} as the selection metric is to jointly consider how far a sensor's QoS requirement can be satisfied and how much the sensor can contribute to the estimation accuracy. The extent to which the QoS requirement can be satisfied is characterized by \gamma_k^i. If \gamma_k^i = 1, the QoS requirement of sensor i can be fully satisfied. If \gamma_k^i < 1, a larger value of \gamma_k^i indicates that the QoS requirement of sensor i is more likely to be satisfied, which further implies that sensor i is more capable of transmission. The contribution of sensor i to the estimation accuracy is characterized by the matrix C_i^T R_i^{-1} C_i. Specifically, if C_i^T R_i^{-1} C_i > C_j^T R_j^{-1} C_j (i \neq j), then sensor i is credited with a greater contribution to the estimation accuracy than sensor j. Therefore, to combine these two considerations, and to facilitate calculation and numerical comparison, the trace of the assimilated sensing precision matrix is adopted as the selection metric. The whole algorithm for sensor selection is summarized in Algorithm 2. In each iteration, the sensor with the smallest trace of the assimilated sensing precision matrix is removed from the candidate set S.
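The outer loop described above can be sketched as follows. The relaxed solve (Algorithm 1) requires an SDP solver, so it is replaced here by a mock `solve_relaxed` that pretends the channel supports at most two simultaneous transmissions; only the removal logic and the trace metric follow the text, and all sensor data are hypothetical:

```python
# Greedy outer loop of the sensor selection heuristic (Algorithm 2).
# solve_relaxed is a MOCK stand-in for Algorithm 1 (which solves the
# relaxed Problem 3 via SCA); here it pretends the channel can support
# at most CAP simultaneous transmissions. Sensor data are hypothetical.
C = {1: 1.0, 2: 1.0, 3: 1.0, 4: 1.0}      # scalar measurement gains
R = {1: 0.5, 2: 0.2, 3: 0.15, 4: 1.0}     # measurement noise variances
prec = {i: C[i] ** 2 / R[i] for i in C}   # Tr{C_i^T R_i^{-1} C_i}
CAP = 2

def solve_relaxed(S):
    if len(S) <= CAP:
        return {i: 1.0 for i in S}        # all QoS requirements met
    return {i: 0.5 for i in S}            # fractional relaxation values

def select_sensors(sensors):
    S = set(sensors)
    while True:
        gamma = solve_relaxed(S)
        if all(abs(gamma[i] - 1.0) < 1e-9 for i in S):
            return S                      # every gamma_i reached 1
        # drop the sensor with the smallest assimilated precision trace
        s = min(S, key=lambda i: gamma[i] * prec[i])
        S.remove(s)

print(select_sensors({1, 2, 3, 4}))  # {2, 3}: the two most precise sensors
```

With this mock the fractional gammas are uniform, so the metric reduces to pure precision ranking; in the real algorithm the gammas from the SCA solve also encode how easily each sensor's QoS constraint can be met.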
Note that if the QoS requirements of all the sensors in S can be satisfied with the communication resources of the system, then \u03b3i k(\u2200i \u2208S) will be 1 so that the objective value, i.e., Tr{Pk}, can be minimized subject to the communication resource constraints. That is, if there exists some \u03b7k, pk and Pk such that {\u03b3k, \u03b7k, pk, Pk}, where \u03b3k = [1, 1, . . . , 1], is a feasible solution for Problem 4, then it must be the optimal solution. Due to interference between the sensors, the system communication resources may not be enough to accommodate the transmission of all the sensors in S under the QoS constraints. As a result, there will exist i \u2208S such that \u03b3i k < 1. As more and more sensors are removed from the candidate set S, the interference between sensors can be reduced such that the QoS requirements of the remaining sensors can be satisfied. Gradually, \u03b3i k \u2208S will approach 1. Therefore, the algorithm will end when \u03b3i k = 1, \u2200i \u2208S, and the sensors in the candidate set S will be selected at time k. Algorithm 2 Sensor Selection Algorithm Initialize the set of candidate sensors S = {1, 2, . . . , N}; loop Solve Problem 3 for the system including the sensors in S via Algorithm 1; if \u03b3i k = 1, \u2200i \u2208S then return S. else s = arg mini\u2208S \b Tr{\u03b3i kCT i R\u22121 i Ci} \t ; S = S\\s; end if end loop 4 Simulation Results In the following simulations, all sensors are randomly deployed in a circular area with a radius of 2km, and the remote estimator is located at the center of the circle. The shadow fading is modeled as a random variable following log-normal distribution with zero mean and 8dB standard deviation. The path gain constant ci(i = 1, 2, . . . , N) are set to 1. The pathloss is modeled as li(di) = \u2212147.3 \u221243.3 log(di)dB, where di is the distance between the remote estimator and sensor i. 
The small-scale fading is modeled as Rayleigh fading with zero mean and unit variance. We set the noise power \sigma^2 = -30dB and the maximum transmission power p_{\max}^i = 1mW, \forall i. According to [27], the attainable data rate is monotonically non-decreasing with the SINR. Specifically, given the SINR QoS requirement \theta_i, the minimum attainable data rate r_i of sensor i is B \log_2(1 + \theta_i), where B is the bandwidth. In the following simulations, all sensors are assumed to have the same minimum attainable data rate r = 50Mbps, and hence we have \theta_i = 2^{r/B} - 1, \forall i.

4.1 Convergence of Algorithm 1

First, we verify the convergence of Algorithm 1 proposed in Section 3. In this simulation, we consider an unstable system with A = 1.1 and Q = 1. There are N = 10 sensors in the system, and C_i \in R (i = 1, 2, \ldots, N) and R_i \in R (i = 1, 2, \ldots, N) are chosen randomly. Fig. 4 shows the convergence behavior of Algorithm 1 (objective value versus iteration). One can see that Algorithm 1 is monotonically convergent, which confirms Proposition 2.

Fig. 4. Convergence behavior of Algorithm 1

4.2 Effectiveness of Algorithm 2

In this subsection, we show that the proposed heuristic (Algorithm 2) is effective by comparing it with the following heuristic sensor selection methods:

• Sensor number maximization (SNM) [17]: the main idea of this method is to maximize the number of sensors that the system can support with the limited communication resources. This method is often used for user selection in many communication scenarios.
• Precise measurements first (PMF): the basic idea of this heuristic is to select as many sensors with precise measurements as possible, while ensuring that the QoS requirement constraints are satisfied. This method follows the steps below: 1) Initialize the sensor set S = {1, 2, . . .
, N} and the candidate set C = \emptyset; 2) Calculate s = \arg\max_{i \in S} \mathrm{Tr}\{C_i^T R_i^{-1} C_i\}, and let S = S \setminus s; 3) Check the feasibility of the following optimization problem:

\min_{p_k^i, i \in C \cup \{s\}} c
s.t. \frac{h_k^i p_k^i}{\sum_{j \in C \cup \{s\} \setminus i} h_k^j p_k^j + \sigma^2} \geq \theta_i, \forall i \in C \cup \{s\}, (30)

where c can be any constant. If the problem is feasible, then let C = C \cup \{s\}; 4) If S \neq \emptyset, go back to step 2); otherwise, output C.

We consider a system with x_k \in R^5, y_k^i \in R^5, \forall i, and the following parameters:

A = [  0.9416 -0.0180  0.0715  0.0262 -0.0196;
      -0.0559  0.9948  0.0544  0.0251 -0.0148;
      -0.0564 -0.0176  1.0686  0.0255 -0.0162;
      -0.0631 -0.0090  0.0652  1.0284 -0.0179;
      -0.0197 -0.0046  0.0200  0.0096  1.0049 ],  Q = I.

There are N = 10 sensors in the system. The value of \mathrm{Tr}\{P_k\} is obtained by averaging 1000 independent experimental results with different sensor locations, channel states, and matrices C_i (i = 1, 2, \ldots, N) and R_i \leq 5I (i = 1, 2, \ldots, N) generated randomly. Fig. 5 shows the trace of the error covariance P_k versus the bandwidth B. One can see that our proposed heuristic always performs better than the SNM method and the PMF method, especially when less bandwidth is available. When bandwidth resources are enough to allow all the sensors to transmit their measurements simultaneously, the proposed method and the other two methods have the same performance, since all the sensors will be selected.

Fig. 5. Trace of error covariance matrix P_k versus bandwidth B

When bandwidth resources decrease, the SNM method tends to choose the sensors which have better channel states so that more sensors can transmit their measurements.
However, it is inadequate to consider only selecting as many sensors as possible, since the sensing precision of a sensor also needs to be taken into account. At the extreme, the sensor with the best channel state may have the worst sensing precision. As a result, this sensor may not deserve to be selected even though it introduces low communication cost. Conversely, a sensor with a slightly worse channel state but better sensing precision can be a good candidate for selection. The PMF method performs as well as the proposed method when more bandwidth is available, because the system then has more freedom to select the sensors with the most accurate measurements. When the available bandwidth is limited, it is likewise inadequate to consider measurement accuracy only. Since our proposed method takes both the transmission cost and the measurement precision of the sensors into account, it outperforms the SNM method and the PMF method, especially in the low-bandwidth region. Moreover, when the bandwidth is so low that the system can support only one transmitting sensor, the optimal solution is to select the sensor with the most accurate measurement; in this case, the PMF method obtains the optimal solution. The simulation results show that the proposed heuristic tends toward the performance of PMF as the bandwidth decreases, which implies that the solution obtained by the proposed heuristic approaches the optimal solution.

4.2.1 Case Study

In the following simulations, we study two simple cases to see how Algorithm 2 works.

Case 1: C_i = 1, \forall i, R_1 = 0.5, R_2 = 0.2, R_3 = 0.15, R_4 = 0.2, R_5 = 0.2, h_k^1 = 2, h_k^2 = 1, h_k^3 = 0.01, h_k^4 = 1, h_k^5 = 1. Hence, we have \mathrm{Tr}\{C_3^T R_3^{-1} C_3\} > \mathrm{Tr}\{C_2^T R_2^{-1} C_2\} = \mathrm{Tr}\{C_4^T R_4^{-1} C_4\} = \mathrm{Tr}\{C_5^T R_5^{-1} C_5\} > \mathrm{Tr}\{C_1^T R_1^{-1} C_1\}.

Case 2: C_i = 1, \forall i, R_1 = 0.5, R_2 = 1, R_3 = 0.15, R_4 = 1, R_5 = 1, h_k^1 = 2, h_k^2 = 1, h_k^3 = 0.01, h_k^4 = 1, h_k^5 = 1.
Hence, we have Tr{CT 3 R\u22121 3 C3} > Tr{CT 1 R\u22121 1 C1} > Tr{CT 2 R\u22121 2 C2} = Tr{CT 4 R\u22121 4 C4} = Tr{CT 5 R\u22121 5 C5}. Moreover, in both cases, we set A = 1.005, Q = 1, Pk\u22121 = 1, \u03c32 = \u221220dB, pi max = 1mW, \u2200i, and \u03b8i = \u221a 2 \u22121, \u2200i. Note that the differences between case 1 and case 2 are the values of R2, R4, and R5. Table 1 and Table 2 show the solving process of Algorithm 2 in case 1 and case 2, respectively. Table 3 shows the solutions obtained by different methods. In both case 1 and case 2, the set of selected sensors obtained by the SNM method is {2, 4, 5}, and the objective values are 0.0645 and 0.2857, respectively. Furthermore, the solution obtained by the SNM method will always maintain unchanged if pi max, \u2200i, hi k, \u2200i and \u03b8i, \u2200i are fixed. The reason is that the SNM method only considers the constraints on communication resources, which are merely related to pi max, \u2200i, hi k, \u2200i and \u03b8i, \u2200i. Although the obtained solution is optimal for case 1, it cannot be optimal for most cases. The PMF method can obtain the optimal solution for case 2 but cannot for case 1. However, the proposed heuristic can obtain the optimal solutions for both case 1 and case 2, which further reflects the effectiveness and versatility of the method. 5 Conclusion In this paper, we study the sensor selection problem for remote state estimation under the QoS requirement constraints. The formulated sensor selection problem is non-convex and it is difficult to obtain a feasible solution. To deal with this problem, we propose a heuristic algorithm, which is derived with the help of linear relaxation, SCA technique, and the concept of assimilated sensing precision matrix. Simulation results show that the proposed heuristic outperforms the existing methods. 
For future work, the beamforming technique can be considered when the sensors and the remote estimator are equipped with multiple antennas, which can further improve the spectrum efficiency. A Proof of Proposition 2 We first prove the monotonic convergence of Algorithm 1 with the following Lemma 3: Lemma 3 ( [28], Theorem 15) A monotone sequence of real numbers converges if and only if it is bounded. It is easy to see that the optimal point (solution) obtained in the (t \u22121)-th iteration is also a feasible point of the optimization problem in the t-iteration: \u0010 ((\u03b7i k)(t) + (\u03b3i k)(t))2 \u22122(bi k)(t)((\u03b7i k)(t) \u2212(\u03b3i k)(t)) +((bi k)(t))2\u0011 /4 = (\u03b7i k)(t)(\u03b3i k)(t) \u2264 \u0010 ((\u03b7i k)(t) + (\u03b3i k)(t))2 \u22122(bi k)(t\u22121)((\u03b7i k)(t) \u2212(\u03b3i k)(t)) +((bi k)(t\u22121))2\u0011 /4 \u2264(hi k)(t)(pi k)(t) \u03b8i , (A.1) That is, the optimal solution in the (t \u22121)-th iteration is also achievable in the t-th iteration. Therefore, the optimal solution obtained in the t-th iteration is no greater than that obtained in the (t \u22121)-th iteration. Moreover, since Pk \u22650, the value of the objective function is bounded. Monotonic convergence of Algorithm 1 is hence proved. Next, we prove the converged solution satisfies all the constraints as well as the KKT conditions of Problem 3. According to [26], the SCA-based algorithm (Algorithm 1) can always converge to a solution satisfying the KKT conditions of Problem 3 when the following conditions are satisfied: 1) Each non-convex constraint f(x) \u22640 of the optimization problem is iteratively approximated by a convex constraint fcv(x; \u02c6 x) \u22640, where \u02c6 x is the optimal solution to the approximated problem in the previous iteration, and fcv(x; \u02c6 x) is a convex function satisfying: fcv(x; \u02c6 x) \u2265f(x), (A.2) fcv(\u02c6 x; \u02c6 x) = f(\u02c6 x), (A.3) \u2207fcv(x; \u02c6 x)|x=\u02c6 x = \u2207f(x)|x=\u02c6 x. 
(A.4)

2) The approximated convex problem satisfies Slater's condition [29].

First, it is easy to verify that condition 1) is satisfied when the approximation (27) is adopted. Next, we show that condition 2) is also satisfied for Problem 4, i.e., there exists a strictly feasible solution to Problem 4 [29]. As mentioned before, the (t-1)-th solution \{\gamma_k, \eta_k, p_k, P_k\}^{(t-1)} is also a feasible solution to the optimization problem in the t-th iteration. Let \hat{p}_k^i = \varphi_k^i (p_k^i)^{(t-1)}, where \varphi_k^i \in R and

1 + \frac{(\hat{\gamma}_k^i - (\gamma_k^i)^{(t-1)})(\hat{\gamma}_k^i - (\gamma_k^i)^{(t-1)} + 4 (\eta_k^i)^{(t-1)}) \theta_i}{4 h_k^i (p_k^i)^{(t-1)}} < \varphi_k^i < 1, \forall i, (A.5)
(\gamma_k^i)^{(t-1)} - 4 (\eta_k^i)^{(t-1)} < \hat{\gamma}_k^i < (\gamma_k^i)^{(t-1)}, \forall i. (A.6)

It is easy to show that the solution \{\hat{\gamma}_k, \eta_k^{(t-1)}, \hat{p}_k, P_k^{(t-1)}\} is a strictly feasible solution to Problem 4.

Table 1. Solving process of Algorithm 2 (case 1)
Iteration | S               | \gamma_k                                 | p_k                                      | s
1         | {1, 2, 3, 4, 5} | {0.1882, 1.0000, 0.0600, 1.0000, 1.0000} | {0.0148, 0.1200, 1.0000, 0.1200, 0.1200} | 1
2         | {2, 3, 4, 5}    | {1.0000, 0.1559, 1.0000, 1.0000}         | {0.0483, 1.0000, 0.0483, 0.0483}         | 3
3         | {2, 4, 5}       | {1.0000, 1.0000, 1.0000}                 | {0.7741, 0.7741, 0.7741}                 | -

Table 2. Solving process of Algorithm 2 (case 2)
Iteration | S               | \gamma_k                                 | p_k                                      | s
1         | {1, 2, 3, 4, 5} | {1.0000, 0.1015, 1.0000, 0.1015, 0.1015} | {0.0050, 0.0014, 1.0000, 0.0014, 0.0014} | 2
2         | {1, 3, 4, 5}    | {1.0000, 1.0000, 0.1558, 0.1558}         | {0.0050, 1.0000, 0.0021, 0.0021}         | 4
3         | {1, 3, 5}       | {1.0000, 1.0000, 0.3332}                 | {0.0050, 1.0000, 0.0041}                 | 5
4         | {1, 3}          | {1.0000, 1.0000}                         | {0.0055, 0.9588}                         | -

Table 3. Solutions obtained by different methods
Method                   | SNM       | PMF    | Proposed Heuristic | Optimal
Case 1, selected sensors | {2, 4, 5} | {2, 3} | {2, 4, 5}          | {2, 4, 5}
Case 1, objective value  | 0.0645    | 0.0822 | 0.0645             | 0.0645
Case 2, selected sensors | {2, 4, 5} | {1, 3} | {1, 3}             | {1, 3}
Case 2, objective value  | 0.2857    | 0.1091 | 0.1091             | 0.1091
Therefore, the converged solution satisfies the KKT conditions of Problem 3." + } + ] + }, + "edge_feat": {} + } +} \ No newline at end of file