A Deep-Learning-Based Artificial Intelligence System for the Pathology Diagnosis of Uterine Smooth Muscle Tumor

We aimed to develop an artificial intelligence (AI) diagnosis system for uterine smooth muscle tumors (UMTs) by using deep learning. We analyzed the morphological features of UMTs on whole-slide images (233, 108, and 30 digital slides of leiomyosarcomas, leiomyomas, and smooth muscle tumors of uncertain malignant potential stained with hematoxylin and eosin, respectively). Aperio ImageScope software randomly selected ≥10 areas of the total field of view. Pathologists randomly selected a marked region in each section that was no smaller than the total area of 10 high-power fields, in which necrotic, vascular, collagenous, and mitotic areas were labeled. We constructed an automatic identification algorithm for cytological atypia and necrosis by using ResNet and an automatic detection algorithm for mitoses by using YOLOv5. A logical evaluation algorithm was then designed to obtain an automatic UMT diagnostic aid that can "study and synthesize" a pathologist's experience. The precision, recall, and F1 index of the classification network all reached more than 0.920. The detection network could accurately detect mitoses (0.913 precision, 0.893 recall). For its predictive ability, the AI system had a precision of 0.90. An AI-assisted system for diagnosing UMTs in routine practice scenarios is feasible and can improve the accuracy and efficiency of diagnosis.

Introduction

Uterine smooth muscle tumors (UMTs) are the most common tumors of the female genital tract, with an incidence of approximately 70% in women aged >40 years [1,2]. The World Health Organization Classification of Tumours of Female Reproductive Organs (fourth edition, 2014) classifies UMTs into three main categories: leiomyoma (including specific subtypes), leiomyosarcoma, and smooth muscle tumors of uncertain malignant potential (STUMP) [3]. These classifications are still used in the fifth edition published in 2020. The diagnostic criteria for smooth muscle tumors include cytological atypia, mitoses, coagulative tumor cell necrosis, tumor border, and vascular invasion [4][5][6]. The three main criteria for diagnosis are the heterogeneity of tumor cells, mitoses, and tumor coagulative necrosis. The accurate counting of mitoses is particularly important in differentiating leiomyosarcomas from certain subtypes of leiomyomas (e.g., leiomyomas with bizarre nuclei and mitotically active leiomyomas) or STUMPs. However, the judgment of these three main criteria is somewhat subjective for pathologists, especially mitoses, whose identification is often confounded by nuclear fragmentation, apoptosis, and inflammatory cells with irregular nuclei. This assessment is poorly reproducible and time-consuming for pathologists and may ultimately result in an incorrect diagnosis. With the development of computer technology and medical image analysis algorithms, it has become possible to use artificial intelligence (AI) to analyze whole-slide images and to perform early screening and diagnosis for tumors [7][8][9]. Several studies have verified the effectiveness of AI in the pathological diagnosis of tumors in different organs, such as lung cancer, breast cancer, prostate biopsy, and mesothelioma [10][11][12][13][14][15].
The current study aimed to analyze the morphological characteristics of digitally scanned sections of UMTs and build an AI diagnosis system for UMTs by using a computerized deep learning network model for image detection and recognition to assist pathologists in improving diagnostic accuracy and efficiency. Figure 1 presents the methods of this study. After data preparation and labeling, the classification and detection models were trained to obtain an automatic UMT diagnostic aid that can "study and synthesize" a physician's experience.

Data Set

The Ethics Committee of Beijing Obstetrics and Gynecology Hospital approved the study. The requirement for informed consent was waived because the reports were anonymized. Overall, 29 cases of leiomyosarcomas, 5 cases of STUMP, and 24 cases of leiomyomas (including 20 cases of conventional leiomyomas and 4 cases of lipoleiomyomas) were collected. All patients were diagnosed by the Department of Pathology of Beijing Obstetrics and Gynecology Hospital from May 2016 to May 2021. The inclusion criteria were as follows: a clinical diagnosis of a uterine occupying lesion treated by mass resection or total hysterectomy and a pathological diagnosis of smooth muscle tumor, including leiomyoma, STUMP, and leiomyosarcoma. The exclusion criteria were as follows: patients who (1) had received radiotherapy or chemotherapy before surgery, (2) were diagnosed with uterine leiomyosarcoma exhibiting a predominantly epithelioid appearance, (3) were diagnosed with myxoid leiomyosarcoma or leiomyoma with bizarre nuclei, (4) had used hormonal drugs, or (5) were pregnant. This study used 233 digital slides of leiomyosarcomas, 108 digital slides of leiomyomas, and 30 digital slides of STUMP stained using hematoxylin and eosin (HE). Two pathologists selected and read all slides, and all data were strictly de-identified. Aperio ImageScope software (Vista, CA, USA) was used for the annotation of digital sections. To train the deep learning model, regions of necrosis, cytological atypia, collagen, and blood vessels, as well as a certain range of mitosis targets, in the digital sections were first annotated by pathologists. To facilitate the subsequent detection of mitotic figures, the area of the field of view under the microscope was roughly converted into 10 square areas with 969-pixel borders, following the convention of counting mitoses in 10 high-power fields (HPF, d = 0.55 mm). Five pathologists randomly selected a marked region in each section that was no smaller than the total area of 10 HPF in the digital sections. The areas of necrosis (N), blood vessels (x), collagen (j), and mitoses (h) were labeled.

Deep Learning Models

The deep learning model was established on the basis of multiple convolutional neural network (CNN) feature extraction backbones and the image features of UMTs [16,17]. The images were divided into small-scale cuts that were 224 pixels × 224 pixels or 128 pixels × 128 pixels in size. An 18-layer residual network model was used to train and test the automatic classification on a server equipped with four NVIDIA Tesla V100 graphics cards to determine whether the slices had cytological atypia or necrosis. A small-scale detection network was built directly by using the YOLOv5s model with two NVIDIA RTX 3090 graphics cards to obtain a mitosis detection network.
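As a hedged illustration of the classification stage described above, the sketch below builds an 18-layer residual network for the three tissue categories used here (normal or mild atypia, moderate-to-severe atypia, necrosis) and applies it to 224 × 224 patches. It assumes a recent torchvision release; the file names, class-label order, and preprocessing are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of the 18-layer residual classification network
# (assumptions: torchvision >= 0.13 and patches pre-cut to 224x224 RGB tiles).
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

CLASSES = ["normal_or_mild_atypia", "moderate_severe_atypia", "necrosis"]  # assumed label order

def build_classifier(num_classes: int = len(CLASSES)) -> nn.Module:
    model = models.resnet18(weights=None)          # 18-layer residual network
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def classify_patch(model: nn.Module, patch_path: str) -> str:
    """Predict the tissue category of a single 224x224 patch."""
    model.eval()
    x = preprocess(Image.open(patch_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)
    return CLASSES[int(logits.argmax(dim=1))]

# Example (hypothetical weights file):
# model = build_classifier(); model.load_state_dict(torch.load("umt_resnet18.pt"))
# print(classify_patch(model, "patch_0001.png"))
```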
The YOLOv5s network has the smallest depth and the smallest feature-map width in the YOLOv5 series; therefore, this model has a faster training and prediction speed than the other models and is widely used in medical diagnosis research. The classification and mitosis detection results were then logically judged to obtain the final diagnosis. All of these steps were handled automatically by the deep learning model.

Evaluation Metrics

Our evaluation metrics were defined as follows. TP (true positive) indicates that the sample is positive and the prediction is also positive. FP (false positive) indicates that the sample is negative but is wrongly predicted as positive. TN (true negative) indicates that the sample is negative and the prediction is also negative. FN (false negative) indicates that the sample is positive but is wrongly predicted as negative. The accuracy rate and F1 index can be used to measure the overall classification performance of the model, and a value closer to one indicates a better model. Precision, also known as the precision rate, indicates the proportion of true positives among the samples predicted as positive by the model. Recall, also known as the recall rate, indicates the proportion of positive samples correctly detected by the model among all positive samples.

Image Annotation

Among the patients with leiomyosarcoma, 2 had moderate cytological atypia, 5 had moderate-severe cytological atypia, and 20 had severe cytological atypia. A total of 24 patients had necrosis, with an average mitotic count of at least 10 per 10 high-power fields (≥10/10 HPF). A total of 140 images from 19 patients with leiomyosarcoma were selected as the training set and were labeled by pathologists. The cellular regions of the tumor in the digital pathology slices were classified as normal or mild atypia, tumor necrosis, tumor cytological atypia (moderate to severe), collagen areas, and vascular regions (Figure 2).

Automatic Classification of Necrosis and Tumor Cytological Atypia

By using image data that were manually labeled by physicians, a classification model was constructed using a residual network. Among the patients with leiomyosarcoma, 140 slice images from 19 patients were selected as the training set. A further 93 slice images from another 10 patients with leiomyosarcoma were selected as the test set, from which different areas were cropped according to the necrosis and nuclear atypia annotations. Images of normal or mild cytological atypia were obtained from patients with leiomyoma (57 slices from 19 patients). The target regions in the slices were cropped into small blocks of 224 pixels × 224 pixels (Figure 3). The final training set comprised 6418 image blocks of moderate and severe cytological atypia, 2593 of normal or mild atypia, and 13,266 of necrosis. The test set contained 1200 images of each. The test results for the trained model are shown in Table 1. The table shows that the classification network can correctly classify normal, nuclear atypia, and necrotic images. In addition, all classification indices reached more than 0.920.

Automatic Detection of Mitoses

The microscopic counting of mitoses requires the high-resolution observation of pathological sections to identify the various morphological targets of nuclear division at the cellular level (Figure 4).
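A hedged sketch of how such tile-level mitosis detection could be run with a trained YOLOv5s model is shown below; it assumes 128 × 128 tiles, a hypothetical custom weights file, and the confidence and per-tile detection limits reported later in this section, and it uses the public torch.hub interface rather than the authors' pipeline.

```python
# Illustrative mitosis counting over 128x128 tiles with a YOLOv5s model.
# Assumptions: the ultralytics/yolov5 hub repository is available and
# "mitosis_yolov5s.pt" is a hypothetical trained weights file.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="mitosis_yolov5s.pt")
model.conf = 0.6      # confidence threshold used in the paper
model.max_det = 1     # at most one mitotic figure per 128x128 tile

def count_mitoses(tile_paths):
    """Return the number of tiles in which a mitotic figure is detected."""
    results = model(tile_paths, size=128)   # batched inference on image paths
    # results.xyxy is a list of per-image tensors: one row per detection
    return sum(int(len(det) > 0) for det in results.xyxy)

# Example (hypothetical tile files covering ~10 HPF):
# tiles = ["roi_tile_000.png", "roi_tile_001.png"]
# print("mitoses detected:", count_mitoses(tiles))
```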
On the basis of this working idea, the YOLOv5s model in the one-stage detection mode was used to construct and train the detection network in this project. The hardware comprised two NVIDIA RTX 3090 graphics cards. First, the target field in the physician-labeled slice was cropped by the program. The coordinates were then re-referenced within the cropped result, with the upper-left corner of the cropped region as the origin and the positive directions pointing right and down, respectively. Second, the manually labeled position of each mitotic figure was recorded and saved. Finally, we recorded the center coordinates, length, and width of each mitotic region using rectangular bounding boxes. The size of each cut block was 128 pixels × 128 pixels (Figure 4). According to the pathologists' annotations, 2000 blocks were used as the training set, which comprised 1500 cuts containing mitosis targets and 500 cuts containing apoptotic bodies or other interfering factors. A total of 1000 blocks were used as the test set, which comprised 525 cuts containing mitosis targets and 475 normal cuts. In the detection model, the max-det value was set to one to ensure that, at most, one mitosis target was detected in each cut block (128 pixels × 128 pixels). The confidence threshold was set to 0.6, and a detection was considered correct when the intersection-over-union value between the detection box and the manually marked box reached 0.55. The detection network could accurately detect mitoses, with 0.938 precision, 0.913 accuracy, 0.893 recall, and a 0.915 F1 index (Table 2), which meets the requirements of practical applications. After detecting the targets, the program could further calculate the number of mitoses needed for the final result.

AI for Logical Judgment

To test whether the AI-aided diagnosis system can make accurate and logical judgments, we combined the automatic classification model with the automatic detection model to perform overall detection and logical judgment on pathological sections (Figure 5). We selected 10 cases of leiomyosarcoma, 5 cases of STUMP, and 5 cases of leiomyoma (1-3 sections for each case) for testing. Among them, one case of leiomyosarcoma was misdiagnosed as STUMP, and one case of STUMP was misdiagnosed as leiomyoma; the others were consistent with the pathologist's diagnosis, giving a total precision of 0.900 (Table 3). For 0.24 mm², or 10 HPF of 0.55 mm diameter, the computational times of the proposed network model for automatic classification, automatic detection, and logical judgment were 1.7, 1.5, and 0.1 s, respectively.

Discussion

The significance of using deep learning models is that automatic analysis can be obtained by learning from samples, and the empirical knowledge of different pathologists can be synthesized. Repetitive and empirical tasks can be handed over to machines for assisted analysis. By using computerized deep learning on digital pathological sections of UMTs, the following functions were achieved in this project: (1) automatic discriminative analysis of cytological atypia and tumor cell necrosis, (2) automatic detection and counting of mitoses, and (3) logical judgment of the results obtained from the classification and detection networks to make a diagnosis.
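To make the third function concrete, the sketch below shows one possible rule-based combination of the classification and detection outputs, following the diagnostic criteria summarized in the next paragraph (coagulative tumor cell necrosis, diffuse moderate-to-severe atypia, and the mitotic count per 10 HPF). The function and its thresholds are an illustrative reading of those criteria, not the authors' published decision logic.

```python
# Hedged illustration of a rule-based diagnostic judgment combining the
# classification network outputs (necrosis, atypia) and the detection network
# output (mitotic count per 10 HPF). The rules paraphrase the criteria
# discussed in the text and are not the authors' exact algorithm.
def judge_umt(has_coagulative_necrosis: bool,
              diffuse_moderate_severe_atypia: bool,
              mitoses_per_10hpf: int) -> str:
    if has_coagulative_necrosis:
        # With coagulative necrosis, significant atypia or >=10 mitoses/10 HPF
        # points to leiomyosarcoma; otherwise STUMP.
        if diffuse_moderate_severe_atypia or mitoses_per_10hpf >= 10:
            return "leiomyosarcoma"
        return "STUMP"
    if diffuse_moderate_severe_atypia:
        # Without necrosis, both atypia and >=10 mitoses/10 HPF are required
        # for leiomyosarcoma; atypia alone gives STUMP.
        return "leiomyosarcoma" if mitoses_per_10hpf >= 10 else "STUMP"
    # No necrosis and no significant atypia: >=15 mitoses/10 HPF gives STUMP.
    return "STUMP" if mitoses_per_10hpf >= 15 else "leiomyoma"

# Example: no necrosis, diffuse atypia, 12 mitoses/10 HPF -> "leiomyosarcoma"
# print(judge_umt(False, True, 12))
```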
Smooth muscle tumors are common tumors of the female reproductive system and most often occur in the uterus, followed by the cervix and broad ligament, and occasionally in the vagina, ovaries, fallopian tubes, and vulva [18]. At present, the assessment of smooth muscle tumors is mainly based on the heterogeneity of tumor cells, mitoses, and tumor coagulative necrosis [3]. For example, in the presence of coagulative necrosis, a UMT with mild cytological atypia is diagnosed as a leiomyosarcoma if the mitotic count is ≥10/10 HPF; otherwise, it is diagnosed as a STUMP. In the absence of coagulative necrosis, if the tumor cells show diffuse moderate-to-severe atypia and the mitotic count is ≥10/10 HPF, the diagnosis is leiomyosarcoma. However, when only one of these conditions is met, the diagnosis is STUMP. When necrosis is lacking and cytological atypia is not obvious or only focally mild, the accuracy of the mitotic count is paramount. Tumors lacking cytological atypia and tumor cell necrosis but with ≥15 mitoses/10 HPF should be diagnosed as STUMP. Therefore, cytological atypia, the mitotic count, and tumor cell necrosis play important roles in diagnosing UMTs. However, owing to the subjectivity of these judgments, the consistency of interpretation among different pathologists is poor. In addition, counting the mitoses of tumor cells is time-consuming and labor-intensive, which affects the accuracy and efficiency of diagnosis. In the classification of UMTs, STUMPs show morphological features that exceed the criteria for leiomyoma or its subtypes but are insufficient for a diagnosis of leiomyosarcoma. This issue often puzzles pathologists. However, the cytological atypia, necrosis, and mitoses in leiomyosarcomas and leiomyomas are relatively clear-cut. We therefore tried to establish an AI judgment standard by learning the morphological characteristics of these two categories so that the diagnosis of STUMPs becomes more objective.

In recent years, rapid development in the field of AI, especially deep learning (e.g., CNNs), has provided more possibilities for the establishment of intelligent computer-aided diagnostic systems based on pathological image analysis [19,20]. Deep learning is a branch of machine learning research in which lower-level features are combined to form more abstract, higher-level attribute classes or features. Several studies and clinical practices are attempting to integrate AI and pathological image analysis to achieve intelligent detection and diagnosis and to overcome shortcomings such as visual fatigue during manual reading, thereby improving diagnostic accuracy. Therefore, they have important clinical value and application prospects. For example, Song et al. developed an assisted diagnostic system using AI for gastric adenocarcinoma biopsy specimens. The deep CNN was trained on 2123 digital pathology slides stained with HE. It achieved a sensitivity of approximately 100% and an average specificity of 80.6% on a real-world test data set of 3212 digital pathology slides [11]. In other tumors, such as esophageal cancer, lung cancer, and prostate cancer, AI based on deep learning has also achieved good detection results [21][22][23]. ResNet is a classic residual neural network family, including ResNet-18, ResNet-50, and other variants, with excellent performance in image classification tasks [24]. In the pre-experiments on our data set, we compared ResNet-18, ResNet-34, and ResNet-50 as the classification network.
These pre-experiments showed that the 18-layer residual network achieved the best classification performance. Therefore, in this experiment, an 18-layer residual network model was adopted to build the model for detecting tumor cytological atypia and tumor necrosis. The specific process is shown in Figure 3. Through testing, the classification network achieved the correct classification of normal, nuclear atypia, and necrotic images. Furthermore, the classification performance was good, with all classification indices exceeding 92%. This shows that a deep-learning-based AI system can detect the cytological atypia and necrosis of smooth muscle tumors. The identification and counting of mitotic figures are crucial for the differential diagnosis of benign and malignant smooth muscle tumors. Computer experts have developed several methods for mitosis detection, such as the maximized inter-class weighted mean, CNNs, and YOLOv5 [25][26][27]. YOLOv5 was proposed in 2020 and is one of the latest achievements of the YOLO series of detection algorithms in the one-stage detection framework. It contains YOLOv5s, YOLOv5m, and other frameworks and has excellent detection accuracy and speed in target detection tasks. Our pre-experiment on the sub-data set showed that YOLOv5s had the best detection performance, so we chose YOLOv5s as the detection network in this experiment, as shown in Figure 4. The precision and recall, which were used to measure the performance of the detection network during training, steadily improved and approached one, indicating that the network was well trained. The detection network can detect mitoses accurately, which meets the requirements of practical applications. We achieved the automatic discrimination and classification of nuclear atypia and tumor cell necrosis through training of the classification network. The detection network makes it possible to detect and count mitoses automatically. We combined the automatic classification and automatic detection models and performed overall detection and logical judgment on pathological sections, as shown in Figure 5. In the detection experiments, 20 cases of UMTs, including 10 cases of leiomyosarcoma, 5 cases of STUMP, and 5 cases of leiomyoma, were tested. Except for one leiomyosarcoma misdiagnosed as STUMP and one STUMP misdiagnosed as leiomyoma, the results were consistent with the pathologist's diagnosis, with 0.90 precision. Our study preliminarily explored the feasibility of using a histopathological AI-assisted system to diagnose UMTs, which can assist pathologists in making judgments and improve the efficiency and accuracy of diagnosis.

This study had some limitations. First, the number of images in each category is small. To improve the performance of our diagnostic system, a larger data set is required. Second, our experiments were conducted mainly on UMTs, and the morphological features of other spindle cell tumors of the uterus, such as endometrial mesenchymal sarcoma, inflammatory myofibroblastic tumor, and perivascular cell tumor, were not learned and judged by the AI. Therefore, it is still necessary to manually select the slides of pathological images to be analyzed. Owing to the limited number of cases, the computer learning and judgment of cell-rich leiomyomas, myxoid leiomyosarcoma, leiomyomas with bizarre nuclei, and epithelioid leiomyosarcoma are not yet convincing. Finally, the false detection rate of the model for detecting mitoses was relatively high.
Given that clinical information, immunohistochemical markers, and molecular detection results can help inform pathological diagnoses [28,29], especially for relatively difficult cases, it is necessary to combine this information with the AI system. This is a difficult issue in automatic histopathological diagnosis. In subsequent experiments, we should continue to expand the sample size and combine immunohistochemical and molecular results with AI to improve the diagnostic accuracy and automation of the model.

Conclusions

The criteria for UMT diagnosis are very complex. Furthermore, UMT diagnosis is time-consuming and error-prone. This study proposes an AI-aided diagnosis and evaluation system for UMTs based on deep learning. The algorithms for the automatic classification of necrosis and tumor cytological atypia and the automatic detection of mitoses were shown to be effective. By analyzing whole-slide images, the AI system can judge the properties of UMTs and draw logical conclusions. This system may provide a new, comprehensive, and intelligent method for pathological diagnosis and may generate new ideas for advancing interdisciplinary collaborative research on clinical medical problems. However, there are still areas that can be improved, and our current work is mainly based on pathological image slices of a certain size. Therefore, it is necessary to manually frame a certain area of a slice to be analyzed. Achieving fully automatic application at the whole-slide image (WSI) level requires further research. In addition, the deep learning algorithms should be further improved to increase the accuracy of the system and ensure its clinical practicality.

Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of Beijing Obstetrics and Gynecology Hospital (2020-KY-021-01, 2020.5.18).

Informed Consent Statement: Patient consent was waived because the data used were anonymized.

Data Availability Statement: The data presented in this study are available upon request from the corresponding author.
Clustering Classes in Packages for Program Comprehension

During software maintenance and evolution, one of the important tasks faced by developers is to understand a system quickly and accurately. With the increasing size and complexity of an evolving system, program comprehension becomes an increasingly difficult activity. Given a target system for comprehension, developers may first focus on package comprehension. The packages in the system are of different sizes. Developers can easily comprehend the small-sized packages in the system. However, large-sized packages are difficult to understand. In this article, we focus on understanding these large-sized packages and propose a novel program comprehension approach for large-sized packages, which utilizes the Latent Dirichlet Allocation (LDA) model to cluster large-sized packages. Thus, these large-sized packages are separated into small-sized clusters, which are easier for developers to comprehend. Empirical studies on four real-world software projects demonstrate the effectiveness of our approach. The results show that the effectiveness of our approach is better than that of Latent Semantic Indexing (LSI)- and Probabilistic Latent Semantic Analysis (PLSA)-based clustering approaches. In addition, we find that the topic that labels each cluster is useful for program comprehension.

Introduction

Program comprehension is one of the most frequently performed activities in software maintenance [1,2]. It is a process whereby a software practitioner understands a program using both knowledge of the domain and semantic and syntax knowledge, to build a mental model of the program [3,4]. Developers working on software maintenance tasks spend around 60% of their time on program comprehension [5]. As software evolves, its complexity becomes increasingly higher. Moreover, some documents affiliated with the system become outdated or inaccessible, which makes program comprehension more difficult. In practice, the natural top-down program comprehension process can effectively help developers understand the system step by step [6]. For an object-oriented Java software system, developers can also understand a system in such a top-down way. Packages are first taken into consideration. Then, interesting packages are selected, and developers go deeper into the classes in those packages. For small-sized packages (with several classes), it is easy for developers to understand them. However, for packages with many classes in them, it is more challenging for developers to understand these classes, their relationships, and their functionalities [7,8]. To aid program understanding, classes in these large-sized packages can be clustered into smaller-sized groups. With such clustering, developers can understand a system more easily.
There are several approaches that cluster programs based on static structural dependencies in the source code [9]. Static-dependency-based approaches usually cluster classes in a system based on static structural dependencies among program elements, such as variable and class references, procedure calls, use of packages, and association and inheritance relationships among classes [8,10,11]. These approaches are more suitable in the process of implementing a change request in the source code. But before implementing a change request in the source code, developers should know which part of the source code is related to the change request. Specifically, they need to know the functional points of a system and where in the source code these functional features are implemented. A feature or functional point represents a functionality that is defined by requirements and accessible to developers and users. Then, they can implement source-code-level changes. Hence, some studies focused on understanding the functional features of a system and proposed semantic clustering, which exploits linguistic information in the source code, such as identifier names and comments [12]. These approaches usually take the whole system as input and generate clusters at some granularity level, for example, the class level or method level. The generated clusters, corresponding to different functional features, are used to divide a system into different units [13,14]. This article also focuses on exploiting linguistic information in the source code to understand the functional features of different clusters in large-sized packages. In a large-sized package, there are a number of functional features or concerns. Each of these concerns is implemented in a set of classes. Previous studies focused on clustering a software unit. However, developers still do not easily know what functional features each cluster expresses. Therefore, to get a good understanding of a package's concerns and the classes that implement each of them, in this article, we further generate a set of topics to describe each cluster.

This article proposes a technique to generate a set of clusters of classes for a large-sized package, where different clusters correspond to different functional features or concerns. Our approach is based on Latent Dirichlet Allocation (LDA), which is a topic model and one of the popular ways to analyze unstructured text in a corpus [15]. LDA can discover a set of ideas or themes that well describe the entire corpus. We use LDA on a whole package and extract latent topics to capture its functional features. Then, classes in the package with similar topics are clustered together.
Our approach can be effectively used for top-down program comprehension during software maintenance. For small-sized packages, developers can directly understand them. For large-sized packages, our approach can be used to divide packages into small clusters. Each of these small clusters can be understood more easily than the original large-sized package. The main contributions of this article are as follows:
(1) We propose to use LDA to generate clusters for large-sized packages. The topics generated by LDA are useful to indicate the functional features of these class clusters.
(2) We conduct an empirical study to show the effectiveness of our approach on four real-world open-source projects, JHotDraw, jEdit, JFreeChart, and muCommander. The results show that our approach is more effective in identifying more relevant classes in the cluster than other semantic clustering approaches, that is, Latent Semantic Indexing (LSI)- and Probabilistic Latent Semantic Analysis (PLSA)-based clustering.
(3) The empirical study on four selected packages from the four subject systems shows that the topics generated by our approach are useful to help developers understand these packages.
The rest of the article is organized as follows: in the next section, we introduce the background of program comprehension and the LDA model. Section 3 describes our approach. We describe the design of our experiment, the experiment results, and the threats to validity of our study in Sections 4, 5, and 6, respectively. In Section 7, related work using clustering for program comprehension is discussed. Finally, we conclude the article and outline directions for future work in Section 8.

Background

In this article, we use LDA to cluster classes in large-sized packages for easier program comprehension. This section discusses the background of program comprehension and the LDA topic model.

Program Comprehension. For software developers, program comprehension is a process whereby they understand a software artifact using both knowledge of the domain and semantic and syntax knowledge [10]. Program comprehension can be divided into bottom-up comprehension, top-down comprehension, and various combinations of these two processes. In bottom-up comprehension, a developer may first read all the source code at the finer statement or method level and abstract features and concepts from this low-level information. Then, coarser class-level or package-level elements are read and understood. Finally, the developer comprehends the whole system. In top-down comprehension, a developer first utilizes knowledge about the domain to build a set of expectations that are mapped to the source code. Then, he/she understands the coarser package-level or class-level elements, followed by finer method-level or statement-level elements. Finally, the developer also gets an understanding of the whole system. In practice, top-down program comprehension is more acceptable since it matches humans' way of thinking from simple to complex, from whole to part [3].

Software clustering is one of the effective techniques for top-down program comprehension. During software maintenance, developers usually need to identify the functional features they are interested in to help them accomplish a change request. In this article, we propose a software clustering technique using LDA to provide such features to developers and facilitate the top-down program comprehension process.

Latent Dirichlet Allocation.
Topic models originated in the field of information retrieval (IR) to index, search, and cluster a large number of unstructured and unlabeled documents. A topic is a collection of terms that co-occur frequently in the documents of the corpus. One of the most widely used topic models in the software engineering community is Latent Dirichlet Allocation (LDA) [16][17][18]. It requires no training data and scales well to thousands or millions of documents.

LDA models each document as a mixture of corpus-wide topics and each topic as a mixture of terms in the corpus [15]. More specifically, this means that there is a set of topics describing the entire corpus; each document can contain more than one of these topics; and each term in the entire repository can be contained in more than one of these topics. Hence, LDA is able to discover a set of ideas or themes that well describe the entire corpus. It assumes that documents have been generated using the probability distribution of the topics and that the words in the documents were generated probabilistically in a similar way.

In order to apply LDA to source code, we represent a software system as a collection of documents (i.e., classes), where each document is associated with a set of concepts (i.e., topics). Specifically, the LDA model consists of the following building blocks:
(1) A word is the basic unit of discrete data, defined to be an item from a software vocabulary W = {w1, w2, . . . , wV}, such as an identifier or a word from a comment.
(2) A document, which corresponds to a class, is a sequence of words denoted by d = {w1, w2, . . . , wN}, where wi is the ith word in the sequence.
(3) A corpus is a collection of documents (classes) denoted by D = {d1, d2, . . . , dM}.
Given M documents containing K topics expressed over V unique words, the distribution of the kth topic over the V words and the distribution of the mth document over the K topics can be represented.

By using LDA, it is possible to formulate the problem of discovering a set of topics describing a set of source code classes by viewing these classes as mixtures of probabilistic topics. For further details on LDA, interested readers are referred to the original work of Blei et al. [15].

With LDA, latent topics can be mined, allowing us to cluster classes on the basis of their shared topics. In this article, to use LDA effectively, we apply it to a package-level corpus rather than to each class to extract latent topics that simulate the functional features or concerns of a package, since a small (class-level) corpus is too small to generate good topics [19][20][21][22][23]. Then, we cluster the classes according to these topics and assign different classes to their corresponding topics [23].
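For reference, the mixture described above can be written compactly in the standard LDA formulation of Blei et al. [15]; the symbols below (θ for document-topic distributions, φ for topic-word distributions, α and β for the Dirichlet priors) are introduced here only for illustration.

```latex
p(w \mid d) \;=\; \sum_{k=1}^{K} p(w \mid z = k)\, p(z = k \mid d)
            \;=\; \sum_{k=1}^{K} \phi_{k,w}\, \theta_{d,k},
\qquad \theta_d \sim \mathrm{Dirichlet}(\alpha), \quad \phi_k \sim \mathrm{Dirichlet}(\beta).
```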
The process of understanding packages is illustrated in Figure 1. Firstly, we analyze the size of each package in the software system. Small-sized packages are comprehended manually. For large-sized packages, there are two steps. First, LDA is used to extract the latent key information to facilitate the comprehension process. Then, on the basis of the key information of each package, we apply clustering to build small-sized clusters that decompose the package. Thus, given the source code of a software system at hand, programmers can comprehend small-sized packages by themselves and large-sized packages with the help of our approach. In the following subsections, we discuss more details of our approach.

3.1. Analyzing the Size of Packages. Our approach focuses on understanding large-sized packages. So we first need to select the large-sized packages in a program. Here, we set a threshold on the number of classes in a package. Packages with more classes than this threshold are selected for analysis. These packages are separated into smaller clusters to facilitate program comprehension. Figure 2 shows an example of separating six packages in jEdit into large-sized and small-sized packages when the threshold is set to 5. The packages jarbunder, browser, and asm are classified as large-sized packages.

Extracting Key Information Based on LDA. During the program comprehension process, developers are more focused on the functional features or concerns of the program. In a program, the source code contains not only syntax information but also unstructured data, such as natural language identifiers and comments [24]. These unstructured source code identifiers and comments can be used to capture the semantics of the developers' intent [25]. They represent an important source of domain information and can often serve as a starting point in many program comprehension tasks [26,27]. However, there exists noise in the source code, which can potentially confuse the LDA application. So natural language processing (NLP) techniques are usually used to perform one or more preprocessing operations before applying LDA models to the source code data. Then, LDA can be effectively used to generate the topics. To use LDA effectively, we apply it to a package-level corpus to simulate the functional features or concerns of a package. Finally, we cluster the classes according to these topics and assign different classes to their corresponding topics.

Preprocessing of the Source Code. There are several typical preprocessing operations for the unstructured part of source code. These preprocessing operations can be performed to reduce noise and improve the quality of the resulting text for LDA [28]. We first isolate identifiers and comments and strip away syntax and programming language keywords (e.g., "public" and "int"). We also remove header comments, since they often include generic information about the software that is included in most source code files. Then, we tokenize each word based on common naming practices, such as camel case ("oneTwo") and underscores ("one_two"), and remove common English stop words (the, it, and on) and programming language keywords (public, int, and while).
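A minimal sketch of this preprocessing step is shown below; the stop-word and keyword lists are abbreviated illustrations, and the regular expressions assume simple Java-style identifiers, so the paper's actual pipeline may differ.

```python
# Illustrative preprocessing of identifiers/comments: split camelCase and
# underscores, lowercase, and drop stop words and language keywords.
# The word lists below are deliberately short examples, not the full lists.
import re

STOP_WORDS = {"the", "it", "and", "on", "a", "of", "to", "is"}
JAVA_KEYWORDS = {"public", "private", "int", "while", "class", "void", "return"}

def split_identifier(token: str) -> list[str]:
    """Split camelCase ('oneTwo') and snake_case ('one_two') into words."""
    token = token.replace("_", " ")
    token = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", token)   # oneTwo -> one Two
    return [w.lower() for w in token.split() if w]

def preprocess(text: str) -> list[str]:
    """Turn raw identifier/comment text into a bag of content words for LDA."""
    words = []
    for token in re.findall(r"[A-Za-z_][A-Za-z0-9_]*", text):
        for w in split_identifier(token):
            if w not in STOP_WORDS and w not in JAVA_KEYWORDS and len(w) > 1:
                words.append(w)
    return words

# Example:
# preprocess("public void loadHeaderException(String fileName)")
# -> ['load', 'header', 'exception', 'string', 'file', 'name']
```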
After preprocessing the unstructured part of the source code files, LDA can be used to extract key information more effectively. Figure 3 shows an example of the detailed process of preprocessing the source code of the class InvalidHeaderException.java in jEdit. After preprocessing, most of the useful words are kept for the LDA application.

Extracting Key Information from Large-Sized Packages. After preprocessing each class in the large-sized packages, we need to extract key information from them. LDA is an effective approach to discover a set of ideas or themes that well describe the entire corpus. Before using LDA, we need to set the number of topics, that is, K. This parameter affects the effectiveness of the LDA application. In this article, K is related to the size of the clusters for a package, which is determined by users.

An LDA application generates two files: one is the word-topic matrix, which lists the words for each topic, and the other is the topic-document matrix, which shows the percentage of topics in each document, also called the membership value. An example is given in Figure 4. The results show the distribution of different topics in different classes, and each topic is described by different words with different probabilities. These topics express different functional features of the classes in the package.

Clustering Classes in Large-Sized Packages. After extracting the topics from the target package, classes having similar topics should be allocated to a cluster to aid their comprehension. In this subsection, we discuss the details of clustering classes in a package.

Generating Initial Clusters. To cluster classes in a package, the number of clusters should first be determined. However, it is difficult to know this information at the beginning. In this article, the number of clusters is estimated based on the number of classes in a cluster. Let us assume that the number of classes in an initial cluster is M, where M is a user-defined parameter. That is, if a user thinks that an M-scale cluster is easy for him/her to understand, he/she can set the initial size of each cluster as M, for example, 5 or 10. For a package with N classes (N ≥ M), each of these classes should be put into a cluster. Thus, there will be ⌈N/M⌉ (a whole number) clusters for the package. We set the number of topics to be the same as the number of clusters (i.e., ⌈N/M⌉), because we need a topic to label each cluster.

After applying LDA to a preprocessed package, we get two files: the word-topic matrix, which lists the words for each topic, and the topic-document matrix, which shows the percentage of the topic words in each document. To allocate different classes to their corresponding topics, we use the topic-document matrix. That is, we allocate the top documents of each topic according to the topic-document matrix. Thus, a set of clusters can be generated, which we call the initial clusters for a package.

The ideal situation for the initial clusters is that each class is assigned to exactly one cluster of its own. Inevitably, there are two special cases. One is that a class may match different topics in the topic-document matrix. Such classes are called shared classes, which we need to reassign. The other case is that there may be some remaining classes that are not included in the top documents of any topic. Such classes are called nonmatching classes, which need to be assigned to the most probable clusters related to them. In the following, we deal with these classes to guarantee that each class is assigned to one and only one cluster.
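As a hedged sketch of this step, the snippet below runs LDA over the preprocessed class documents of one package and assigns every class to the topic with its highest membership value, a simplified one-step variant that collapses the two-stage initial-cluster/reassignment procedure described here and in the next subsection. It uses gensim as an illustrative stand-in for the MALLET implementation used in the study, and all names and parameters are assumptions.

```python
# Illustrative package-level LDA clustering (gensim stands in for MALLET).
# `class_docs` maps class names to the preprocessed word lists from the
# previous sketch; `cluster_size` corresponds to M in the text.
import math
from gensim import corpora, models

def cluster_package(class_docs: dict[str, list[str]], cluster_size: int) -> dict[int, list[str]]:
    names = list(class_docs)
    num_topics = math.ceil(len(names) / cluster_size)        # K = ceil(N / M)
    dictionary = corpora.Dictionary(class_docs.values())
    bows = [dictionary.doc2bow(class_docs[n]) for n in names]
    lda = models.LdaModel(bows, num_topics=num_topics, id2word=dictionary, passes=10)

    # topic-document matrix: membership value of each class for each topic
    membership = {
        n: dict(lda.get_document_topics(b, minimum_probability=0.0))
        for n, b in zip(names, bows)
    }

    # simplified assignment: every class goes to the topic with its highest
    # membership value, so shared and nonmatching classes are resolved too
    clusters: dict[int, list[str]] = {t: [] for t in range(num_topics)}
    for n in names:
        best_topic = max(membership[n], key=membership[n].get)
        clusters[best_topic].append(n)
    return clusters
```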
Assigning Shared Classes and Nonmatching Classes. Shared classes are the classes matching different topics in the generated topic-document matrix. These classes are listed among the top classes of more than one topic. We list all the classes shared by different topics and the membership value of each topic for them. A shared class is allocated to the cluster corresponding to the topic with the highest membership value. Nonmatching classes, which are not initially matched to any cluster, are processed in a similar way. We list all these nonmatching classes and their membership values. Each nonmatching class is put into the cluster corresponding to the topic with the highest membership value. Finally, each cluster in a package only contains classes having high membership values, and each class is a member of only one cluster. Based on the word-topic matrix, we can see the words describing the topic, which indicate the feature of the cluster. Figure 5 shows an example of the process of generating the clusters for a large-sized package. First, initial clusters are generated according to the membership values with five topics. Then, shared classes and nonmatching classes are assigned to the corresponding initial clusters based on their membership values. Finally, a set of clusters for the large-sized package is obtained, as shown in the bottom-left part of Figure 5.

Case Study

In this section, we conduct case studies to evaluate the effectiveness of our approach. In our study, we address the following three research questions (RQs):
RQ1: Does the number of topics affect the shared classes and nonmatching classes?
RQ2: Is our LDA clustering approach more effective than other semantic clustering approaches, that is, LSI-based clustering and PLSA-based clustering?
RQ3: Can our approach provide useful topics for developers to understand the classes in the package(s)?
In our approach, the number of topics is set by users themselves. We investigate RQ1 to see how this parameter affects the number of shared and nonmatching classes. In addition, we investigate RQ2 to see whether our clustering approach using LDA is more effective than other semantic clustering approaches based on LSI and PLSA [12, 29-31], respectively. Finally, there is another difference between our approach and other clustering approaches; that is, each cluster generated by our approach is labeled with a topic to facilitate understanding of the cluster. So RQ3 aims to answer whether the topic labeling each cluster can help developers understand the cluster or not.

Empirical Environment. We implemented our approach in the Java language in the Eclipse environment. In addition, all the selected subject programs are Java programs. So our case study was conducted in the Eclipse environment.
JHotDraw is a medium-sized, open-source 2D drawing framework developed in the Java programming language. jEdit is a medium-sized, open-source text editor written in Java. JFreeChart is a free 100% Java chart library that makes it easy for developers to display professional-quality charts in their applications. muCommander is a lightweight, cross-platform file manager that runs on any operating system supporting Java. These projects belong to different problem domains. They are general enough to represent real-world software systems, and they have been widely used in empirical studies in the context of software maintenance and evolution [32,33]. In addition, they have become de facto standard systems for experiments and analysis in topic and concern mining (e.g., by Robillard and Murphy [34] and Binkley et al. [35]). Moreover, these four subject systems, which are neither too small nor too large, were selected because of their good design and manageable size for manual analysis.

Parameter Settings. In our approach, there are two parameters: the package-size threshold and the number of topics used as input for the LDA model. The values of these two parameters affect the number of packages to be subdivided into clusters and the number of clusters in a package. Table 2 shows the percentage of classes in packages with different numbers of classes. From the results, when the threshold is 5, 10, and 15, the average percentages of classes in large-sized packages are 89.3%, 74.7%, and 61.4%, respectively. In this study, we consider packages with more than 10 classes as the large-sized packages used to evaluate our approach. Hence, for all four systems, most of the classes and packages are used to evaluate our approach.

The other parameter in our approach is the number of topics (K) for the LDA analysis. K is an important parameter, which also indicates the number of clusters in the final clustering results and determines the size of each cluster. We set K to 5, 10, and 15 in our study.

Methods and Measures. For the LDA computation, we used MALLET (http://mallet.cs.umass.edu), which is a highly scalable Java implementation of the Gibbs sampling algorithm. We ran 10,000 sampling iterations, the first 1000 of which were used for parameter optimization. We selected different numbers of topics and used MALLET to generate the word-topic matrix and the topic-document matrix. Then, we clustered each large-sized package based on these two matrices.

The semantic clustering approaches based on LSI and PLSA [12, 29-31], which we used for comparison with our approach, are popular methods for cluster analysis, especially for clustering unstructured data. LSI uses singular value decomposition to explore patterns in the relationships between the terms and concepts contained in an unstructured corpus [36]. LSI is based on the assumption that words used in the same contexts tend to have similar meanings. Hence, LSI is able to extract the conceptual content of a corpus by establishing associations between those terms that occur in similar contexts. Probabilistic Latent Semantic Analysis (PLSA) is a statistical technique for data analysis, which is based on a mixture decomposition derived from a latent class model [30,31].
We selected these clustering approaches for comparison because (1) they are widely used in clustering software data and show promising results [37,38] and (2) they are also clustering approaches based on lexical information, which is similar to our approach. In our study, they are performed by clustering classes with similar vocabularies. After calculating the similarity between each pair of documents, an agglomerative hierarchical clustering algorithm is executed. There are many similarity measures, for example, cosine similarity, Manhattan distance, and Euclidean distance [39]. Cosine similarity, which is a popular similarity measure, is used here [33,40].

To answer RQ1, we compute the number of shared classes and nonmatching classes and the shared occurrence counts (or shared counts) of the shared classes. For example, if one class is shared by topic 1 and topic 2, its shared count is 1. If the class is also shared by topic 3, the shared count is 2. We analyze the percentages of shared classes and nonmatching classes as well as the shared counts for different numbers of topics.

For RQ2, our study involved 10 participants from university and industry. Half of them are from our lab, with 2-3 years of development experience, and the other half are from industry, with 5-6 years of development experience, especially in large project development. They were not familiar with the systems beforehand. Each of them was then assigned a class, as shown in the fourth column of Table 3. Their task was to identify the classes most likely related to the given class within its enclosing package. Each participant thus obtained a cluster of classes for each given class. As different participants may generate different clusters, they needed to discuss the results and reach a consensus on the clustering results for each given class. We used these clustering results as the authoritative clusters to compare with the clusters produced by our approach and by the LSI-based/PLSA-based clustering approaches. For the LSI-based clustering approach and our approach, the number of clusters needs to be set. Based on the size of the authoritative clusters, we set these values for the packages, as shown in the last column of Table 3. To answer RQ2, we first provided the clustering results of the three approaches to the participants. In this process, to guarantee a fair treatment, they did not know which clustering results were generated by our approach and which by the LSI-based/PLSA-based clustering approaches. Then, each participant assessed each of the three clusters and voted for the best one. In addition, to quantitatively compare these approaches, we used precision and recall, two widely used information retrieval and classification metrics [41], to validate the accuracy of the different clustering approaches. Precision measures the fraction of classes identified by a clustering approach as being in the same cluster as the given class that are truly relevant (based on the authoritative cluster), while recall measures the fraction of relevant results (i.e., classes that appear in the authoritative cluster) that are put in the same cluster as the given class by a clustering approach. Mathematically, they are defined as follows:

precision = |clustering results ∩ authoritative results| / |clustering results|,
recall = |clustering results ∩ authoritative results| / |authoritative results|. (1)

In the above equations, the clustering results and authoritative results are sets of classes.
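A small sketch of how these set-based metrics in Equation (1) might be computed for one given class is shown below; the class names in the usage example are hypothetical.

```python
# Set-based precision/recall for a generated cluster versus the authoritative
# cluster (both are sets of class names), as defined in Equation (1).
def precision_recall(cluster: set[str], authoritative: set[str]) -> tuple[float, float]:
    overlap = len(cluster & authoritative)
    precision = overlap / len(cluster) if cluster else 0.0
    recall = overlap / len(authoritative) if authoritative else 0.0
    return precision, recall

# Hypothetical example:
# generated = {"PiePlot", "PieLabelDistributor", "RingPlot", "SpiderWebPlot"}
# authoritative = {"PiePlot", "PieLabelDistributor", "PiePlot3D"}
# precision_recall(generated, authoritative)  # -> (0.5, ~0.67)
```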
To answer RQ3, participants were required to write down the words in the identifiers or comments that label the authoritative clusters. This process is similar to that of RQ2, and a set of authoritative words was produced. To show whether the topics generated by our approach are useful, the participants needed to assess the generated topics and check whether they helped them understand the clusters. Each participant needed to provide a rating on a five-point Likert scale, from 1 (very useless) to 5 (very useful). Finally, we also computed the precision and recall of the words in the topics by comparing them with the authoritative words. The way precision and recall are computed is similar to the way they are computed for RQ2.

Overall, the participants needed to answer four questions during the evaluation process. For RQ2, they were asked to provide the authoritative clustering results and to assess the results of the LSI-based and PLSA-based approaches and our approach. For RQ3, they needed to provide labels for the clusters and assess the topics generated by our approach.

Results

In this section, we gather and analyze the results collected from the case studies to answer RQ1, RQ2, and RQ3.

5.1. RQ1. First, we examine the existence of shared classes and nonmatching classes in the initial clusters. Table 4 shows the average percentage of initial clusters without nonmatching classes and shared classes. From the results, we see that there do exist some shared classes and nonmatching classes in the initial clusters. So we need to perform the operation of reassigning these shared classes and nonmatching classes. Next, we examine how the number of topics affects the shared classes and nonmatching classes. Figure 6 shows the box-plots of the number of shared classes and nonmatching classes and the shared counts in the process of clustering the classes with different numbers of topics. From the results in Figures 6(a) and 6(b), we notice that, with the increase in the number of topics, the shared counts and the number of shared classes also increase. So setting different values for the number of topics will affect the number of shared classes. In addition, Figure 6(c) shows the results for nonmatching classes in the process of clustering the classes. We see that nonmatching classes are fewer than shared classes. Moreover, the range of the number of nonmatching classes for different numbers of topics is similar. That is, different values of the number of topics do not obviously affect the number of nonmatching classes.

From the results discussed above, shared classes and nonmatching classes exist in the initial clusters. However, the majority of the classes are neither shared nor nonmatching. Furthermore, some shared classes are shared by three or more topics, and the number of shared classes is larger than that of nonmatching classes. In addition, the results also show that different settings of the number of topics affect the number of shared classes but do not affect the number of nonmatching classes.

RQ2.
In this subsection, we compare the accuracies of the three clustering approaches to show the effectiveness of our approach. First, we invited the participants to assess the clustering results from the three clustering approaches. The voting results are shown in Table 5. The results show that, in most cases, the results generated by our approach fit their needs better. For the jfreechart.source.org.jfree.chart.plot package, the voting results of the LSI-based clustering and our approach are similar. When we investigated the results in this package more deeply, both clustering approaches output clusters with two true-positive relevant classes (the true-positive relevant classes are those classes that do belong to the authoritative cluster). So the participants were not sure which one was better than the other. Thus, from the participants' qualitative analysis, we notice that our approach can generate clustering results that better fit their needs compared with the LSI-based and PLSA-based clustering approaches.

In addition, to quantitatively compare these clustering approaches, we compute their precision and recall, which are shown in Table 6. From the recall perspective, our approach is always better than (or at least as good as) the LSI-based and PLSA-based clustering approaches. However, from the precision perspective, sometimes our results are better and sometimes the LSI-based or PLSA-based clustering is better. When we investigate the results where our approach achieves lower precision, we notice that the number of classes in the cluster generated by our approach is larger than that generated by LSI-based and PLSA-based clustering. For example, for the jfreechart.source.org.jfree.chart.plot package, there are six classes in the authoritative cluster. Our approach generates five true-positive relevant classes, while the LSI-based clustering approach generates four classes, all of which are true-positive relevant classes. So the precision of the LSI-based clustering approach is higher, while that of our approach is lower. But from the perspective of program comprehension, recall is more important because, with more relevant classes, developers can better comprehend the cluster. That is to say, our approach can cover more relevant classes in the authoritative clusters, which can effectively facilitate program comprehension. So, from the results discussed above, compared with LSI-based and PLSA-based clustering, our approach can effectively identify more relevant classes in a cluster to help program comprehension.

RQ3. Different from other clustering approaches, our approach also labels each cluster with a topic, which is composed of some words describing the cluster. In this subsection, we discuss whether these topics are useful for comprehending the cluster. First, we provided the topics of each cluster to the participants. They used a five-point Likert scale to rate the quality of the topics. The results are shown in Table 7. The average score is around 4, which indicates that the participants think the topics are useful for understanding the clusters. So, for program comprehension, the topics labeling the clusters are useful for users to understand the program.
In addition, we also assess the topics of the clusters quantitatively from the precision and recall perspectives. The results are shown in Table 8. For each cluster, our approach produces a topic that includes some words to label it. These words can cover most of the words given by the participants. For example, for the cluster that includes the AbstractPieLabelDistributor.java class, 82% of the words are covered. Hence, the participants can use these words to help them understand the clusters. From the precision perspective, however, our approach is not very good, and most of the precision results are around 10%. Still, among the remaining roughly 90% of irrelevant words, some are obviously not related to the cluster and are easily identified by the participants, for example, the words "method," "refer," and "jEdit." These words are included in the topics because they are not removed in the preprocessing step. To improve the results, we can improve the preprocessing operations to remove words related to the specific subject programs. Although our approach produces some noisy information, the participants still feel that the topics are useful for understanding the clusters. Hence, from these results, we see that the topics in our approach help developers understand the clustering results.

Threats to Validity

Like any empirical validation, ours has its limitations. In the following, threats to the validity of our case study are discussed.

The first threat relates to the correctness of our experiments and implementation. We have checked the implementation and fixed the bugs we found. Another threat relates to participant bias. We reduced this bias by not telling the participants which results were produced by our approach and which by the baseline approaches. In addition, we only applied our technique to four subject programs. Moreover, we considered only one programming language (Java) and one development environment (Eclipse). Further studies are required to generalize our findings to large-scale industrial projects and to developers who have sufficient domain knowledge and familiarity with the subject systems. Thus, we cannot guarantee that the results of our case study generalize to other, more complex or arbitrary subjects. However, these subjects were selected from open-source projects and are widely employed in experimental studies [42,43]. In evaluating the effectiveness of the clustering results, we randomly selected a number of packages. To further reduce the threats to validity, we plan to evaluate our clustering approach with more packages from more software projects in the future. The final threat comes from the measures used to evaluate the effectiveness of our approach, that is, precision and recall. These two metrics only focus on the false positives and false negatives with respect to the authoritative clustering results. However, for program comprehension, other factors may be more important.

Related Work

Program comprehension is one of the most important activities in software maintenance and reverse engineering [8,10,23,44,45]. Clustering techniques are commonly used to decompose a software system into small units for easier comprehension. Some studies analyze syntax features or dependencies to cluster the software [46][47][48][49][50], while others rely on the semantic information in the source code for clustering [51][52][53][54].
Clustering approaches based on the syntax (structure) of the source code usually focus on the structural relationships among entities, for example, variable and class references, procedure calls, usage of packages, and association and inheritance relationships among classes. Mancoridis et al. proposed an approach that generates clusters using the module dependency graph of the software system [8]. They treated clustering as an optimization problem, which makes use of traditional hill-climbing and genetic algorithms. In [46,55], the Bunch clustering system was introduced. Bunch generates clusters using a weighted dependency graph for software maintenance. Sartipi and Kontogiannis presented an interactive approach composed of four phases to recover cohesive subsystems within systems. In the first phase, relations between programs are extracted. In the second phase, these relationships are used to build an attributed relational graph. In the third phase, the graph is manually or automatically partitioned using data mining techniques [56]. These syntax relationships can help developers understand how functional features are programmed in the source code. In this article, we focus on clustering based on the functional features in the source code, and we used LDA for the semantic analysis of these functional features.

Semantic-based clustering approaches attempt to expose the functional features of a system [57][58][59][60]. The functional features in the source code are analyzed from comments, identifier names, and file names [61]. Kuhn et al. presented a language-independent approach to group software artifacts based on LSI; they grouped source code containing similar terms in the comments [12,62]. Scanniello et al. presented an approach to perform software system partitioning. This approach first analyzes software entities (e.g., programs or classes) and uses LSI to get the dissimilarity between entities, which are then grouped by iteratively calling the k-means clustering algorithm [63]. Santos et al. used semantic clustering to support remodularization analysis of an input program [58]. Our approach uses LDA to generate the clusters, particularly for large-sized packages, to facilitate their comprehension.

In addition, some program comprehension techniques combine the strengths of both syntax and semantic clustering [7,38][64][65][66]. The ACDC algorithm is one example of this combined approach; it uses the names and dependencies of classes to cluster all classes in a system into small clusters for comprehension [3]. Andritsos and Tzerpos proposed LIMBO, a hierarchical algorithm for software clustering [7]. This clustering algorithm considers both structural and nonstructural attributes to reduce the complexity of a software system by decomposing it into clusters. Saeidi et al. proposed to cluster a software system by incorporating knowledge from different viewpoints of the system, that is, knowledge embedded within the source code as well as the structural dependencies within the system [67]. They then adopted a search-based approach to provide a multiview clustering of the software system. In this article, we focused on semantic analysis of the source code for its clustering. In addition, our approach also generates topics to help users more easily understand the classes in the clusters.
Conclusion and Future Work

In this article, we propose an approach for clustering classes in large-sized packages for program comprehension. Our approach uses LDA to cluster large-sized packages into small clusters, which are labeled with topics to show their features. We conducted case studies on four real-world open-source projects to show the effectiveness of our approach. The results show that the clustering results of our approach are more relevant than those of the other clustering techniques, that is, LSI-based and PLSA-based clustering. In addition, the topics labeling these clusters are useful in helping developers understand them. Therefore, our approach could provide an effective way for developers to understand large-sized packages quickly and accurately.

In our study, we only conducted studies on four Java-based programs, which does not establish the generality of the approach for other types of systems. Future work will focus on conducting more studies on different systems to evaluate the generality of our approach. In addition, during the clustering process, we found that some classes are weakly coupled with their package but are more related to another package; that is, there are problems with the current package structure. So we consider applying our clustering approach to improve the package structure. Finally, our approach is a first step in a top-down program comprehension process; in the future, we plan to cluster other finer-level program elements, for example, methods, to provide more comprehensive top-down program comprehension support to better understand a software system.

Figure 1: Process of our approach.
Figure 2: An example of separating packages into large-sized packages and small-sized packages for six packages in jEdit when P is set to 5.
Figure 3: The process of preprocessing the class InvalidHeaderException.java in jEdit.
Figure 4: An example of the output of an LDA application.
Figure 5: An example of generating clusters for a large-sized package (with the number of topics set to 5).
Figure 6: Shared counts, number of shared classes, and number of nonmatching classes in the initial clusters for the number of topics set to 5, 10, and 15.
Table 2: The percentage of classes over P (5, 10, and 15) of the four systems.
Table 3: The selected packages and selected classes.
Table 4: The percentage of initial clusters without nonmatching classes and shared classes.
Table 5: The votes for our approach and the LSI-based/PLSA-based clustering approaches.
Table 6: The precision and recall of our approach and the LSI-based/PLSA-based clustering approaches.
Table 7: The scores assessed by the participants on the topics.
Table 8: The precision and recall of our approach in inferring representative words to label clusters.
Who leads the conversation? Influential Twitter users during a niche sporting event

Abstract

Fans of niche sports generally find minimal content in mainstream media due to their limited audience. Instead, social media offers them the opportunity to follow these specific sports. The dynamics behind digital media are based on individual participation; hence, some prominent users lead the social conversation thanks to their capacity to influence. However, the complexity of the concept of influence and the existence of multiple parameters for its measurement make it difficult to identify these key users. Our research proposes a measure of influence on Twitter based on variables derived from the platform (number of tweets, number of retweets, and number of followers) and from Social Network Analysis (outdegree, indegree, and PageRank). The Analytic Hierarchy Process was used to assign a weight to each variable. This measure of influence was applied to the conversation generated on Twitter around a niche sporting event: the 2018 UCI Track Cycling World Championships. From a 19,701-tweet corpus, we identified the 25 most influential users. The results indicate that the organisers and the participating cyclists played a relevant role in the Twitter conversation. In addition, the geographic distribution of these influential users reflects the cultural dependence of niche sports.

Keywords: AHP; cycling; niche sports; sporting events; influential users; Twitter

Introduction

Sport teams need communication to raise their public awareness. Mainstream media capture a good deal of this interest, as they reach a large audience. The development of digital channels has facilitated increasing access to sports information. User interaction is a key difference between the mass media, which provide clear top-down-oriented communication, and the digital media, which allow more horizontal communication. As users engage in the digital conversation, they help to spread information and create new content as a manifestation of a strong sense of belonging (Chan-Olmsted & Xiao, 2019; Thompson, Martin, Gee & Geurin, 2018; Vale & Fernandes, 2018). This engagement can also be considered by sports teams as an asset to increase their financial value (Scelles, Helleu, Durand, Bonnal & Morrow, 2017), promote brand sponsorship (Santomier, 2008) and attract spectators (Nisar, Prabhakar & Patil, 2018). In a similar vein, social networks enhance the experience of attending sport events, providing event organisers and sports teams with new sources of information to help better understand their relationship with spectators, sports fans and sponsoring brands.

Twitter has singular characteristics among the digital media.
This social network provides quick interaction among users and strongly contributes to the spread of information through viral mechanisms. In the context of sports communication, Twitter helps to get a picture of the main topics discussed by users (González et al., 2021; Huang, Shen & Li, 2018; Méndez-Guzmán, Zhang & Ahmed, 2021). The open conversation on Twitter allows us to explore how athlete branding unfolds over a period of time (Su, Baker, Doyle & Kunkel, 2020), and the different ways teams can build strong relationships with fans (Naraine, 2019; Wang, 2021). The way professional athletes, sport clubs, and amateurs take part in this online conversation has also drawn attention (Hutchins, 2011; Kassing & Sanderson, 2010). In particular, Twitter has modified the way TV spectators watch sport events, as this platform provides immediate interaction with many other users who are following the same event simultaneously (Smith, Pegoraro & Cruikshank, 2019; Yan, Watanabe, Shapiro, Naraine & Hull, 2018).

In addition to these possibilities for broadening the experience of sport communication, Twitter opens up promising opportunities for minority sports. Mainstream media usually pay attention to sports that have a large spectator base, as their business model is based on viewing figures. However, the Internet in general, and Twitter in particular, make low-demand products and services accessible to those users interested in them. This phenomenon, known as the long tail (Anderson, 2006), is perfectly suited to niche sports on Twitter. According to Miloch and Lambrecht (2006), professional niche sports appeal to a small segment of sport consumers. Among the examples they provide we find lacrosse, bowling, fishing, curling, horse racing and archery. All of them were related to the US public, as the consideration of a sport as niche depends on the particular society to which it refers. Nevertheless, precisely because this sort of sport can be considered a niche product, its athletes and followers are much more homogeneous than those of mainstream sports. This feature is of great interest for sponsorship funding (Greenwell, Greenhalgh & Stover, 2013; Miloch & Lambrecht, 2006). In terms of communication, social media is a key channel for the niche sport fan group, especially for information gathering and building communities, thanks to user interaction (Kang, Rice, Hambrick & Choi, 2019). In the dynamics behind these platforms, information flows through interaction. In this regard, there is a particular group of users who boost this information flow in the network when they interact with the message, making content viral.

This research explores the main profiles of the most influential Twitter users during an international event of a niche sport. To tackle this problem, we developed an influence index based on six variables. Some of them are taken from the direct participation of the user, and the rest from social interaction. We composed this index based on a novel methodology in digital communication research: the Analytic Hierarchy Process (AHP) (Lamirán-Palomares, Baviera & Baviera-Puig, 2020). This technique establishes a weight for a series of variables based on the judgements of a group of experts. AHP is particularly appropriate for this problem, as it allows us to quantify the attributes of a complex phenomenon such as influence on Twitter. The process ensures consistency among the opinions collected, so that the output can be considered reliable according to the group of experts.
Once we had drawn up the index, we applied it to the case of the 2018 UCI Track Cycling World Championships, a niche sport among cycling modalities. The paper is structured as follows. The theoretical framework reviews the literature on influence on Twitter, Social Network Analysis (SNA) applied to Twitter, and niche sports. After that, we briefly introduce the 2018 UCI Track Cycling World Championships. Then, the research objectives are presented. Next, the research methodology is outlined, where the AHP stands at the core of the process. Subsequently, the results are presented and discussed. The paper ends by pointing out some limitations of our research.

Influence on Twitter

The dynamics of digital communication have different parameters from those of the traditional media, whose main indicator is the audience. On the Internet, the logic of network communication bestows great power on users whose intervention propels the dissemination of the message. This action can be considered part of their influence in the network. Influence can be considered broadly as the ability of an individual to make others change their attitude, opinion, or commentary (Dubois & Gaffney, 2014). In this regard, social networks create special conditions for influencing due to the interactions of the users. Nevertheless, identifying a user as influential is complicated. Some theories help to explain different aspects of how this influence can be conceived.

Agenda-setting theory (McCombs & Shaw, 1972) explains the capacity of mass media to determine the news items that are of informative interest. Depending on the way they present the content, mass media play a role in attributing different levels of importance to them. This way of influencing is focused on the content and is explained by the prescriptive role exerted by mainstream media. With the arrival of social media, this role has had to be shared, at least partially, with other agenda-setters (Blasco-Duatis & Coenders, 2020). As Rubio García (2014) states, there is a strong correspondence between the media agenda and the public agenda reflected on Twitter.

Another relevant conceptualisation of the process of media influence was the two-step flow theory (Katz, 1957). This theory underlined the bridging role that certain individuals played between the media and the public, so that these individuals could be considered prescribers of the information published by the media. These individuals were designated opinion leaders and were initially characterised by having a wide network of contacts, being considered experts on a specific topic, and having a relevant position within a local community. In this case, influence is exerted by way of personal interaction. The advent of the Internet revived this two-step communication model, which had been weakened by the direct effect of television. Veglis and Maniou (2018) suggested an evolution from the two-step theory to a model of communication flows where the role of intermediation with the network of contacts becomes crucial when analysing influence. The possibility of tracking the interactions and the content published in social media has prompted research to identify the key actors in disseminating messages (Denia, 2021). Researchers have used different metrics to identify influential Twitter users with the aim of assessing their position within the structural network.
Kwak, Lee, Park & Moon (2010) ranked users according to the number of followers and also added the PageRank variable, which was originally used in web positioning (Page & Brin, 1998). Bakshy, Mason, Hofman & Watts (2011) linked influence to the follower base and the time active on Twitter, but suggested that users with a smaller base could be more effective for marketing campaigns. González-Bailón, Borge-Holthofer & Moreno (2013) focused on user activity, counting their Twitter messages over a period of time, and identified four types of users. Lara-Navarra, López-Burrull, Sánchez-Navarro & Yànez (2018) presented different instruments used to measure influence in social media. In sport communication, different studies have analysed Twitter profiles that can operate as influential in the conversation. Athletes have been the subject of some of these studies, in many cases leading them to be seen as "celebrities" (Kassing & Sanderson, 2010; Pegoraro, 2016). Other sport-related users whose activity on Twitter has also been analysed include journalists (Hambrick & Sanderson, 2013), sponsors (Meenaghan, Mcloughin & McCormack, 2013) and event organisers (Hambrick, 2016; Wäsche, 2015). Focusing on influence, other studies have already attempted to identify influential users on Twitter in different sports.

Influence on Twitter comprises multiple approaches, as it is difficult to capture in just one measure. One methodology that has proved to be very useful for characterising Twitter conversations is Social Network Analysis (SNA). This methodology studies the interaction among social agents (Scott, 2017; Wasserman & Faust, 1994). The social relationships are usually represented in a graph. This figure is composed of nodes, which represent the social agents, and edges, which represent the interactions between two agents. A node is more important if it plays a relevant role in the interaction (de Nooy, Mrvar & Batagelj, 2005). This relevance corresponds to the concept of centrality. Two basic measures of centrality are the indegree of a node (the number of interactions directed to that node) and the outdegree (the number of interactions initiated by that node). The SNA methodology is very well suited to the relationships created through posting on Twitter, as some activities, such as following, retweeting, and mentioning, can be considered interactions.

To construct our model, we selected two variables related to each of the three dimensions identified, with the aim of quantifying the phenomenon of influence on Twitter. In total, we had six variables. Although it is not easy to find neat boundaries to classify the variables, the dimensions helped us select the appropriate metrics to assess user influence. Three of these variables correspond to direct posting by the Twitter user, whereas the other three come from considering Twitter interaction in terms of SNA, i.e., retweeting and mentioning. The six variables are as follows. User activity is reflected in the number of tweets posted by a user (González-Bailón, Borge-Holthofer & Moreno, 2013). More tweets imply a more participative user. Tweets can be just a message, in plain text, or may contain a reference to another user. This is the case when retweeting a tweet or mentioning a user. This referential activity is measured by the outdegree of the node. A user can be considered endowed with authority if their tweets are retweeted.
So, the number of retweets of a user is a way of indicating peer acknowledgement of the worth of their publications (Albero-Gabriel, 2014; Cha et al., 2010). In terms of SNA, the authority of a user is captured by the PageRank metric. This measure ranks a user higher when they are linked to by other users who in turn have high PageRank. Therefore, PageRank provides a measure of the density of the interaction relationships. This means that if a high-PageRank user intervenes in a conversation, the message is much more likely to spread quickly than if a low-PageRank user intervenes.

Niche sports and social media

Niche sports are characterised by their reduced audience (Miloch & Lambrecht, 2006). The specific interest aroused by this kind of sport makes it very appropriate for leveraging long-tail dynamics (Anderson, 2006): social media bring together supporters and enthusiasts who otherwise would be unattended by mainstream mass media. The interest in researching niche sports has been linked to the special possibilities they provide to sponsoring brands (Miloch & Lambrecht, 2006). Greenhalgh and Greenwell (2013) asked more than 30 sponsors about the criteria they used to select which niche sports they would promote. Audience reach, cost-effectiveness, and the fit between company image and target market were positioned as the main criteria, whereas social media opportunities were ranked among the least important criteria for investment.

Recently, some researchers have examined specific forms of social media communication about niche sports. Kang et al. (2019) analysed three marketing activities in the context of CrossFit, a niche sport with scant media coverage in the US. They examined Twitter, Facebook and YouTube posts. According to these researchers, these platforms were used primarily to provide information and to interact with the community, with less content devoted to product promotion. Trivedi, Soni & Kishore (2021) conducted a study about the Pro Kabaddi League, a minority sport in India. They considered three social media communication activities: user-generated content, firm-generated content and social media ads. The results proved the influential role played by social media communication in boosting fans' online community engagement, and subsequently game attendance and sponsors' product purchase intention. Mastromartino et al. (2020) analysed the factors influencing the socialisation of ice hockey fans in the Sunbelt region of the US, where the sport began to appear 25 years ago. These researchers found evidence that the ways the fan base socialises were departing from the traditional sources of family and media exposure. The paper suggests that this change may be due to the access to communication technology of the new generation. Nevertheless, scant attention has been paid to the specific way information about niche sports flows on Twitter through prominent users. This point may be of great interest for event organisers, brand sponsors and sports teams.

The 2018 UCI Track Cycling World Championships

Track cycling is a modality of bicycle racing mainly oriented to professional riders. Races take place in velodromes, special arenas designed for cycling at high speed. Track bicycles are characterised by having a fixed gear and lacking brakes. They are designed to reduce resistance as much as possible to increase velocity.
Unlike road cycling, track cycling competitions draw much smaller media audiences, so track cycling can be considered a niche sport. The UCI Track Cycling World Championships are the most important event in this niche sport. They are held annually by the International Cycling Union (UCI in French). Professionals and amateurs compete together, representing their countries. These championships encompass several events, such as the time trial, individual pursuit, team pursuit and scratch race. There are races for women and races for men. The winning rider is distinguished by the UCI with a rainbow jersey. The 2018 edition took place in Apeldoorn, the Netherlands, from 28 February to 4 March. There were 40 competing nations and 20 events. The medal table was headed by the Netherlands, the host country, with 12 medals. Germany, Great Britain, Australia, and Italy achieved 6 medals in the championships.

Research objectives

The aim of this research is to analyse in depth the impact of Twitter on a niche sport. The key issue structuring the whole research process is the concept of influence in social networks. Based on this assumption, three main objectives were set.
O1: To draw up an index to measure the Twitter influence of each user participating in the conversation, based on the following variables: number of tweets, outdegree, number of retweets, PageRank, number of followers and indegree.
O2: To identify the most influential users in the Twitter conversation about the 2018 UCI Track Cycling World Championships.
O3: To analyse the contribution of the most influential Twitter users to the event from the perspective of it being a niche sport.

Analytic Hierarchy Process

The AHP was designed by Saaty (1980) and has been applied to a wide variety of problems, including sports-related questions (Lee & Walsh, 2011; Sinuany-Stern, 1988). AHP requires a group of experts to evaluate the importance of different criteria for solving a given problem. Its basic principle assumes that the experience and knowledge of the experts involved is as important as the data, so it is used in problems where both quantitative and qualitative aspects need to be assessed. The goal is to prioritise a series of items. The AHP allows the items to be structured in different levels. In the case of two levels, the process distinguishes between criteria and subcriteria, in such a way that a group of subcriteria depends on each criterion. The experts prioritise the different items by filling in a questionnaire. Each question always compares two items. The respondent scores each pair of items from one to nine, where one means equal importance of both items, and nine means extreme importance of one item over the other. First, all the criteria are compared in pairs. Then, the questionnaire asks about the subcriteria pertaining to the first criterion, following the same approach of pairing the items. The following question asks about the importance of the subcriteria associated with the second criterion, and the process continues in this way to cover all the criteria with their respective subcriteria. AHP goes beyond a mere sorting of items, as it integrates several hierarchical levels (criteria and subcriteria) and takes into account the degree of relevance between two items considered by the experts (qualitative estimation). This leads us to the critical contribution of the process. One matrix is created with the peer-to-peer comparisons made by each expert; the sketch below illustrates, on hypothetical judgements, how weights and consistency are then derived.
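A minimal Python sketch of these computations, using hypothetical judgement matrices on Saaty's 1-9 scale rather than the ones collected in this study, could look as follows; the random-index table and the 10% threshold are the standard AHP values discussed next:

```python
import numpy as np

# Saaty's random consistency indices for matrices of size 1..9.
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights_and_cr(matrix):
    """Priority weights (principal eigenvector) and consistency ratio."""
    A = np.asarray(matrix, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()
    consistency_index = (eigvals[k].real - n) / (n - 1)
    return weights, consistency_index / RANDOM_INDEX[n]

# Hypothetical judgements over the criteria authority/popularity/activity.
expert_1 = [[1, 3, 7], [1/3, 1, 5], [1/7, 1/5, 1]]
expert_2 = [[1, 5, 7], [1/5, 1, 3], [1/7, 1/3, 1]]

for A in (expert_1, expert_2):
    w, cr = ahp_weights_and_cr(A)
    print(np.round(w, 3), "CR =", round(cr, 3))  # keep only if CR < 0.10

# The element-wise geometric mean merges the validated matrices into a
# single group judgement while preserving reciprocity.
group = np.exp(np.mean([np.log(np.asarray(m)) for m in (expert_1, expert_2)],
                       axis=0))
print(ahp_weights_and_cr(group)[0])
```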
All the elements of such a comparison matrix are positive and verify the property of reciprocity (Saaty, 1980). However, the matrix does not necessarily comply with the property of consistency, as the experts' judgements are subjective (Marin-García, Aragonés-Beltrán & García Melón, 2014). To verify this property, the consistency ratio must be calculated. This ratio reflects how consistent the judgements made by one expert are on the whole. The ratio is compared with a reference value that varies according to the size of the matrix (Saaty, 2008). Following Marin-García, Aragonés-Beltrán & García Melón (2014), our threshold for the consistency ratio was set at 10%. Values lower than 10% mean that the successive judgements made by the expert have been consistent throughout the process, so that their comparisons can be considered reliable. Values higher than 10% reflect that the expert's judgements lack consistency and, therefore, that this evaluation should be reviewed or discarded. In order to merge the validated judgements into a single judgement representative of the entire group, the geometric mean is used. Saaty (2008) recommends this method, as it maintains the reciprocity property of the comparative judgements.

In our case, the dimensions of influence (activity, authority and popularity) play the role of AHP criteria, while the variables we identified as critical for measuring influence on Twitter correspond to the AHP subcriteria (number of tweets, outdegree, number of retweets, PageRank, number of followers and indegree). Figure 1 presents the integration into the AHP methodology of the hierarchical relationships between influence dimensions and influence variables.

The AHP questionnaire initially provided a brief description of the items we wanted to prioritise, i.e., the criteria and the subcriteria. The question for the criteria was posed as follows: "Which of these two alternatives do you estimate is more important when considering the influence of a user on Twitter? If you estimate one alternative more relevant than the other, please state the degree of importance, ranging from one (equally important) to nine (extremely more important)." An example of the scale for evaluating one pair of criteria was: "Authority 9 7 5 3 1 3 5 7 9 Activity". An example of a question regarding a pair of subcriteria was: "Which of these two variables do you estimate is more important when considering the activity of an influential user on Twitter? If you consider that one alternative is more relevant than the other, please indicate the degree of importance, ranging from one (equally important) to nine (extremely more important)." The scale presented to the respondent was as follows: "Number of tweets 9 7 5 3 1 3 5 7 9 Outdegree". The questionnaire ended by asking about sociodemographic variables. The experts who answered the questionnaires were fifteen intensive Twitter users. Ten of them worked in marketing, media and communication agencies of different sizes, and 80% of them collaborated with universities providing training in digital marketing. Fourteen of them had college degrees and five of them were PhDs.

Figure 1. AHP structure used to assess Twitter influence. Source: own creation.

Corpus and variables

The dataset was built by collecting the tweets that included the Championships' official hashtag #Apeldoorn2018. The extraction period extended from February 28 to March 4. We employed Audiense software. The size of the resulting corpus was 19,701 tweets, posted by 7,281 different users.
We obtained the number of tweets, the number of followers and the number of retweets for each user directly from the dataset. To evaluate the indegree, the outdegree and the PageRank of each user, we first drew up the graph of the interaction among the users registered in the dataset, and then we evaluated those variables with Gephi (Bastian, Heymann & Jacomy, 2009), a popular software package for SNA.

Research process

The main research objective was to identify the influential users based on the variables selected after reviewing the academic literature. To construct a unified measure of influence, we carried out the expert consultation according to the AHP methodology. The output of this process provided the weight of each variable. Next, we estimated the variables involved for all the users present in the dataset. Given the variables and the corresponding weights, we assessed the degree of influence of each user according to the function

Influence(u) = Σ_i Σ_j C_ij · v_ij(u),

where C_ij corresponds to the weight of the j-th subcriterion of the i-th criterion and v_ij(u) is the value of the corresponding variable for user u. Each variable was normalised over the sum of all its values. In this way, the influence index is a number between 0 and 1, and the sum of all the influence indexes is 1. This unified measure allowed us to sort all the users according to their degree of influence, and the 25 most influential users were identified. Finally, the top 25 users were categorised into seven groups according to their profile: 1 = participating athletes; 2 = media; 3 = amateurs; 4 = cycling-related media; 5 = journalists, bloggers and content creators; 6 = cycling-related institutions (federations, event organisation); 7 = others. Figure 2 summarises the research process.

Model of Twitter user influence estimated by experts

The consulted experts completed the questionnaires with the peer-to-peer comparisons. Consistency ratios were calculated for each questionnaire as AHP prescribes. Three of the fifteen questionnaires had to be discarded, as they obtained a consistency ratio higher than 10%. With the results of the validated questionnaires, the geometric mean was calculated, as recommended by Saaty (2008), to obtain a single judgement representative of the whole group. Table 1 shows the weights of the subcriteria that assess the influence of a user on Twitter according to the experts consulted. The most important subcriteria, or variables, when evaluating Twitter user influence were the number of retweets (37.28%), PageRank (24.75%) and indegree (20.17%). The least important were the number of tweets (3.22%), outdegree (5.16%) and number of followers (9.42%). In terms of criteria, the authority dimension obtains the highest weight, as the criterion value is the sum of the weights of its subcriteria. The first stage of the methodology ended with these weights. This stage is generic; thus, the list of subcriteria and their weights can be used in any Twitter influence prioritisation process, regardless of the type of issue being evaluated. In the next section, we apply these results to the 2018 UCI Track Cycling World Championships to find out the most influential users during this event.

Centrality measures

Gephi provided the centrality measures for each user registered in the dataset: outdegree, indegree and PageRank. Figure 3 shows the graph of Twitter users' interaction during the studied event. The node size represents the outdegree measure. The colours show the different clusters or groups identified by Gephi.
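To show how the pieces fit together, the following sketch rebuilds the pipeline at toy scale, with networkx standing in for Gephi; the users, edges and counts are hypothetical, while the weights are those reported in Table 1:

```python
import networkx as nx

# AHP weights from Table 1 (they sum to 1).
WEIGHTS = {"tweets": 0.0322, "outdegree": 0.0516, "retweets": 0.3728,
           "pagerank": 0.2475, "followers": 0.0942, "indegree": 0.2017}

# Hypothetical retweet/mention edges: source user -> referenced user.
G = nx.DiGraph([("fan_a", "UCI_Track"), ("fan_b", "UCI_Track"),
                ("fan_a", "BritishCycling"), ("UCI_Track", "fan_c")])
pagerank = nx.pagerank(G)
indeg, outdeg = dict(G.in_degree()), dict(G.out_degree())

# Hypothetical per-user counts (tweets, followers, retweets received).
raw = {u: {"tweets": t, "followers": f, "retweets": r,
           "indegree": indeg.get(u, 0), "outdegree": outdeg.get(u, 0),
           "pagerank": pagerank.get(u, 0.0)}
       for u, t, f, r in [("UCI_Track", 294, 14178, 900),
                          ("BritishCycling", 120, 50000, 400),
                          ("fan_a", 10, 200, 2), ("fan_b", 5, 80, 0),
                          ("fan_c", 3, 50, 1)]}

# Normalise each variable over its sum, then take the weighted sum, so
# every index lies in [0, 1] and all indexes add up to 1.
totals = {v: sum(user[v] for user in raw.values()) or 1 for v in WEIGHTS}
influence = {u: sum(WEIGHTS[v] * raw[u][v] / totals[v] for v in WEIGHTS)
             for u in raw}
for user, score in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(f"{user:16s} {score:.3f}")
```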
The clusters in Figure 3 reflect users who are grouped by close interactions. One of the reasons for this proximity is nationality. Thus, for example, the profiles of Colombian cyclists and their followers are included in the same green group and speak the same language (Spanish). The profiles of Spanish cyclists are coloured orange, and those of the British in blue. Peripheral positions influence neighbouring clusters, while central positions reach a greater number of users. This figure synthesises the conversation and identifies the relative position of the users in the global interaction. We now proceed to use the AHP results to find out the most influential users in this interaction.

Most influential users

Influence is assessed for every user according to the six variables measured (number of tweets, outdegree, number of retweets, PageRank, number of followers and indegree) and the weights provided by the AHP process. This index allows all 7,281 users to be sorted. Table 2 shows the top 25 influential users according to our influence assessment and includes their profile classification. Observing the 25 most influential users, we find six athletes, five men and one woman (category 1); thirteen users related somehow to the event, such as organisations and federations (category 6); two mass media (category 2); and four from other categories (4 and 7). It should be noted that the first three positions, UCI_Track, BritishCycling and fedeciclismocol (the Colombian Cycling Federation), as well as the fifth (FederCiclismo), belong to the same category (6). It is also important to note that the local event organiser account, wkbaanapeldoorn, appears in position 25. The next most important user group is that of the athletes. The Colombian Fabián Puerta, the Spaniards Sebastián Mora and Albert Torres, the Italian Filippo Ganna and the German Maximilian Levy stand out in this group. The only woman who appears in this ranking is Chloe Dygert, from the USA. Finally, we have another six users: two generalist media, Eurosport from Italy and the BBC, which broadcast the event, and four users from two different groups, namely three public figures, Juan Manuel Santos (Colombian President), Carlos Vives (Colombian singer) and Clara Luz Roldan (Colombian politician), and a specialised medium (Mundo Ciclístico magazine). These four users have in common that their origin is Colombian.

If we analyse the outcomes by country of origin, we see that the country with the highest number of users in this ranking is Colombia, with six positions. Italy comes next with five users, followed by the United Kingdom and Spain with three, and finally the United States, Japan and France with two users each.

Discussion

The aim of this research was to study the conversation dynamics on Twitter during a niche sport event. The backbone of the research is the concept of influence. In particular, we are interested in characterising the professional profiles of those users who stand out due to their capacity to spread information through the interaction network and due to their status as a reference in the conversation. This is even more relevant when analysing the case of a niche sport, as its public impact is much more reduced because of the scant or null mass media coverage. In this case, we focused on the 2018 UCI Track Cycling World Championships and the conversation generated on Twitter containing the hashtag #Apeldoorn2018.
The first objective (O1) was to build a measurement for assessing influence on Twitter. This phenomenon was considered from three perspectives or dimensions: the activity, authority and popularity of the user. For each dimension, we picked two variables that could serve as a metric for that aspect. All six variables provided useful information on the user's influence in the Twitter conversation. For this reason, the AHP methodology was employed, as it allows researchers to draw up an index combining different weighted variables. The AHP output provides the weights based on the nuanced judgements of an expert panel, and the procedure ensures that those qualitative opinions are consistent. According to the experts consulted for this research, the results of the AHP process highlighted the relevance of the authority dimension (62%) over the popularity (29.6%) and activity (8.4%) of a user when considering them influential on Twitter. Looking at the variables involved, the global opinion given by the experts allocated 37.28% of the weighting to the number of retweets, 24.75% to PageRank and 20.17% to the indegree measure. This result suggests, on the one hand, the importance of content for being considered influential, as we can assume that high-quality messages will be shared by a greater number of users. On the other hand, we can infer the advantage of being referenced by other users who are themselves well referenced in the whole interaction network. The other three variables received smaller weights: 9.42% for the number of followers, 5.16% for the outdegree metric and 3.22% for the number of tweets. Therefore, in our model we could say that influence is, at an essential level, a phenomenon linked more with quality than with quantity. In other words, activity alone does not create influence; rather, the quality of one's interventions (captured by the authority and popularity dimensions) does.

The weights obtained in the AHP procedure allowed us to identify the top 25 influential users who tweeted about the 2018 UCI Track Cycling World Championships held in Apeldoorn during the first days of March (O2). The two most prominent user profiles in this classification are the sport organisations and the athletes. Hambrick, Simmons, Greenhalgh & Greenwell (2010) and Naraine, Schenk & Parent (2016) considered that the event organiser and sport journalists would exert an influential role in the Twitter conversation during sport events. In our case, in the first position we find the main organiser user, UCI_Track, which is by far the most influential user (23.3% of normalised influence, whereas the next user has 7.9%). It posted the most (294 tweets), and its follower base was medium sized compared with the rest of the table (14,178). The fact of this being a niche sport made its contribution more valuable. However, we did not find any journalists in the top 25. Instead, the classification revealed two mainstream media (Eurosport_IT and BBCSport), one specialised magazine (mundociclistico) and three public figures (JuanManSantos, ClaraLuzRoldan and carlosvives). This lack of journalists could be explained precisely by the fact that we are dealing with a niche sport. It should be noted that the greater or lesser influence of these top 25 users is not always related to the number of medals achieved in the races.
The Netherlands led the final medal count with 12 medals and was also the host country, but this did not translate into a higher number of users from this country among the 25 most influential. In fact, the local event user wkbaanapeldoorn appears in position twenty-five and is the only one belonging to the Netherlands in Table 2. In contrast, Colombia won only one medal, and yet there are six users of Colombian origin in the ranking. These are the two extreme cases. In the intermediate situations, we find Italy, the United Kingdom, Spain, the United States, Japan and France: athletes from these countries won medals, and in addition Twitter users from these countries were ranked among the most influential during the event. This unbalanced situation could be explained by the fact that in the Netherlands track cycling is considered a niche sport, whereas this is not the case in Colombia, where cycling is much more popular. This shows how the consideration of a sport as niche or not depends on the national culture in which it takes place (Miloch & Lambrecht, 2006).

These outcomes confirm the relevant role played by cycling-related institutions, e.g., federations and associations (category 6), in promoting the Twitter conversation about the event. Yan et al. (2019), in their research into the UEFA Champions League Final, obtained a similar result when they identified the prominence of large sports entities in the Twitter network structure. In their case, it was the Champions League that held a privileged position in the ranking. Therefore, we can see how this group is relevant both in large sporting events and in niche sports. In this sense, the strategic management of social media by this type of institution is a fundamental communication resource.

The participating cyclists (category 1) are another group with a strong presence in the list of top influential users, and therefore their role as catalysts of the online conversation could be considered crucial. This suggests that their ranking position could be a consequence of their sporting results and of their activity on, and the mentions they receive on, Twitter. In this regard, the role that Twitter could play as a means of amplifying the sporting results obtained by the athletes would be very important in terms of future personal promotion. These results are consistent with those obtained by Kang et al. (2019) for niche sports. They also found that, despite their relevance in the digital conversation, both kinds of actors (categories 1 and 6) did not take advantage of promotional opportunities as much as of interaction and information opportunities. It is worth mentioning that all the athletes positioned in this ranking won a medal during the championships; remarkably, no athlete from the organising country (the Netherlands) appears in this classification, even though several were present in the medal table and also had Twitter accounts.

The scenario for this study was a niche sport (O3). In addition to the cultural differences among the tweeting communities and the invigorating role played by the event organiser, Table 2 highlights the key presence of public figures in the conversation. The Colombian individual users tweeted just once, and yet they were regarded as influential actors in the conversation by our index. Two of them had a very significant follower base (JuanManSantos with 5.3M and carlosvives with 5M).
The participation of celebrities always has a strong impact, which is even more evident in the case of a niche sport. Although digital communication has helped promote niche sport fandom (Mastromartino et al., 2020), the main actors in the Twitter conversation around niche sports are organisations and mainstream media. This salience makes it difficult for niche sport sponsors to invest in social media. The classification of user profiles into different categories according to origin, profession or sporting results gives our research significant added value by transferring the scope of events to a global digital environment. This suggests that social media influence spans traditional boundaries and expands the reach of events to a broad digital sphere. This could be particularly relevant for event organisers and athletes of niche sports in their digital communication strategies.

Limitations and future lines of research

One limitation of our research is that it does not consider the dynamic nature of social media. The temporal dimension could be included in future research to compare the interaction network in different time periods (before, during and after the event), as Abeza et al. (2014) did. In this regard, it would be useful to analyse the evolution of the network throughout the duration of the event. Another important limitation is the expert panel and its size. The research results depend on this consultation, and one way to improve the outcome would be to expand the group of experts. At the least, this study has demonstrated the utility of applying the AHP methodology to the problem of quantifying influence. As future lines of research, it could be interesting to compare these results with another sporting event (a World Cup or European Championships) to determine whether the most influential users follow the same pattern as in this research. In this way, the conclusions referring to the grouping of users into categories, or to the geographical origin of the accounts and its relationship with the sporting results obtained, could be verified. Another line of research would be to analyse the reach of the Twitter posts of the different influential users over their communities. In this way, the impact could be better estimated, both in terms of media coverage and of sponsorship, either of the event itself or of a particular athlete. Finally, further research could test whether these results also hold for other social networking sites, such as Facebook or Instagram.

Conclusions

Influence on Twitter is constructed from a variety of perspectives. Following the AHP methodology, we were able to draw up an index for assessing the relative degree of influence of a user participating in a Twitter conversation. Aspects related to quality rather than quantity were rated higher by the experts consulted: the variables referring to authority (number of retweets and PageRank) were considered more indicative of Twitter influence than those referring to activity (number of tweets and outdegree). This influence index was applied to the Twitter conversation during the 2018 UCI Track Cycling World Championships. The fact that it is a niche sport enabled us to delve into the digital communication around a lower-demand sport. The global conversation confirmed the cultural dependence of this kind of sport. In addition, the top 25 influential users comprised cycling organisations, athletes, media and public figures.
Although influence is a contextualised and complex phenomenon, the use of AHP provided a useful tool for identifying the most influential users on Twitter. This list helped us approach a niche sport with the aim of determining more accurately who leads the online conversation. In this regard, our research reveals the important role that social media can play in promoting a niche sport that has no opportunities for coverage in the traditional media. If a proper strategy is developed through the right social media influencers, it can make a significant impact.
Massive photons from Super and Lorentz symmetry breaking

In the context of Standard Model Extensions (SMEs), we analyse four general classes of Super Symmetry (SuSy) and Lorentz Symmetry (LoSy) breaking, leading to observable imprints at our energy scales. The photon dispersion relations show a non-Maxwellian behaviour for the CPT (Charge-Parity-Time reversal symmetry) odd and even sectors. The group velocities also exhibit a directional dependence with respect to the breaking background vector (odd CPT) or tensor (even CPT). In the former sector, the group velocity may decay following an inverse squared frequency behaviour. Thus, we extract a massive and gauge-invariant Carroll-Field-Jackiw photon term in the Lagrangian and show that the mass is proportional to the breaking vector. The latter is estimated by ground measurements and leads to a photon mass upper limit of $10^{-19}$ eV or $2 \times 10^{-55}$ kg, and thereby to a potentially measurable delay at low radio frequencies.

We largely base our understanding of particle physics on the Standard Model (SM). Despite having proven to be a very reliable reference, there are still unsolved problems, such as the Higgs Boson mass overestimate, the absence of a candidate particle for the dark universe, as well as the neutrino oscillations and their mass. Standard Model Extensions (SMEs) tackle these problems. Among them, SuperSymmetry (SuSy) [1,2] envisages new physics at TeV scales [3]. Since, in SuSy, Bosonic and Fermionic particles each have a counterpart, their mass contributions cancel each other and allow the correct experimental low mass value for the Higgs Boson. Lorentz Symmetry (LoSy) is assumed in the SM. It emerges [4][5][6][7] that, in the context of Bosonic strings, the condensation of tensor fields is dynamically possible and determines LoSy violation. There are opportunities to test the low energy manifestations of LoSy violation through SMEs [8,9]. The effective Lagrangian is given by the usual SM Lagrangian corrected by SM operators of any dimensionality contracted with suitable Lorentz-breaking tensorial (or simply vectorial) background coefficients. In this letter, we show that photons exhibit a non-Maxwellian behaviour, are massive, and possibly manifest dispersion at low frequencies, pursued by newly operating ground radio observatories and future space missions.

LoSy violation occurs at larger energy scales than those obtainable in particle accelerators [26][27][28][29][30][31][32]. At those energies, SuSy is still an exact symmetry, even if we assume that it might break at scales close to the primordial ones. However, LoSy violation naturally induces SuSy breaking, because the background vector (or tensor) that implies the LoSy violation is in fact part of a SuSy multiplet [33], Fig. (1). The sequence is assured by the supersymmetrisation, in the CPT (Charge-Parity-Time reversal symmetry) odd sector, of the Carroll-Field-Jackiw (CFJ) model [34], which emulates a Chern-Simons [35] term and includes a background field that breaks LoSy from the point of view of the so-called (active) particle transformations. The latter consist of transforming the potential $A_\mu$ and the fields while leaving the background unchanged. Corrected by the photino contribution, the CFJ Lagrangian (Class I), Eq. (1), supplements the Maxwell term $-\frac{1}{4}F$, with $F = F_{\mu\nu}F^{\mu\nu}$, by a Chern-Simons-like coupling of the form $\epsilon^{\mu\nu\rho\sigma}V_\mu A_\nu F_{\rho\sigma}$, Eq. (2). The term in Eq. (2) couples the photon to an external constant four-vector and violates parity even though gauge symmetry is respected [34].
If the CFJ model is supersymmetrised [36], the vector $V_\mu$ is a space-like constant, given by the gradient of the SuSy-breaking scalar background field present in the matter supermultiplet. Denoting $k^2 = k_\mu k^\mu$, the dispersion relation yields

$k^4 + V^2 k^2 - (V \cdot k)^2 = 0$. (3)

If SuSy holds and the photino degrees of freedom are integrated out, we are led to the effective photonic action, i.e., the effect of the photino on the photon propagation. The Lagrangian (1) is corrected by terms (Class II, Eq. (4)) involving a scalar $H$ and a tensor $M^{\mu\nu} = \hat{M}^{\mu\nu} + \frac{1}{4}\eta^{\mu\nu} M$, which depend on the background Fermionic condensate originated by SuSy; $\hat{M}^{\mu\nu}$ is traceless, $M$ is the trace of $M^{\mu\nu}$ and $\eta^{\mu\nu}$ the metric. Recast in terms of these irreducible components (Eq. (5)), the Lagrangian leads to a dispersion relation, Eq. (6), that is just a rescaling of Eq. (3), as we have integrated out the photino sector. The background parameters are very small, being suppressed exponentially at the Planck scale; they render the denominator in Eq. (6) close to unity, implying similar numerical outcomes for the two dispersions of Classes I and II.

The even sector [33] assumes that the Bosonic background responsible for the LoSy violation is a background tensor $t^{\mu\nu}$. For the photon sector, if unaffected by the photino contribution, the Lagrangian is that of Eq. (7) (Class III), with the dispersion relation of Eq. (8) [37]. Integrating out the photino [33], we turn to the Lagrangian of Class IV, Eq. (9), where $a$ is a dimensionless coefficient and $b$ a parameter of dimension mass$^{-2}$ (herein, $c = 1$, unless otherwise stated). For the dispersion relation, we write the Euler-Lagrange equations, pass to Fourier space and set to zero the determinant of the matrix that multiplies the Fourier-transformed potential. However, given the complexity of the matrix in this case and the smallness of the tensor $t^{\mu\nu}$, we develop the determinant in a series truncated at first order and get [37]

$b\,t\,k^4 - k^2 + 3a + b\,k^2\, t_{\alpha\beta} k^\alpha k^\beta = 0$, (10)

where $t = t^\mu{}_\mu$.

For determining the group velocity, we first consider $V_0 = 0$ for Class I [38,39] and obtain Eq. (11). In [39], the authors do not exploit the consequences of the dispersion relation and do not consider a SuSy scenario. Dealing with Eq. (11), we have neglected the negative roots; it turns out that the two positive roots determine identical group velocities $d\omega/dk$ up to second order in $V$. For $\theta$, the angle between the background vector $\vec{V}$ and $\vec{k}$, we get Eq. (12) when $\theta \neq \pi/2$. Instead, for $\theta = \pi/2$, one of the two solutions coincides with the Maxwellian value, while the other is dispersive, Eq. (13).

For $V_0 \neq 0$, we suppose that the light propagates along the $z$ axis ($k_1 = k_2 = 0$), which for convenience is along the line of sight of the source; we then obtain Eq. (14). We now set $V_3 = 0$, that is, the light propagates orthogonally to the background vector. Further, for $V$ space-like and $4 V_0^2 k_3^2 / |\vec{V}|^4 \ll 1$, we get two group velocities, one of which is dispersive, Eq. (15). The value of $\alpha$ appearing there is not Lorentz-Poincaré invariant. Superluminal behaviour is avoided by assuming $V_0 = 0$ for both solutions.

If dealing only with a null $V_0$ and with dispersive group velocities, for a source at distance $\ell$, the time delay between two photons at different frequencies, A and B, is given by (in SI units)

$\Delta t = x\, \frac{\ell\, c^3}{2 h^2}\, |\vec{V}|^2 \left( \frac{1}{\nu_A^2} - \frac{1}{\nu_B^2} \right)$, (16)

where $|\vec{V}|$ is expressed as a mass and $x$ takes the values $(2 + \cos^2\theta)/4$ for Eq. (12), and 1 for Eqs. (13, 15). The delays, Eq. (16), are plotted in Fig. (2). Comparing with the de Broglie-Proca (dBP) delay

$\Delta t_{\rm dBP} = \frac{\ell\, c^3 m_\gamma^2}{2 h^2} \left( \frac{1}{\nu_A^2} - \frac{1}{\nu_B^2} \right)$, (17)

we conclude that the background vector induces an effective mass for the photon of value

$m_\gamma = \sqrt{x}\, |\vec{V}|$. (18)

Equation (18) is gauge invariant, conversely to the potential-dependent dBP mass.
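For a sense of the magnitudes involved, the following minimal Python sketch evaluates the standard dBP-type delay of Eq. (17), which Eq. (16) reproduces for $x$ of order unity; the source distance and frequencies are illustrative choices, not values taken from this letter:

```python
# Dispersive delay between two radio frequencies for a dBP-type massive
# photon, v_g ~ c(1 - m^2 c^4 / (2 hbar^2 w^2)); distance and frequencies
# below are illustrative choices only.
eV  = 1.602e-19            # J
c   = 2.998e8              # m/s
h   = 6.626e-34            # J s
Mpc = 3.086e22             # m

m_c2 = 1e-19 * eV          # photon mass-energy at the 1e-19 eV upper limit
ell  = 100 * Mpc           # hypothetical source distance
nu_A, nu_B = 0.1e6, 100e6  # 0.1 and 100 MHz, the window cited below

delta_t = (ell / (2 * c)) * (m_c2 / h)**2 * (1 / nu_A**2 - 1 / nu_B**2)
print(f"delay ~ {delta_t:.1e} s")  # about 3e-4 s for these choices
```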
It appears as the pole of the transverse component of the photon propagator [39]. Class II, being just a rescaling of Class I, implies identical solutions, differing by a numerical factor only. The group velocities of Classes III and IV show no sign of dispersion; they are slightly smaller than c, as for light travelling through matter, but suffer from anisotropy to a larger degree than in Classes I and II. Indeed, the isotropy is lost due to the tensorial nature of the LoSy- and SuSy-breaking perturbation. The feebleness of the corrections is due to the coefficient T being proportional to powers of the tensor t^µν components, of 10^−19 eV magnitude [37]:

v_g^(III,IV) = 1 − T (t₁ sin²θ cos²ϕ + t₂ sin²θ sin²ϕ + t₃ cos²θ),

where θ and ϕ are the azimuthal and planar angles of k with respect to the axes, respectively.

Having seen a typical dBP massive-photon behaviour in the group velocities of the odd sector, we now look for a mass term in the Lagrangian itself. Since the φ field appears only through its gradient, in the absence of φ time derivatives, and thereby of dynamics, ∇φ acts as an auxiliary field and can be integrated out of the Lagrangian. The Euler-Lagrange equation for χ is disregarded since χ = 0. A term proportional to (V × A)² then appears, thereby showing a massive photon term like in the de Broglie-Proca Lagrangian.

The quest for a photon with non-vanishing mass is definitely not new. The first attempts can be traced back to de Broglie, who conceived an upper limit of 10^−53 kg and achieved a comprehensive formulation of the photon [42], also thanks to the reinterpretation of the work of his doctoral student Proca. To the Lagrangian of Maxwell's electromagnetism, they added a gauge-breaking term proportional to the square of the photon mass. A laboratory Coulomb's-law test determined the mass upper limit of 2 × 10^−50 kg [43]. In the solar wind, Ryutov found 10^−52 kg at 1 AU [44,45], and 1.5 × 10^−54 kg at 40 AU [45]. These limits were accepted by the Particle Data Group (PDG) [46], but recently put into question [47,51]. The lowest value for any mass is dictated by Heisenberg's principle, m ≥ ħ/(Δt c²), and gives 1.3 × 10^−69 kg, where Δt is the supposed age of the Universe.

In this letter, we have focused on SuSy and LoSy breaking and derived the ensuing dispersion relations and group velocities for four types of Lagrangians. All group velocities show a non-Maxwellian behaviour, in the angular dependence and through sub- or superluminal speeds. Superluminal behaviour is exclusive to the odd CPT sector and may occur only if the time component of the perturbing vector is non-null. Further, in the odd CPT sector, the mass shows a dispersion, proportional to 1/ω², as in the dBP formalism. The difference lies in the gauge invariance of the CFJ photon, the mass of which is proportional to |V|. The delays are thus more important at lower frequencies, and the opening of the 0.1-100 MHz window would be of importance [41]. Elsewhere, we have analysed the polarisation and evinced the transversal and longitudinal (massive) modes [37]. From the rotation of the plane of polarisation of light from distant galaxies, or from the Cosmic Microwave Background (CMB), it has been assessed that |V_µ| < 10^−34 eV [12,34,48]. This result is comparable to the Heisenberg mass limit value at the age of the universe.
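As a quick numerical illustration of the anisotropy (and not of the paper's fitted values), the sketch below evaluates the Class III/IV group velocity written above for illustrative T and t_i; all numbers are placeholders.

```python
# Sketch: angular dependence of the Class III/IV group velocity (c = 1).
# T and the background-tensor components t_i are illustrative placeholders.
import numpy as np

def v_g(theta, phi, T=1e-19, t=(1.0, 0.5, 0.25)):
    t1, t2, t3 = t
    return 1.0 - T * (t1 * np.sin(theta)**2 * np.cos(phi)**2
                      + t2 * np.sin(theta)**2 * np.sin(phi)**2
                      + t3 * np.cos(theta)**2)

# Deviation 1 - v_g along the three coordinate axes: anisotropic, non-dispersive
for th, ph in [(np.pi/2, 0.0), (np.pi/2, np.pi/2), (0.0, 0.0)]:
    print(1.0 - v_g(th, ph))   # prints T*t1, T*t2, T*t3
```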
A less stringent, but interesting, limit of 10^−19 eV [40] has been set through laboratory-based experiments involving electric dipole moments of charged leptons, or the inter-particle potential between Fermions and the associated corrections to the spectrum of the Hydrogen atom. These latter estimates imply, via Eq. (18), a mass upper limit of 10^−55 kg. The detection of the CFJ massive photon can be pursued by other means, e.g., through analysis of Ampère's law in the solar wind [47]. Incidentally, the odd and even CPT sectors are experimentally separable [12]. What is the role of a massive photon for SMEs? String theory has hinted at massive gravitons and photons [5,6], while Proca electrodynamics was investigated in the context of LoSy violation, but outside a SuSy scenario [20]. However, if LoSy breaking takes place in a supersymmetric scenario, the photon mass may be naturally generated from SuSy-breaking condensates [33,36]. As a final comment, we point out that the emergence of a massive photon is pertinent also to other SME formulations. LB and ADAMS acknowledge CBPF for hospitality, while LRdSF and JAHN are grateful to CNPq-Brasil for financial support.
2,751
2016-07-29T00:00:00.000
[ "Physics" ]
Oscillons in hyperbolic models D. Bazeia, Adalto R. Gomes, K. Z. Nobrega, Fabiano C. Simas, 1 Departamento de Física, Universidade Federal da Paraíba, 58051-970, João Pessoa, PB, Brazil 2 Departamento de Física, Universidade Federal do Maranhão (UFMA), Campus Universitário do Bacanga, 65085-580, São Luís, Maranhão, Brazil 3 Departamento de Eletro-Eletrônica, Instituto Federal de Educação, Ciência e Tecnologia do Maranhão (IFMA), Campus Monte Castelo, 65030-005, São Luís, Maranhão, Brazil 4 Centro de Ciências Agrárias e Ambientais-CCAA, Universidade Federal do Maranhão (UFMA), 65500-000, Chapadinha, Maranhão, Brazil

Abstract In this work we examine kink-antikink collisions in two distinct hyperbolic models. The models depend on a deformation parameter, which controls two main characteristics of the potential with two degenerate minima: the height of the barrier and the values of the minima. In particular, the rest mass of the kinks decreases monotonically as the deformation parameter increases, and we identify the appearance of a gradual suppression of two-bounce windows in the kink scattering and the production of long-lived oscillons. The two effects are reported in connection with the presence of more than one vibrational state in the stability potential.

INTRODUCTION Localized structures are important in nonlinear physics. In low- and high-energy physics, localized structures have been studied in several different contexts [1-3]. In high-energy physics, in particular, nontrivial localized structures appear as kinks, vortices, and monopoles in (1,1), (2,1), and (3,1) spacetime dimensions, respectively [1]. In the simplest situation, kinks and antikinks appear in scalar field theories described by a single real scalar field. In nonintegrable scalar field theories like the φ⁴ model [3], the existence of kinks and antikinks motivates the study of their scattering, which may sometimes lead to surprisingly rich consequences. For instance, when the collision is analyzed as a function of the initial velocity of approach of the two structures, a complicated structure appears [4], usually connected with the deformation of the field profile and the emission of radiation. For larger initial velocities, a simple inelastic scattering occurs and the kink-antikink pair retreats from each other. In the richer case of sufficiently small initial velocities, the kink and antikink capture one another, forming a trapped bion state that radiates continuously until being completely annihilated. An intriguing aspect of the collision, observed in particular in the well-known φ⁴ model [5-7], occurs for some windows of intermediate velocities, named two-bounce windows, where the scalar field at the center of mass bounces twice before the pair recedes to infinity. These windows appear in sequence with decreasing thickness, accumulating at the border of the one-bounce region. The same effect was also verified for higher levels of bounce windows, leading to a fractal structure [6]. The two-bounce windows were interpreted in Ref. [5] as related to the exchange of energy between the translational and vibrational modes that are present in the model. The φ⁶ model is an exception to this mechanism, since the resonant scattering appears only if one considers the effect of collective modes produced by the antikink-kink pair [8]. Another counterexample to the mechanism described in Ref. [5] appeared in [9], where there are no two-bounce windows even in the presence of vibrational modes.
Moreover, we want to add that the richness of the scattering may also be connected with the internal structure of the stability potential that appears in the model, which is related to the potential that defines the model and gives rise to the kinks and antikinks. The point is that the internal modes of the stability potential can provide new windows or resonances that can modify the profile of the collision, leading to novel possibilities of current interest. Kinks were also proposed in buckled graphene nanoribbons [27,28], and they also appear as topological excitations in trans-polyacetylene. For instance, the Su-Schrieffer-Heeger lattice model [29] also predicted that kinks can propagate as independent entities along the polymer. Kinks also find interesting applications in ferroelectric materials. As one knows, polynomial and modified sine-Gordon models suffer from a drastic weakness when applied to rigid ferroelectric materials: in these models the barrier height of the double-well potential cannot be varied as a function of the shape parameter. To address this question, a hyperbolic extension [30,31] of the φ⁴ model was proposed to describe the structural transitions observed in specific materials [32]. These models belong to the class of deformed double-well potentials V(φ, µ), where φ corresponds to the order parameter and µ is a deformation parameter. In general, the functions of µ are introduced at will in the potential, in a phenomenological construction. The Calogero model [33] describes N identical non-relativistic particles in one dimension and has exact soliton solutions in the continuum limit [34]. A hyperbolic extension of the Calogero model was shown to be integrable even in strong confinement, presenting multisoliton solutions [35]. N = 2 and N = 4 supersymmetric generalizations of the hyperbolic Calogero model were presented in Ref. [36]. Hyperbolic models have also been applied to attain exact solutions for hairy black holes [37]. Moreover, a generalised inverse cosine-hyperbolic potential was considered to describe quintessential inflation [38]. Tachyon matter cosmology with hyperbolic potentials has also been considered in Ref. [39]. Motivated by the above investigations, in this work we consider the scattering of kinks in models defined in (1,1) spacetime dimensions, in the presence of hyperbolic potentials. In the next section we consider two different models, which are inspired by the φ⁴ model and Refs. [30,31], but we concentrate on collecting results for the kink-antikink collisions and their departure from the standard results obtained with the φ⁴ model. An interesting result concerns the production of oscillons, the long-lived and low-amplitude oscillations of the scalar field around the trivial configuration. We finish the work in Sect. III, with some comments and conclusions.

II. HYPERBOLIC MODELS We start with the standard action for a single real scalar field in (1,1) dimensions,

S = ∫ dt dx [ (1/2) ∂_µφ ∂^µφ − V(φ) ],

where the potential has two minima and a local maximum at the origin. Then we have one topological sector connecting the adjacent minima. The equation of motion is given by

∂²φ/∂t² − ∂²φ/∂x² + dV/dφ = 0.

The static kink φ_K(x) and antikink φ_K̄(x) = φ_K(−x) are solutions that connect the two sectors of the potential. Perturbing the scalar field linearly around one kink solution as φ(x, t) = φ_K(x) + η(x) cos(ωt) leads to a Schrödinger-like equation, −η″ + V_sch(x) η = ω² η, with V_sch(x) = d²V/dφ², evaluated at the kink, being the stability potential. The analysis of the Schrödinger-like or stability potential is useful for understanding some aspects of the scattering structure.
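To make the role of the stability potential concrete, here is a minimal numerical sketch. It uses the φ⁴ kink, to which Model 1 below reduces for small µ, since the hyperbolic potentials are treated in exactly the same way; the grid and the dense diagonalization are illustrative choices, not the authors' method.

```python
# Sketch: stability potential and its bound modes for the phi^4 kink.
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
phi_K = np.tanh(x)              # phi^4 kink of V(phi) = (1 - phi^2)^2 / 2
V_sch = 6.0 * phi_K**2 - 2.0    # d^2V/dphi^2 at phi_K: the well 4 - 6 sech^2(x)

# Discretize -eta'' + V_sch eta = w^2 eta and diagonalize
n = len(x)
H = (np.diag(2.0 / dx**2 + V_sch)
     - np.diag(np.full(n - 1, 1.0 / dx**2), 1)
     - np.diag(np.full(n - 1, 1.0 / dx**2), -1))
print(np.linalg.eigvalsh(H)[:3])  # ~0 (translational), ~3 (vibrational), continuum >= 4
```

The translational zero mode and the single vibrational mode of φ⁴ come out directly; for the hyperbolic models, increasing µ deepens or widens the well and brings in further vibrational states, which is the feature the scattering analysis tracks.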
For the numerical solutions of kink-antikink scattering we used a 4th-order finite-difference method on a grid of N = 4096 nodes with a spatial step of 0.05. We fixed x = ±x₀ with x₀ = 12 for the initial symmetric positions of the pair and set the grid boundaries at x = ±x_max with x_max = 400. For the time dependence we used a 6th-order symplectic integrator method, with a time step of 0.02. For solving the equation of motion for kink-antikink scattering we used initial conditions given by the usual superposition of a boosted kink centered at −x₀ and a boosted antikink centered at +x₀, minus the vacuum value, where φ_K(x + x₀, v, t) denotes a boosted kink solution and φ_v > 0 is one vacuum of the theory (a minimum of V(φ)).

A. Model 1 We consider the potential V₁(φ) introduced in [30], where µ is the deformability parameter. Fig. 1a shows that the potential has two minima at φ = ±(1/µ) arcsinh(µ). In this form, the minima of the potential are variable, but the height of its barrier is fixed. In the limit of small values of µ, the model approaches the usual polynomial φ⁴ theory with minima at φ = ±1. The static kink solution and the minima ±φ_v are given in [30]. The corresponding energy (rest mass) follows by integrating the energy density of the static solution. In this model, the potential can be written in the standard first-order (BPS) form, and so it admits first-order equations. The energy density in Fig. 1c is a localized function around x = 0; from the plot one sees that its maximum is fixed, whereas the thickness decreases as µ increases. This means that as µ increases, the solution becomes more and more localized. In Fig. 1d we see that the kink rest energy decreases with increasing µ. The Schrödinger-like or stability potential can be written in terms of q(x) = xµ² + 1/2; this potential is presented in the left panel of Fig. 2. Turning to the collision results, in Fig. 5a we see, for µ = 3, the production of two oscillons, whereas for µ = 5 we have the production of four oscillons (Fig. 5b). We note that larger values of the parameter µ favor the occurrence of more definite oscillon states, having higher harmonicity and correspondingly longer lifetimes. The absence of oscillons for small values of µ conforms with the fact that, for small µ, the above model becomes the φ⁴ model, and that up to now there is no evidence for the presence of oscillons in the scattering of kinks in the φ⁴ model.

B. Model 2 Fig. 6a shows that the potential has two minima at φ = ±1 and one local maximum at the origin, and that the barrier height decreases with µ. Compare with Fig. 1a for the potential V₁, where the barrier height is constant. The static kink solution is given by [30] φ_K(x) = (1/arcsinh(µ)) arctanh(µ …). This model also supports first-order equations; the kink and antikink obey them, so they are also linearly stable. Fig. 6b depicts some plots of the scalar field profile φ(x) for several values of µ. Note from the figure that the minima are independent of µ (φ = ±1). Fig. 6c shows that the energy density is a localized function around x = 0. We note that its height decreases with µ (compare with Fig. 1c, where the height of the energy density of model V₁ is constant). Also, we see that its thickness grows with µ (compare again with Fig. 1c for model V₁, where the thickness decreases with µ). In Fig. 6d we see that the kink rest energy for this second model also decreases with increasing µ; however, the decrease occurs at a lower rate in comparison to the V₁ model (compare with Fig. 1d). The stability potential for the kink in this model is obtained in the same way, from d²V/dφ² evaluated at the kink.
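As a runnable illustration of the scheme just described (not the authors' code), the sketch below evolves a kink-antikink pair with centered finite differences in space and a first-order symplectic (semi-implicit Euler) step in time, simpler than the 4th/6th-order methods used in the paper; the φ⁴ model stands in for the hyperbolic potentials, whose closed forms are not reproduced above, and the box is reduced for speed.

```python
# Sketch: kink-antikink collision in the phi^4 stand-in model.
import numpy as np

dx, dt, x0, v = 0.05, 0.02, 12.0, 0.2
x = np.arange(-100.0, 100.0 + dx, dx)   # reduced box; the paper uses +/-400
g = 1.0 / np.sqrt(1.0 - v**2)

# Initial condition: boosted kink at -x0 plus boosted antikink at +x0, minus vacuum
phi = np.tanh(g * (x + x0)) - np.tanh(g * (x - x0)) - 1.0
pi = -g * v / np.cosh(g * (x + x0))**2 - g * v / np.cosh(g * (x - x0))**2

def lap(f):
    out = np.zeros_like(f)
    out[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    return out

for _ in range(int(100.0 / dt)):        # evolve to t = 100
    pi += dt * (lap(phi) - (2.0 * phi**3 - 2.0 * phi))   # field eq.: phi_tt = phi_xx - V'(phi)
    phi += dt * pi

print(phi[len(x) // 2])                 # field at the collision center after the run
```

Tracking phi at the center as a function of the initial velocity v is how the one-bounce, bion, and two-bounce regimes quoted in the conclusions are mapped out.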
III. CONCLUSIONS We have analyzed two models of potentials with two degenerate minima, with interest in two characteristics: the difference between the minima and the height of the barrier of the potential. For the models V₁ and V₂, one of these characteristics is fixed, whereas the other varies monotonically with the parameter µ. We found that the dynamics of the scattering at the center of mass is roughly the same, with the expected one-bounce, bion, and two-bounce states. The increase of µ is accompanied by the generation of more vibrational states. The suppression of two-bounce windows is due to a kind of destructive interference between the two vibrational modes that forbids the realisation of the resonance mechanism of energy transfer from the translational mode to the vibrational mode. This effect of suppression was already described in other models [9] and is also identified here.

We have also observed the production of oscillons, that is, long-lived states that oscillate around one trivial minimum of the potential. As we can see in Figs. 5 and 8, after some collisions, long-lived, quasi-harmonic, and low-amplitude oscillating structures are formed and escape to infinity. These states can occur for v < v_c, and their appearance is extremely sensitive to the initial velocity. Despite this, we have identified some determinant factors for the production of oscillons. First of all, the production of oscillons is favored in the following situations: i) for the model V₁, with fixed barrier, when the difference between the two minima is smaller; ii) for the model V₂, with fixed minima, when the barrier is smaller. For both the V₁ and V₂ models, the unifying factor that favors the production of oscillons is the lower energy or rest mass E of the kink; indeed, in both models, this quantity decreases monotonically with µ. The second aspect to be noted is that we have not observed the production of oscillons for low values of µ, where the models have only one vibrational state. When the parameter µ grows, the increase in the number of vibrational states results in a greater complexity of the energy distribution of the initial translational modes of the kink-antikink pair, increasing the possibility of production of oscillons. Also, comparing the results from both potentials, we see that oscillons with larger amplitudes, but more deformed, are favored in the model V₂. Since harmonicity and propagation without distortion are desirable properties, this signals that model V₁, characterized by a fixed-height barrier, is more effective for the production of these long-lived states.
3,224.2
2019-11-08T00:00:00.000
[ "Physics" ]
Quasi-Orthogonal Time Division Multiplexing and Its Performances in Rayleigh Fading Channels This paper proposes an efficient transmission scheme, Quasi-Orthogonal Time Division Multiplexing (QOTDM), which employs the shift-orthogonality property of the pulse function with raised-cosine spectral shape; the signal waveforms are quasi-orthogonal in the time domain. Compared to orthogonal frequency division multiplexing (OFDM), QOTDM is less sensitive to carrier frequency offset and power amplifier nonlinearities, while keeping a spectral efficiency similar to that of OFDM owing to its single-carrier characteristics. QOTDM is a suitable candidate for downlink transmission, such as in satellite communications. An upper bound on the sample error probability (SER) is derived to evaluate the performance of QOTDM. Comparisons of QOTDM and OFDM in Rayleigh fading channels show that the proposed QOTDM system outperforms the OFDM system in terms of bit error rate (BER) in high Eb/No regions.

Introduction Orthogonal frequency-division multiplexing (OFDM) is a promising technique for high-speed data transmission in mobile communications [1,2], due to its favorable properties such as high spectral efficiency, robustness to channel fading, and capability of handling multipath fading. However, OFDM has several disadvantages. For example, OFDM systems are very sensitive to carrier-frequency offsets (CFO) [3], since they can only tolerate offsets which are a fraction of the spacing between the subcarriers; that is, highly accurate synchronization of the carrier frequency at the receiver is required, or there will be a loss of orthogonality between the subcarriers. Moreover, in typical cases, the transmitted signals exhibit a high peak-to-average power ratio (PAPR) [4], which means that an amplifier must either have a large linear operating range or it will introduce nonlinearities into the transmitted signals. Because of these inherent disadvantages of OFDM, on the one hand, many researchers have tried to present effective schemes to overcome both the CFO and PAPR issues of OFDM; on the other hand, single-carrier signal processing schemes have been investigated. In this paper, a quasi-orthogonal time division multiplexing (QOTDM) system is proposed, which is a single-carrier modulation technique with high spectral efficiency that overcomes the CFO and PAPR drawbacks of OFDM. In Section 2, the basic principles of the QOTDM system are investigated. In Section 3, an upper bound on the sample error probability in QOTDM is derived. The performance comparisons of QOTDM and OFDM in Rayleigh fading channels are presented in Section 4, and conclusions are given in Section 5.

Concept of QOTDM. It is well known that the function appearing in (1) has the shift-orthogonality property; that is, for any nonzero integer k, Eq. (1) holds. OFDM achieves high spectral efficiency by exploiting the shift-orthogonality property shown in (1). In OFDM, this is achieved by making all the carriers orthogonal to each other, suppressing interference between the closely spaced carriers. Making the carriers for each channel orthogonal to one another allows them to be spaced very closely. Based on the basic principle of time-frequency duality for the Fourier transform, the function in (2) also has the shift-orthogonality property. That is, one can obtain an orthogonal time-division multiplexing (OTDM) system by exchanging the time variable t and the frequency variable f in (1).
Thus, (2) has the shift-orthogonality property in the time domain, instead of the frequency domain as in OFDM. In an OTDM system, the waveform of each transmitted signal is composed of a number of overlapped Sinc functions with rectangular spectral shape, the number being determined by the number of parallel substreams. However, the Sinc function has an infinite nonzero range and is impractical for implementation. Fortunately, a low-pass filter with raised-cosine spectral shape has the impulse response given in (3), where α is the roll-off factor and T denotes the one-sided time duration of the main lobe of the shaping pulse. c(t) is similar to a Sinc function and has approximate orthogonality; that is, (4) holds for all integers k except k = 0. One calls this property quasi-orthogonality and d(k) a quasi-orthogonal function. If this quasi-orthogonal function family is employed in OTDM, quasi-orthogonal time-division multiplexing (QOTDM) is obtained.

QOTDM System Model. QOTDM is based on sample-interweaving rather than bit-interweaving as in conventional time-division multiplexing. On the one hand, each of the input sequences can be a sample sequence resulting from sampling any continuous signal, so QOTDM can be applied to continuous-wave time division multiplexing [5]. On the other hand, the multiple sample sequences (say, N sample sequences) can also be obtained by sampling the complex envelope of a digitally modulated data stream; therefore, QOTDM can also be applied to data transmission similarly to OFDM. That is, a high-bit-rate stream is split up into N parallel low-bit-rate substreams; each substream is modulated and sampled into one sample sequence, and the N sample sequences are then multiplexed into one sequence by means of sample interweaving. Finally, the multiplexed sample sequence is transformed into a continuous signal by pulse amplitude modulation (PAM) or quadrature amplitude modulation (QAM) for transmission over a continuous channel. QOTDM can thus transmit multiple sample sequences (X_n(m), n = 1, 2, ..., N, m = −∞, ..., −1, 0, 1, ..., ∞) in time-division-multiplexing mode via a continuous channel. At first, the N sample sequences are multiplexed into one by sample-interweaving, and then transformed into a continuous signal by means of PAM or QAM. As long as the overall impulse response of the channel is equivalent to the quasi-orthogonal function, and the sampling of the received signal is fully synchronous with the transmitted signal, the N sample sequences can be completely separated from each other and exactly recovered at the receiver, up to an amplification factor. Thus multiple continuous signals can be transmitted in QOTDM at higher bandwidth efficiency, with simple implementation, by acting on the discrete samples of these signals. The equivalent baseband QOTDM system is shown in Figure 1. At the transmitter, the N input sample sequences and N_s synchronous sequences are interwoven into one sequence, and the sequence is then modulated by PAM or QAM. The transmitted signal s(t) can be represented as in (5), where T_s and T are the sample interval and the PAM/QAM symbol duration, respectively, d_{i,n} denotes the nth sample of the ith sample sequence, and p(t) is the sample shaping waveform. The QOTDM procedure is as follows. (a) Suppose there are N signals to be transmitted, and the bandwidth of each signal is not greater than B Hz.
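The quasi-orthogonality claim is easy to check numerically. The sketch below uses the textbook raised-cosine impulse response c(t) = sinc(t/T)·cos(παt/T)/(1 − (2αt/T)²) as a stand-in for (3) (the exact normalization in (3) may differ), and computes the inner products with shifted copies; whether the paper's d(k) is exactly this inner product is an assumption.

```python
# Sketch: quasi-orthogonality of the raised-cosine pulse under shifts by T.
import numpy as np

T, alpha = 1.0, 0.35
t = np.linspace(-40.0 * T, 40.0 * T, 80001)
dt = t[1] - t[0]

def c_pulse(t):
    x = t / T
    num = np.sinc(x) * np.cos(np.pi * alpha * x)
    den = 1.0 - (2.0 * alpha * x) ** 2
    lim = (np.pi / 4.0) * np.sinc(1.0 / (2.0 * alpha))  # 0/0 limit at x = +/-1/(2*alpha)
    with np.errstate(divide='ignore', invalid='ignore'):
        out = num / den
    return np.where(np.abs(den) < 1e-12, lim, out)

c0 = c_pulse(t)
for k in range(4):
    ck = c_pulse(t - k * T)
    print(k, (c0 * ck).sum() * dt)  # large for k = 0, small but nonzero otherwise
```

The shifted inner products are small but not exactly zero, which is precisely the "quasi" in quasi-orthogonality (the pulse's zero crossings at multiples of T are exact, but its autocorrelation at those shifts is not).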
The N signals are sampled in turn at a rate of N·F_s to obtain a sample sequence, where F_s is the per-signal sampling rate, B < F_s < 2B. Every N samples (i.e., one from each signal) are grouped together. Each sample in the sequence can be regarded as a symbol. (b) N_s synchronous and training symbols are placed at equal spacing among the N samples/symbols; thus one gets (N + N_s) samples in a group, called one frame. (c) For each frame, M zero samples (here, M = 1) are inserted between every two adjacent samples to implement upsampling, as shown in Figure 2. (d) After the insertion of zero samples, the frame passes through a pulse-shaping filter for transmission. The synchronous sequences are specially designed in [5]; they can be used not only as a synchronization signal but also as a training sequence for channel equalization. Each QOTDM frame consists of (N + N_s) samples, and the multiplexed sample sequence is converted into a continuous signal by PAM or QAM. At the receiver, an adaptive channel equalizer is employed to make the impulse response of the overall (transmitter, channel, and receiver) equivalent channel satisfy the first Nyquist criterion. The channel equalizer is a finite impulse response (FIR) adaptive filter of order (N + N_s). If the received signal is sampled at the same rate and at the accurate locations as at the transmitter, and then demultiplexed with the help of the synchronous sequence, the input parallel signals can be recovered.

Bandwidth Comparison of QOTDM and OFDM. For a QOTDM system with N subchannels, let the bit rate of the input bit stream be R_b = 1/T_b, split up into N bit streams of bit rate R_s = 1/(N·T_b) each, and then modulated into N sample sequences, respectively, using BPSK modulation with a raised-cosine shaping FIR filter of roll-off factor α. The bandwidth of every signal should then be (1 + α)·R_s. After the N sequences are multiplexed and the N_s synchronous sequences are added, the total bandwidth of QOTDM is given by (6), and the bandwidth efficiency of QOTDM with BPSK, in bps/Hz, is then (7). For an OFDM system of N substreams with BPSK, the bandwidth efficiency is (8), where N_cp denotes the length of the cyclic prefix of OFDM. Generally speaking, the bandwidth efficiency of QOTDM is only a little lower than that of OFDM, due to the roll-off factor α, if N_s equals N_cp. However, QOTDM appears to be more robust against multipath fading [6]. Furthermore, QOTDM does not have a high peak-to-average power ratio as OFDM does, because only one carrier is used in QOTDM.

Applications of QOTDM. The concept and implementation of QOTDM are rather straightforward, and it finds applications in many scenarios, such as the downlink in satellite communication systems and the wireless transmission of multichannel electronic waves of the human body. That is, if multiple signals are amplified and transmitted simultaneously, QOTDM can significantly reduce the complexity and cost of the equipment.

An Upper Bound on the Sample Error Probability for QOTDM Let us consider the situation where the samples are quantized, each sample may take one of D values, and D is assumed to be even. The allowed values of D are given by (9). The samples of a frame can be considered independent, since they are sampled from N parallel signals. Each sample takes one of the D values at random. If the intersample interference I has K nonzero terms, then I has a discrete probability distribution consisting of (D − 1)K allowed values with equal probability. If I_j is an allowed value of I, then −I_j is also an allowed value.
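To make the transmitter steps (a)-(d) concrete, here is a minimal sketch of the frame construction; the sample values, the synchronous-symbol placement, and the shaping filter are illustrative placeholders rather than the designs of [5].

```python
# Sketch: build one QOTDM frame - interweave N sample streams, insert Ns
# synchronous symbols, upsample by zero insertion (M = 1), and pulse-shape.
import numpy as np

N, Ns, M = 30, 2, 1
rng = np.random.default_rng(1)

samples = rng.choice([-1.0, 1.0], size=N)   # one BPSK sample from each of N signals
sync = np.ones(Ns)                          # placeholder synchronous/training symbols

# (a)-(b): group the N samples and spread the Ns sync symbols evenly among them
idx = np.linspace(0, N, Ns, endpoint=False).astype(int)
frame = np.insert(samples, idx, sync)       # length N + Ns

# (c): upsampling - M zeros between adjacent samples
up = np.zeros(len(frame) * (M + 1))
up[::M + 1] = frame

# (d): pulse-shaping filter (truncated raised cosine, 2 samples per symbol)
t = np.arange(-8, 9)
h = np.sinc(t / 2) * np.cos(0.35 * np.pi * t / 2) / (1 - (0.35 * t) ** 2)
tx = np.convolve(up, h)
print(len(frame), len(tx))
```

The receiver side would undo these steps after the adaptive FIR equalizer: sample at the symbol instants, drop the sync positions, and de-interleave back into the N streams.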
The probability distribution of I is, therefore, symmetric about zero. At the receiver, the received signal is sampled periodically at the times mT_s + τ (m = 0, ±1, ±2, ...). τ lies in [0, T_s] and should be chosen in a manner which optimizes the system performance. The sampled signal takes the form of (10). The first term in (10) is the desired signal, while the second and third terms are the intersample interference and the Gaussian noise, respectively, analogous to conventional intersymbol interference; n is zero-mean with variance σ_n². For the sake of simplicity, we rewrite I + n as z and |p(τ)| as p_τ. It is easily seen that the probability distribution of z is symmetric about zero, since n has zero mean. Thus, when a_i = −D + 1, an error results if z > p_τ; if a_i = D − 1, the condition for error is z < −p_τ; in all other cases, either condition results in an error. The overall error probability is, therefore, given by (11). Since the interference terms are finite and each quantized level is no larger than the largest quantized level, it can be considered that the total interference I lies in [−A, +A], where A is some sufficiently large number; let f(x) denote the probability density function (pdf) of I. Since I and n are independent, the pdf of z is the convolution given in (12), where f(n) = (2πσ_n²)^(−1/2) exp(−n²/2σ_n²). Considering that I lies in [−A, +A], (12) can be rewritten as (13). By using (13), we can rewrite (11) as (14). Applying the modified Chernoff bound [7], we get (15). Substituting (15) into (14) yields (16). By setting the derivative of the right-hand side of (16) with respect to s to zero, we can find the value of s which gives the upper bound of P_e, (17), where Δ = (p_τ − x)² + 4σ_n². The main advantage of (16) is that it can be efficiently applied to the evaluation of the sample error performance of QOTDM in the presence of additive white Gaussian noise. In addition, (17) is helpful for obtaining the upper bound in a QOTDM system.

Performance Comparison of QOTDM and OFDM in Rayleigh Fading Channels Computer simulations were performed to compare the performance of the QOTDM system with that of the OFDM system. In the simulation, N = 30 parallel substreams and 2 synchronous-sequence substreams were considered in the QOTDM system, with QAM modulation. Raised-cosine pulse shaping with a roll-off factor of 0.35 and band-limitation effects were also included. Since 32 parallel substreams (30 data streams and 2 synchronous streams) were considered in the QOTDM system, the corresponding OFDM system was given N = 32 subcarriers to make the comparison as fair as possible. The length of the cyclic prefix (CP) is 8, and QAM modulation is employed. In the comparison, the same data symbols were used in both the QOTDM and OFDM systems. Two Rayleigh fading models, named scenario-1 and scenario-2, are considered: in scenario-1 the channel varies independently over a duration of 40 samples, and in scenario-2 the channel varies over a duration of 10 samples. From Figure 3, one can see that the bit error rate (BER) of OFDM has an error floor in the two scenarios mentioned above, while no such floor appears for QOTDM. It can also be seen that the BER of QOTDM is lower than that of OFDM at high signal-to-noise ratio (SNR). This is mainly because, in fast fading channels, the impulse response varies during the inverse fast Fourier transform (IFFT) integration window, and the fast Fourier transform (FFT) operation is then applied to signals which have been faded differently by the channel.
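Since the bodies of (10)-(17) are not reproduced above, a Monte-Carlo sketch of the same setup is given below: D-ary amplitudes, K interfering samples weighted by placeholder pulse-sidelobe values, plus Gaussian noise. The combination rule P_e = 2(1 − 1/D)·P(z > p_τ) is the standard result for equally likely levels with a symmetric perturbation; whether it coincides exactly with the paper's (11) is an assumption.

```python
# Monte-Carlo sketch of the sample-error setup: interference plus Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
D, K, p_tau, sigma = 4, 6, 1.0, 0.2
p_side = 0.05 * rng.standard_normal(K)     # assumed sidelobe samples of p(t)

levels = np.arange(-(D - 1), D, 2)         # amplitudes -(D-1), ..., -1, 1, ..., D-1
trials = 200_000
a = rng.choice(levels, size=(trials, K))   # K interfering samples per trial
z = (a * p_side).sum(axis=1) + sigma * rng.standard_normal(trials)
P_e = 2 * (1 - 1 / D) * np.mean(z > p_tau) # symmetric-error combination rule
print(P_e)
```

Running this for a grid of sigma values reproduces the qualitative behaviour that the analytical Chernoff-type bound (16) is meant to capture cheaply, without the sampling noise.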
After the FFT operation at the receiver, the original signal (before the IFFT) cannot be recovered properly, owing to the channel variation within one OFDM symbol duration. In QOTDM, by contrast, the samples are interleaved before transmission, and at the receiver an adaptive equalizer is employed to make the impulse response of the overall equivalent channel satisfy the first Nyquist criterion as closely as possible. All of this makes the signal suffer much less from the fading channel, which gives QOTDM its performance advantage over OFDM.

Conclusions We have presented an efficient transmission scheme, Quasi-Orthogonal Time Division Multiplexing (QOTDM), which does not suffer from the drawbacks inherent in OFDM systems, namely high sensitivity to carrier frequency offset (CFO) and high peak-to-average power ratio (PAPR), owing to its single-carrier modulation. Multiple continuous signals can be transmitted in QOTDM at higher bandwidth efficiency, with simple implementation, by acting on the discrete samples of these signals. An upper bound on the SER was also derived to evaluate the performance of QOTDM systems. From computer simulations, it is clear that the proposed QOTDM system outperforms the OFDM system in terms of bit error rate (BER) in the high E_b/N_o regions. In the low E_b/N_o region, the bit error rate of QOTDM is higher than that of OFDM; the reason is that, at low E_b/N_o, less information about the channel can be obtained and the channel equalization is not good enough. However, as E_b/N_o increases, the channel equalization improves, so the QOTDM system can provide much better performance than OFDM.
3,693.2
2012-01-01T00:00:00.000
[ "Engineering", "Physics" ]
The phosphatidylinositol 3-kinase/Akt/mTOR signaling network as a therapeutic target in acute myelogenous leukemia patients. The phosphatidylinositol 3-kinase (PI3K)/Akt/mammalian target of rapamycin (mTOR) signaling axis plays a central role in cell proliferation, growth, and survival under physiological conditions. However, aberrant PI3K/Akt/mTOR signaling has been implicated in many human cancers, including acute myelogenous leukemia (AML). Therefore, the PI3K/Akt/mTOR network is considered a validated target for innovative cancer therapy. The limit of acceptable toxicity for standard polychemotherapy has been reached in AML; novel therapeutic strategies are therefore needed. This review highlights how the PI3K/Akt/mTOR signaling axis is constitutively active in AML patients, where it affects the survival, proliferation, and drug resistance of leukemic cells, including leukemic stem cells. Effective targeting of this pathway with small-molecule kinase inhibitors, employed either alone or in combination with other drugs, could suppress leukemic cell growth and result in less toxic, more efficacious treatment of AML patients. Efforts to exploit pharmacological inhibitors of the PI3K/Akt/mTOR cascade which show efficacy and safety in the clinical setting are now underway.

INTRODUCTION Acute myelogenous leukemia (AML) is a highly heterogeneous group of malignant clonal diseases characterized by deregulated proliferation of hematopoietic stem cells and myeloid progenitors. This results in the accumulation, in the bone marrow, of myeloid cells with an impaired differentiation program and resistance to cell death. AML accounts for about 80% of adult leukemias and is a disorder of the elderly, with a median age at diagnosis of 65 years and a growing incidence beyond 65 years of age [1]. Most AML cases respond well to initial polychemotherapy, but disease relapse occurs in the large majority of patients. The standard therapeutic approach for AML patients is high-dose polychemotherapy, consisting of cytarabine and an anthracycline antibiotic such as daunorubicin or idarubicin, or the anthracenedione mitoxantrone [2]. While the results of AML treatment have improved in younger patients, who can tolerate intensified treatment strategies, there have been limited changes in outcome among individuals older than 60 years. Therefore, the prognosis of AML remains severe, with an overall 5-year survival rate around 20%, despite continuous advances in our understanding of AML biology. Furthermore, patients with AML arising out of myelodysplastic syndrome, or who are older than 60 years, have an even worse prognosis (<10% survival at 5 years) [3]. Therefore, there remains a need for innovative, rationally designed, minimally toxic therapies for AML, especially for the elderly [4]. Only one subtype of AML, acute promyelocytic leukemia (APL), displays a much better prognosis, as differentiation therapy with arsenic trioxide or all-trans retinoic acid (ATRA), used alone or in combination with chemotherapeutic drugs, has proven quite successful in APL patients [5]. It is now clear that a hierarchical organization of the hematopoietic system exists in AML, as in normal hematopoiesis.
Indeed, AML is initiated and maintained by a small, self-renewing population of leukemic stem cells (LSCs), which give rise to a progeny of more mature and highly cycling progenitors (colony-forming unit-leukemia, CFU-L). CFU-Ls do not self-renew; however, they are committed to proliferation and limited differentiation. By doing so, they originate a population of blast cells which constitute the majority of leukemic cells in both the bone marrow and peripheral blood of patients. The exact phenotype of LSCs is still debated, but they are contained within the CD34+/CD38−/low population [6]. The majority of LSCs are quiescent and insensitive to traditional chemotherapeutic drugs. This latter feature explains, at least in part, the difficulties in eradicating this cell population by conventional polychemotherapy. Thus, novel therapeutic strategies for AML eradication should also target LSCs [7]. In AML, aberrant activation of several signal transduction pathways strongly enhances the proliferation and survival of both LSCs and CFU-Ls [8,9]. Therefore, these signaling networks are attractive targets for the development of innovative therapeutic strategies in AML [10]. The phosphatidylinositol 3-kinase (PI3K, a family of lipid kinases)/Akt/mammalian target of rapamycin (mTOR) signaling cascade is crucial to many widely divergent physiological processes, which include cell cycle progression, transcription, translation, differentiation, apoptosis, motility, and metabolism [11].

Fig. 1. The PI3K/Akt/mTOR signaling pathway. GPCRs, RTKs, and Ras activate PI3K. PI3K generates PtdIns(3,4,5)P3 from PtdIns(4,5)P2. PtdIns(3,4,5)P3 attracts to the plasma membrane PDK1, which phosphorylates Akt on Thr308. Full Akt activation requires Ser473 phosphorylation, which is effected by mTORC2. Most of the Akt substrates are inactivated by phosphorylation. Active Akt inhibits TSC2 activity through direct phosphorylation. TSC2 is a GAP that functions in association with TSC1 to inactivate the small G protein Rheb. Akt-driven TSC1/TSC2 complex inactivation allows Rheb to accumulate in a GTP-bound state. Rheb-GTP then activates the protein kinase activity of mTORC1. mTORC1 targets p70S6K and 4E-BP1, which are critical for translation. 4E-BP1 phosphorylation by mTORC1 results in the release of eIF4E, while p70S6K phosphorylates the ribosomal S6 protein. The TSC1/2 complex is also required to activate mTORC2. However, other signaling cascades impinge on mTORC1, including GSK3β, the Ras/Raf/MEK/ERK1/2/p90RSK pathway, and the LKB1/AMPK network, which is sensitive to the ADP/ATP ratio. Arrows indicate activating events, whereas perpendicular lines indicate inhibitory events.

However, the PI3K/Akt/mTOR signaling pathway represents one of the major survival pathways that is deregulated in many human cancers and contributes to both cancer pathogenesis and therapy resistance. Over the last few years, it has been reported that constitutive activation of the PI3K/Akt/mTOR signaling network is a common feature of AML patients [12]. Furthermore, pathway activation confers leukemogenic potential to mouse hematopoietic cells [13]. Therefore, this signal transduction cascade may represent a valuable target for innovative therapeutic treatment of AML patients. The aim of this review is to give the reader an updated overview of the relevance of PI3K/Akt/mTOR signaling activation in AML patients and to focus on small molecules which may have an impact on the therapeutic arsenal we have against this disease.
Vacuolar protein sorting 34 (vps34) is the only class III PI3K and exists as a heterodimer bound to the vps15 regulatory subunit (previously referred to as p150 in mammals). Vps34 has been implicated in nutrient signaling, endocytosis, and autophagy [20]. Activating mutations in the gene coding for p110α (PIK3CA) have been found in many human cancer types, including tumors of the colon, brain, ovary, breast, liver, and stomach, and could at least partially explain pathway up-regulation in these neoplasms [21]. Nevertheless, in tumor models (brain, prostate, breast) driven by PTEN (phosphatase and tensin homolog deleted on chromosome 10) deficiency, knock-out of p110β, but not of p110α, was required to inhibit Akt activation [17]. Wild-type p110α is not oncogenic when overexpressed, whereas wild-type p110β, p110γ, and p110δ PI3Ks are oncogenic when ectopically expressed in chicken fibroblasts [22]. Nevertheless, their contribution to oncogenesis is only beginning to emerge [23].

Akt Akt, a 57-kDa serine/threonine protein kinase, is the cellular homolog of the v-akt oncogene. The Akt family comprises three highly conserved isoforms, Akt1/α, Akt2/β, and Akt3/γ, which display a high degree of sequence homology [14]. However, functional differences exist between the Akt isoforms, as Akt2 is involved in insulin-mediated glucose uptake [24] and in the motility/invasion/metastatic potential of cancer cells [25]. Akt contains an NH2-terminal PH domain that interacts with PtdIns(3,4,5)P3. Once Akt is recruited to the plasma membrane, its activation loop is phosphorylated on Thr308 by PDK1, while mTOR complex 2 (mTORC2) phosphorylates Ser473 in the Akt COOH-terminus (Figure 1). Full Akt activation requires both phosphorylation steps. Active Akt migrates to both the cytosol and the nucleus. Nuclear Akt may fulfil important anti-apoptotic roles [26]. Nevertheless, the relative contribution of Akt signaling at the plasma membrane, the cytosol, and the nucleus remains to be elucidated. However, it is intriguing that the promyelocytic leukemia (PML) protein is involved in the dephosphorylation of nuclear Akt, as PML specifically recruits the Akt phosphatase, protein phosphatase 2A (PP2A), as well as phosphorylated Akt, into PML nuclear bodies [27]. These bodies, however, are disrupted by the fusion protein PML-RARα, which is the hallmark of APL [5,28]. This could be one of the reasons for the Akt activation which is detected in APL [29]. Thus, this finding highlights the growing importance of Akt compartmentalization in human cancer pathogenesis and treatment. So far, over 100 Akt substrates have been identified [30]. Of these, about 40 which mediate the pleiotropic Akt functions have been characterized, including Bad, caspase-9, murine double minute 2 (MDM2), IĸB kinase (IKK) α, proline-rich Akt substrate of 40 kDa (PRAS40), the FOXO family of Forkhead transcription factors, apoptosis signal-regulating kinase 1 [ASK1, a negative regulator of pro-apoptotic c-Jun N-terminal kinase (JNK)], Raf, p27Kip1, p21Cip1, and glycogen synthase kinase 3β (GSK3β). Each of these substrates has a key role in the regulation of cell survival and proliferation, either directly or through an intermediary [16,31]. A rare, oncogenic, activating mutation (E17K) in the PH domain of Akt1 has been detected in some types of solid cancers (breast, colon, ovary). This mutation resulted in constitutive Akt binding to the plasma membrane and was leukemogenic in mice [32].
mTOR mTOR is an atypical 289-kDa serine/threonine kinase, originally identified in the yeast Saccharomyces cerevisiae, which belongs to the PI3K-related kinase family and displays a COOH-terminal catalytic domain with high sequence homology to PI3K (Figure 2). This similarity could explain the cross-inhibition of mTOR by drugs which target PI3K (see below) [33]. mTOR signaling is conserved in eukaryotes, from plants and yeasts to mammals. mTOR exists in two complexes, referred to as mTOR complex 1 (mTORC1) and mTORC2. mTORC1 comprises mTOR/Raptor/mLST8/PRAS40/FKBP38/Deptor and is sensitive to rapamycin and its derivatives (rapalogs). mTORC2 is composed of mTOR/Rictor/mLST8/SIN1/Protor/Deptor and is generally described as being insensitive to rapamycin/rapalogs, although long-term treatment of about 20% of cancer cell lines with rapamycin/rapalogs leads to dissociation of mTORC2 [34,35]. mTORC1 signaling integrates environmental cues (growth factors, hormones, nutrients, stressors) and information from the cell's metabolic status. Thus, mTORC1 controls anabolic processes that promote protein synthesis and cell growth [36]. mTORC1 regulates translation in response to nutrients/growth factors by phosphorylating components of the protein synthesis machinery, including p70S6 kinase (p70S6K) and eukaryotic initiation factor 4E-binding protein 1 (4E-BP1). p70S6K phosphorylates the 40S ribosomal protein S6, leading to active translation of mRNAs, while 4E-BP1 phosphorylation by mTORC1 on several amino acid residues (Ser37, Thr46, Ser65, Thr70) results in the release of eukaryotic initiation factor 4E (eIF4E). eIF4E is a key component for the translation of 5'-capped mRNAs, which include transcripts encoding growth-promoting molecules such as c-Myc, cyclin D1, cyclin-dependent kinase 2, retinoblastoma protein, p27Kip1, vascular endothelial growth factor (VEGF), and signal transducer and activator of transcription 3 (STAT3) [34,37]. Furthermore, mTORC1 negatively regulates autophagy, a non-apoptotic form of cell death, which is attracting much attention, as it could affect the sensitivity of tumors (including leukemias) to various forms of therapy [38]. Akt-mediated regulation of mTORC1 activity involves several mechanisms. Akt inhibits TSC2 (tuberous sclerosis 2, or tuberin) function through direct phosphorylation. TSC2 is a GTPase-activating protein (GAP) which associates with TSC1 (tuberous sclerosis 1, or hamartin) to inactivate the small G protein Rheb (Ras homolog enriched in brain). TSC2 phosphorylation by Akt represses the GAP activity of the TSC1/TSC2 complex, allowing Rheb to accumulate in a GTP-bound state. The mechanism by which Rheb-GTP activates mTORC1 has not been fully elucidated yet, although Rheb must be farnesylated to activate mTORC1 [39]; thus, it could be inhibited by farnesyl-transferase inhibitors (FTIs). Akt also phosphorylates PRAS40, an inhibitor of the interactions between mTORC1 and its substrates, and by doing so prevents the ability of PRAS40 to suppress mTORC1 signaling [40]. Moreover, PRAS40 is a substrate of mTORC1 itself, and it has been demonstrated that mTORC1-mediated phosphorylation of PRAS40 facilitates the removal of its inhibition of mTORC1 [41]. The mechanisms of mTORC2 regulation have only begun to be revealed. However, mTORC2 activation requires PI3K and the TSC1/TSC2 complex, but is independent of Rheb and is largely insensitive to either nutrient or energy conditions [44].
mTORC2 phosphorylates Akt on Ser473, which enhances subsequent Akt phosphorylation on Thr308 by PDK1 [45]. Moreover, mTORC2 plays a role in cytoskeleton organization by controlling actin polymerization [46] and phosphorylates protein kinase C (PKC) α [44]. Another downstream target of mTORC2 is serum- and glucocorticoid-induced protein kinase 1 (SGK1) [47]. The oncogenic role of mTORC2 has been recently highlighted by an investigation that documented the importance of mTORC2 in the development and progression of prostate cancers induced in mice by PTEN loss [48]. Akt and mTORC1/2 are linked to each other via positive and negative regulatory feedback circuits, which restrain their simultaneous hyperactivation through mechanisms involving p70S6K and PI3K. Assuming that an equilibrium exists between mTORC1 and mTORC2, when mTORC1 is formed, it antagonizes the formation of mTORC2 and reduces Akt activity. Indeed, once mTORC1 is activated through Akt, it elicits a negative feedback loop that inhibits Akt activity [34]. This negative regulation of Akt activity by mTORC1 is a consequence of p70S6K-mediated phosphorylation of the insulin receptor substrate (IRS) 1 adapter protein, downstream of the insulin receptor and/or the insulin-like growth factor-1 receptor (IGF-1R) [49,50]. Indeed, IRS-1 phosphorylation on Ser307 and Ser636/639 by p70S6K targets the adapter protein for proteasomal degradation [51]. Therefore, at least in principle, inhibition of mTORC1 activity by rapamycin/rapalogs could result in hyperactivation of both Akt and its downstream targets. Such a phenomenon has been documented to occur both in vitro and in vivo [52,53]. mTORC1 is also capable of downregulating IRS2 expression by enhancing its proteasomal degradation [54]. Consistently, mTORC1 inhibition by the rapalog RAD001 increased IRS2 expression and Akt phosphorylation levels in AML cells [55]. Recent work has also highlighted a p70S6K-mediated phosphorylation of Rictor on Thr1135. This phosphorylation event exerted a negative regulatory effect on the mTORC2-dependent phosphorylation of Akt in vivo [56]. Thus, both mTORC1 and mTORC2 control Akt activation. Nevertheless, the extent to which disruption of these negative feedback mechanisms actually limits the therapeutic effects of mTOR inhibitors in cancer patients in vivo remains to be determined [57].

Activation of PI3K/Akt/mTOR signaling in AML From 50% to 80% of patients with AML display Akt phosphorylated on either Thr308 or Ser473 (or both) [66-71]. Both disease-free survival and overall survival were significantly shorter in AML cases where pathway up-regulation was documented [70,72-74]. The poor prognosis of AML patients with elevated PI3K/Akt/mTOR signaling could also be related to the fact that this pathway controls the expression of the membrane ATP-binding cassette (ABC) transporter multidrug resistance-associated protein 1, which extrudes chemotherapeutic drugs from leukemic cells and is usually associated with a lower survival rate [75,76]. Nevertheless, a more recent report has highlighted that constitutive activation of PI3K/Akt/mTOR signaling could be a favourable prognostic factor in de novo cases of AML. One hypothesis for the lower relapse rate in patients with enhanced PI3K/Akt/mTOR signaling is that it could drive immature leukemic cells (LSCs and CFU-L) into S phase, thus rendering them more susceptible to polychemotherapy [77].
PI3K/Akt/mTOR signaling up-regulation in AML may be the result of several factors, including activating mutations of the Fms-like tyrosine kinase 3 (FLT3) receptor [71] and the c-Kit tyrosine kinase receptor [78], N- or K-Ras mutations [79], PI3K p110β and/or p110δ overexpression [80-82], low levels of PP2A [70], and autocrine/paracrine secretion of growth factors such as IGF-1 [82-84] and VEGF [85,86]. Overexpression of PDK1 has been reported in 45% of a cohort of 66 AML patients; however, it was related to PKC hyperphosphorylation, while its relationship (if any) with Thr308 Akt up-regulation was not investigated [87]. Interactions between leukemic cells and bone marrow stromal cells through CXCR4 (a GPCR which is abundantly expressed on the leukemic cell surface, where it is up-regulated by hypoxic conditions [88,89]) and its physiological ligand CXCL12, produced by stromal cells [89,90], could result in PI3K/Akt/mTOR activation [91]. Furthermore, interactions between β1 integrins on AML cells and stromal fibronectin could lead to pathway activation [92,93], possibly through up-regulation of integrin-linked kinase 1 (ILK1), which is involved in Akt phosphorylation on Ser473 in a PI3K-dependent manner in AML cells [94]. The ability of ILK1 to function as a Ser473 Akt kinase could be related to the fact that ILK1 interacted with Rictor and was required for Akt phosphorylation by mTORC2 on Ser473 [95]. Possible causes of pathway activation in AML cells are highlighted in Figure 3. No activating mutations in p110α PI3K [96] or the Akt1 PH domain [70,97] have been detected so far in AML patients. Although PTEN is deleted in many solid cancers and in T-cell acute lymphoblastic leukemia, PTEN deletion is extremely rare in AML [66,69,70]. PTEN can be inactivated by post-translational mechanisms, including phosphorylation at the COOH-terminal regulatory domain. This phosphorylative event stabilizes the PTEN molecule but makes it less active towards PtdIns(3,4,5)P3, thus resulting in Akt up-regulation [98]. PTEN phosphorylation has been reported in AML patients, where it was significantly associated with high levels of p-Akt and with shorter overall survival [99]. However, subsequent studies could not confirm these findings [70,74]. A reassessment of the role of PTEN in AML could be important, as in mice, hematopoietic stem cells without functional PTEN began multiplying rapidly, showed diminished self-renewal capacity, and started to move out of the bone marrow, colonizing distant organs and originating a leukemia-like disease [100,101]. Of note, these effects were mostly mediated by mTOR, as rapamycin not only depleted LSCs but also restored normal hematopoietic stem cell function [101]. It is conceivable that several concomitant extrinsic and intrinsic causes converge to activate PI3K/Akt/mTOR signaling in AML patients, even if this fundamental issue has not been thoroughly investigated. Indeed, in the only published study, it was demonstrated that, in a small cohort of patients, overexpression of PI3K p110δ [81] could coexist with activating FLT3 and Ras mutations. It has also been reported that mTORC1 activation was independent of PI3K/Akt activity in AML patients [55].

Fig. 3. Constitutive activation of PI3K/Akt signaling in AML cells. In this cartoon, mutated (Mut) c-Kit, FLT3, or Ras, and autocrine/paracrine secretion of growth factors (VEGF, IGF-1), impinge upon increased levels of p110β and/or p110δ PI3K. This results in high levels of PtdIns(3,4,5)P3 synthesized at the plasma membrane from PtdIns(4,5)P2.
PtdIns(3,4,5)P3 recruits to the plasma membrane both PDK1 and inactive Akt (Akt off). PDK1 phosphorylates Akt on Thr308, whereas phosphorylation on Ser473 is driven by mTORC2. These two phosphorylative events fully activate Akt (Akt on). Bone marrow stromal cells secrete CXCL12 and fibronectin. Fibronectin, by interacting with β integrins, could activate ILK, which in turn stimulates mTORC2 activity on Ser473 of Akt. CXCL12 binds its receptor CXCR4, a GPCR, resulting in increased PI3K activity. Bone marrow stromal cells could also secrete VEGF and IGF-1. Activated Akt migrates to both the nucleus and the cytosol to phosphorylate its substrates.

In some AML cases, it has been documented that either MEK/ERK 1/2 [102] or Lyn signaling [103] could be upstream of mTORC1. TSC2 gene expression was found to be down-regulated in AML patients, most likely due to promoter hypermethylation; however, it is not known whether this impinged on mTORC1 activation [104]. It should be emphasized here that PI3K/Akt/mTOR network up-regulation has been detected not only in the bulk of the AML blasts, but also in LSCs transplanted into non-obese diabetic/severe combined immunodeficiency (NOD/SCID) mice, where it exerted a powerful pro-survival effect. This finding suggests that therapeutic targeting of this pathway has the potential to eradicate AML [105].

Targeting the PI3K/Akt/mTOR module in AML Either used alone or in combination with other drugs, PI3K/Akt/mTOR signaling inhibitors have proven useful for down-regulating cell proliferation and inducing apoptosis in pre-clinical settings of AML, using cell lines or animal models. However, clinical trials of these compounds are limited. We shall now highlight some compounds which have been used for targeting PI3K/Akt/mTOR signaling in AML cells.

PI3K inhibitors Wortmannin and LY294002 are the best-characterized PI3K inhibitors and have been widely used as research tools to elucidate the role of PI3K/Akt/mTOR signaling in various tumor cells. Both inhibitors are cell-permeable, low-molecular-weight compounds. Wortmannin is a natural metabolite produced by Penicillium wortmanni and inhibits all classes of PI3K with a 50% inhibitory concentration (IC50) in vitro of 2-5 nM, while inhibiting other kinases [mTOR, DNA-dependent protein kinase (DNA-PK), and ataxia telangiectasia mutated kinase] with higher IC50 values [106]. It is interesting that DNA-PK was found to phosphorylate Akt on Ser473 under conditions of DNA damage [107]. LY294002 is a flavonoid-based synthetic compound and inhibits PI3K with an IC50 of 1-20 μM. However, LY294002 blocks not only PI3K activity but also mTOR, DNA-PK, Pim kinases, polo-like kinase, and CK2 to the same extent as PI3K [106]. Both wortmannin and LY294002 bind to the p110 catalytic subunit of PI3K, blocking ATP binding at the active site. PI3K inhibition by LY294002 is reversible and ATP-competitive, while wortmannin irreversibly inhibits PI3K in a non-ATP-competitive manner [106]. Wortmannin and LY294002 have been used in preclinical models of AML, where they displayed powerful cytotoxic effects in vitro [66,79,108,109]. Since their insolubility in aqueous solutions and high toxicity precluded their clinical application, efforts to develop PI3K inhibitors more suitable for clinical use are currently underway [110]. Several selective inhibitors of the p110 PI3K isoforms are now available [111]. IC87114 is a compound that selectively inhibits the p110δ isoform of PI3K.
IC87114 downregulated p-Akt and p-FOXO3a, reduced proliferation, and induced apoptosis in AML primary cells overexpressing p110δ PI3K. Moreover, it synergized with etoposide [81]. In primary APL cells, both IC87114 and TGX-115 (a p110β PI3K-selective inhibitor) triggered apoptosis in the presence or absence of the differentiating agent ATRA [29]. Conceivably, the use of selective PI3K isoform inhibitors could be associated with fewer undesirable side effects than the use of broad-spectrum PI3K inhibitors [111]. For example, it is established that insulin control of glucose homeostasis is mediated mainly through p110α PI3K [112] and, to a much lesser extent, by p110β PI3K [113].

Akt inhibitors
Perifosine is a zwitterionic, water-soluble, synthetic alkylphosphocholine with oral bioavailability that inhibits Akt phosphorylation through interaction with the Akt PH domain, resulting in disruption of its membrane targeting. Interestingly, recent evidence has documented that perifosine targets both mTORC1 and mTORC2 activity by downregulating the levels of mTOR, raptor, rictor, p70S6K, and 4E-BP1, owing to their enhanced degradation [114]. Perifosine reduced cell proliferation and induced apoptosis accompanied by Akt dephosphorylation in a wide variety of neoplasias, including AML [115]. Perifosine synergized with etoposide in AML blasts and reduced the clonogenic activity of CD34+ cells from leukemic patients, but not from healthy donors [116]. Moreover, perifosine synergized with histone deacetylase inhibitors [117] or pro-apoptotic TRAIL (TNF-related apoptosis-inducing ligand) in AML cell lines and primary cells displaying constitutive Akt activation [118]. However, perifosine also targeted the MEK/ERK 1/2 pro-survival pathway and activated pro-apoptotic JNK [116][117][118][119][120]; therefore, it cannot be considered specific for the Akt pathway. A phase I clinical trial combining perifosine and UCN-01 (a staurosporine derivative which inhibits PDK1) (NCT00301938) and a phase II clinical trial with perifosine alone (NCT00391560) have been performed in patients with refractory/relapsed AML, but the results have not yet been disclosed. Akt-I-1/2 is a synthetic, reversible, allosteric Akt1/Akt2 isoform-specific inhibitor that locks Akt1 and Akt2 in a PH domain-dependent inactive conformation [121]. Akt-I-1/2 inhibited cell proliferation and clonogenic properties, and induced apoptosis, in AML cells with high-risk cytogenetic abnormalities [70]. However, it is at present unknown which Akt isoforms are expressed by AML blasts.

mTOR inhibitors
mTOR inhibitors are by far the most developed class of compounds targeting the PI3K/Akt/mTOR pathway. They include rapamycin (sirolimus, a macrolide derived from the bacterium Streptomyces hygroscopicus, originally discovered in a soil sample collected on Easter Island) and its derivatives CCI-779 (temsirolimus), RAD001 (everolimus), and AP23573 (deforolimus) [122]. Temsirolimus was approved by the US Food and Drug Administration in 2007 for the first-line treatment of poor-prognosis patients with advanced renal cell carcinoma. The overall survival of treated patients was increased by nearly 50% (~3 months) relative to the control group [123]. Some clinical benefits of rapamycin/rapalogs have also been reported against endometrial carcinoma and mantle cell lymphoma; however, the overall objective response rates in major solid tumors have been modest [124].
Rapamycin and rapalogs do not target the catalytic site of mTORC1, but rather bind the immunophilin FK506-binding protein 12 (FKBP12) (Figure 2). The rapamycin/FKBP12 complex then binds mTORC1 and inhibits downstream signaling events [125]. Thus, rapamycin and rapalogs act as allosteric mTORC1 inhibitors. Recent evidence has documented that complex formation with FKBP12 is not an absolute requirement for repression of mTORC1 activity by rapamycin/rapalogs; however, in the absence of FKBP12, the drugs display a 100- to 1000-fold lower potency than in the presence of the immunophilin [126]. Available data suggest that rapamycin treatment, over long time periods, also targets mTORC2 [127]. Accordingly, both CCI-779 and RAD001 (10-20 nM) inhibited Akt phosphorylation on Ser473 in AML cells in vitro and in patients in vivo after a 24 h incubation, through suppression of mTORC2 assembly [128]. In contrast, it has been documented that RAD001 (10 nM for 24 h) increased Akt phosphorylation on Ser473 in vitro in AML samples displaying constitutive PI3K/Akt activation [55]. Since a neutralizing monoclonal antibody to the IGF-1R α-subunit reversed the RAD001-induced increase in Akt phosphorylation, and RAD001 treatment led to a significant increase in IRS2 protein expression, it was concluded that p-Akt upregulation could be explained by the existence of an IGF-1/IGF-1R autocrine loop, as well as by increased expression of IRS2. At present, it is not easy to reconcile these contradictory findings. Rapamycin had only a modest effect on primary AML cell survival in liquid culture; however, it markedly downregulated AML blast clonogenicity while sparing normal hematopoietic precursors [129]. Accordingly, others have reported that rapamycin led to only a slight decrease in AML blast survival in short-term cultures, whereas in long-term cultures the effect was more pronounced [105]. These results suggest that the target of rapamycin is the proliferating contingent of the leukemic clone, rather than the bulk of AML blasts, which are predominantly blocked in the G0/G1 phase of the cell cycle. However, rapamycin cytotoxicity in short-term cultures could be dramatically increased by co-treatment with etoposide. Importantly, etoposide toxicity to CD34+ cells from healthy donors was not enhanced by the addition of rapamycin. Of note, co-incubation with rapamycin enhanced the etoposide-mediated decrease in the engraftment of AML cells in NOD/SCID mice, suggesting the drugs also targeted putative LSCs [105]. The rapalog RAD001 synergized with both ATRA and histone deacetylase inhibitors in inducing growth arrest and differentiation of APL cell lines [130,131]. A few phase I/II clinical trials with rapamycin and rapalogs have been performed in patients with relapsed/refractory AML. Rapamycin induced a partial response in 4 of 9 adult patients with de novo or secondary AML who displayed activation of mTORC1 signaling, as documented by increased levels of p-p70S6K and p-4E-BP1 [129]. RAD001 has been evaluated in a phase I clinical trial in patients with relapsed/refractory hematologic malignancies, including AML [132]. However, no AML patients achieved a complete or even partial response. AP23573 has been tested in a phase II study in 22 patients with AML [133]. Only one patient displayed an objective hematological improvement, consisting of normalization of neutrophil counts. A significant reduction in mTORC1 activity was observed in response to the drug, as documented by decreased p-4E-BP1 levels.
A recent phase I study in which rapamycin was combined with MEC (mitoxantrone, etoposide, cytarabine) polychemotherapy failed to demonstrate any synergistic effect of the combination in relapsed/refractory AML patients, even though proof of rapamycin biological activity in vivo was detected, consisting of the dephosphorylation of p70S6K [134]. Several clinical trials of rapamycin/rapalogs combined with chemotherapeutic agents are now underway in AML patients [135]. Moreover, a phase I study has recently documented the efficacy, in elderly AML patients, of the combination of etoposide and tipifarnib (R115777, a farnesyltransferase inhibitor, FTI). Intriguingly, the effect of tipifarnib was not always related to Ras inhibition, but rather to inhibition of Rheb farnesylation and, consequently, of mTORC1 signaling, as documented by decreased levels of p-p70S6K and of its substrate, p-S6 [136].

Dual PI3K/mTOR inhibitors
The rationale for using dual PI3K/mTOR inhibitors is that allosteric mTORC1 inhibitors, such as rapamycin/rapalogs, could hyperactivate Akt through the p70S6K/PI3K feedback loop, as discussed earlier in this review. Moreover, it is now emerging that rapamycin/rapalogs have only modest efficacy on total translation rates, and the effects are cell-type specific. In contrast, small molecules designed to inhibit the catalytic site of mTOR were much more effective in this respect, especially in cancer cells [137][138][139][140][141]. Such a phenomenon has recently been reported to occur also in AML cells, where rapamycin was unable to block protein synthesis, owing to a failure to induce 4E-BP1 dephosphorylation [142]. Furthermore, in some AML cases, mTORC1 activity does not seem to be under the control of PI3K/Akt, despite concomitant PI3K/Akt activation [103]. Therefore, the use of a single inhibitor which targets both the PI3K and mTORC1 catalytic sites could present substantial advantages over drugs which target only PI3K/Akt or mTORC1. PI-103 is a synthetic molecule of the pyridonylfuranopyrimidine class that represses the activity of both class IA and IB PI3Ks, as well as of mTORC1/mTORC2 [143,144]. Two papers have documented the efficacy of PI-103 in pre-clinical settings of AML. It has been reported that PI-103, which by itself displayed only modest pro-apoptotic activity, acted synergistically with Nutlin-3 (an MDM2 inhibitor) [145,146] to induce apoptosis in a wild-type p53-dependent fashion in AML cell lines and primary cells [147]. Another group demonstrated that PI-103 was mainly cytostatic for AML cell lines. However, in AML blast cells, PI-103 inhibited leukemic proliferation and CFU-L clonogenicity, induced mitochondrial apoptosis, and synergized with etoposide [148]. Of note, PI-103 was not apoptogenic in CD34+ cells from healthy donors and had only moderate effects on their clonogenic and proliferative activities. Since neither RAD001 nor IC87114 induced apoptosis in AML primary cells, it was concluded that dual-targeted therapy against PI3K/Akt and mTOR with PI-103 may be of therapeutic value in AML [148]. Nevertheless, it is conceivable that the new frontier in mTOR inhibition will be represented by the second-generation, ATP-competitive mTOR inhibitors which bind the active site of both mTORC1 and mTORC2 [137][138][139][140]. These drugs target mTOR signaling functions globally, so they are expected to yield a deeper and broader antitumor response in the clinic. However, global inhibition of mTOR is also expected to be accompanied by greater toxicity to normal cells [149].
CONCLUSIONS
In this review, we have documented that the PI3K/Akt/mTOR pathway influences the proliferation, survival, and drug resistance of AML cells. However, there are still many unresolved problems regarding the relevance of PI3K/Akt/mTOR pathway up-regulation and its druggability in AML patients. We have very limited knowledge of the downstream targets (genes/proteins) of this pathway in AML cells. Therefore, more detailed investigations of these targets are highly desirable. Indeed, data emerging from gene expression and proteome/phosphoproteome analyses could pave the way for functional studies, which could in turn provide valuable information for improving future therapeutic strategies. At present, we do not know which is the most effective target in the pathway, or whether combinations of horizontal or vertical blockade of the signaling cascade may be more effective than blocking at a single node [150]. As with all molecularly targeted approaches, pharmacodynamic markers are necessary to direct the therapeutic development of PI3K/Akt/mTOR inhibitors. Hence, clinical trials should examine inhibitor effects on PI3K/Akt/mTOR targets to establish the best predictor of response [151]. However, no predictive markers for AML patients with a high probability of responding to PI3K/Akt/mTOR inhibition, and no biomarkers of dose/efficacy, have been validated. Quantitative flow cytometry appears particularly well suited for this kind of analysis, because it offers obvious advantages over other techniques (western blot, for example), including speed, a much lower number of cells required to perform the assay, and the possibility of identifying different subclones in the leukemic population by co-immunostaining with multiple antibodies to surface antigens. Accordingly, flow cytometry is rapidly becoming the analytical technique of choice for studying PI3K/Akt/mTOR pathway activation in AML patients [70,133,152,153]. Another promising quantitative technique requiring a limited number of cells, which has already been applied to the study of AML patient samples, is reverse-phase protein arrays [74]. It is highly unlikely that inhibition of a single signaling pathway will achieve long-lasting remissions or cures in AML, especially for refractory/relapsed patients. However, combining PI3K/Akt/mTOR inhibitors with conventional chemotherapy drugs, differentiation inducers (ATRA and/or arsenic trioxide), or innovative agents (e.g., TRAIL) could be a very effective therapeutic option for AML patients, as indicated by results obtained in pre-clinical settings. The spectacular effect of Bcr-Abl tyrosine kinase inhibitors, such as imatinib, for the treatment of chronic myelogenous leukemia (CML) patients in the chronic phase of the disease [154] has fed optimism that modulators of signal transduction networks might be very effective also in other types of cancer. However, clinical trials performed with small molecules targeting the PI3K/Akt/mTOR pathway have mostly given disappointing outcomes. This has led to the suggestion that imatinib's success in CML may be the exception rather than the rule, because imatinib is one of the few examples of a drug targeting the anomaly which constitutes the underlying pathogenic event in the formation of the disorder [155]. Human cancers are known to evolve through a multistage process which can extend over a period of several years, progressively accumulating mutations and epigenetic anomalies affecting the expression of multiple genes [156].
As a consequence, neoplastic disorders are characterized by multiple signaling abnormalities, and the deregulated pathways are extremely redundant. Furthermore, the hierarchy of anomalies has not been established in many tumors. Therefore, it could be very difficult to find the right target or combination of targets. AML is no exception to this rule. However, the continuous development of molecularly targeted drugs displaying higher selectivity, coupled with additional mechanistic studies and advances in profiling the signaling networks of cancer cells, should make it possible to exploit deregulation of the PI3K/Akt/mTOR cascade to achieve more effective and less toxic therapies for AML.
7,947.8
2010-05-27T00:00:00.000
[ "Biology" ]
64-pixel NbTiN superconducting nanowire single-photon detector array for spatially resolved photon detection We present the characterization of a two-dimensionally arranged 64-pixel NbTiN superconducting nanowire single-photon detector (SSPD) array for spatially resolved photon detection. NbTiN films deposited on thermally oxidized Si substrates enabled the high-yield production of high-quality SSPD pixels, and all 64 SSPD pixels showed uniform superconducting characteristics. Furthermore, all of the pixels showed single-photon sensitivity, and 60 of the 64 pixels showed a pulse generation probability higher than 90% after photon absorption. When the array was irradiated from a single-mode optical fiber at different distances between the fiber tip and the active area, the variation of the system detection efficiency across the pixels showed a reasonable Gaussian distribution, representing the spatial distribution of the photon flux intensity.

… ranging, high-resolution depth imaging, free-space optical communications, and so on. One of the critical issues in realizing large-format SSPD arrays is how to reduce the heat flow from room temperature through coaxial cables, the number of which increases as the number of pixels increases in a conventional readout scheme. Therefore, our primary effort so far has been focused on the development of cryogenic readout electronics using a single flux quantum (SFQ) circuit, because the required number of cables can be drastically reduced by implementing SFQ circuits and SSPDs in a cryocooler system. We have confirmed correct SFQ operation with output signals from SSPDs [14], and we have succeeded in implementing a four-pixel SSPD array and SFQ circuit in a Gifford-McMahon (GM) cryocooler system with no serious crosstalk [15]. Accordingly, scaling up the number of SSPD pixels should be the next step, which will require significant uniformity of the superconducting nanowire characteristics. Because our recent development of single-pixel NbTiN SSPDs prepared on a thermally oxidized Si substrate can provide devices with high system detection efficiency (SDE) at high yield [4], it is natural to apply this technology to scaling up the SSPD arrays. In this work, we report the development of a two-dimensionally arranged 64-pixel NbTiN SSPD array and the characterization of its electrical and optical properties in a GM cryocooler system. We also show the spatial resolution of the 64-pixel SSPD array by irradiating it with incident light from a single-mode optical fiber at different distances between the fiber tip and the active area.

Figure 1 shows a scanning electron micrograph (SEM) of the 64-pixel NbTiN SSPD device. The NbTiN nanowire pixels were fabricated on a Si substrate with a 250-nm-thick thermally oxidized SiO2 layer. Although we chose a thermally oxidized substrate that can accommodate a double-sided cavity structure by placing a λ/4 dielectric cavity and mirror on the nanowire [4], these were not embedded in this device, in order to exclude the influence of optical absorptance fluctuations that may be caused by thickness variations of each layer or by partial imperfections of the cavity structure. The fabrication process of the NbTiN SSPDs was basically the same as described elsewhere [4,16]. The 5-nm-thick NbTiN nanowire was formed into 100-nm-wide, 100-nm-spaced meandering lines covering an area of 5 × 5 µm, thus configuring one nanowire pixel. The 8 × 8 nanowire pixels were two-dimensionally arranged with a spacing of 3.4 µm, covering an area of 63 × 63 µm.
The 200-nm-wide interconnection lines were formed using the same 5-nm-thick NbTiN film in the spaces between the nanowire pixels; the interconnection lines were made twice as wide as the nanowires in the pixels in order to prevent a response to single-photon incidence. The interconnection lines were then connected to coplanar waveguide (CPW) lines. Since the dc resistance of long (~2 mm) CPW lines could disturb the correct operation of the SSPD, 100-nm-thick NbN films with a superconducting transition temperature (Tc) of ~15 K were used to ensure zero dc resistance. Figure 2(a) shows a photograph of the chip-mounting block for the 64-pixel SSPD array. As shown in the figure, the 64-pixel SSPD array chip was mounted on the chip-mounting block with a specifically designed printed circuit board (PCB), and each nanowire pixel was wire-bonded to a 50 Ω microstrip line on the PCB. Since the total number of coaxial cables introduced into our cryostat system was nine, we characterized the electrical and optical properties of the 64 nanowire pixels by changing the connections between the microstrip lines on the PCB and the coaxial cables in turn. A single-mode optical fiber for a wavelength of 1550 nm was introduced into the cryocooler system, and the end of the fiber was fixed to the rear side of the chip-mounting block after aligning the incident light from the fiber with the device active area, as schematically drawn in Fig. 2(b). We adjusted the distance between the fiber tip and the device active area (L_fiber-sspd) to 3 mm and 470 µm in order to observe the incident photon response at the respective distances. The packaged block was cooled with a 0.1 W GM cryocooler system, which can cool the sample stage to 2.3 K [17].

Experimental procedure
To measure the SDE, a continuous tunable laser was used as the input photon source. The wavelength of the light source was fixed at 1550 nm and attenuated so that the photon flux at the input connector of the cryostat was 10^9-10^11 photons/s. Although these values are much higher than those usually used for SDE measurements (~10^6 photons/s [16]), the incident photon flux to each pixel was low enough to maintain the linearity of the output counts to the photon flux, owing to the low coupling efficiency to each pixel (P_couple,pixel). We confirmed the linearity of the output counts to the incident photon flux before deciding on the input photon flux. A fiber polarization controller was inserted in front of the cryocooler's optical input in order to control the polarization of the incident photons so as to maximize the SDE. The SDE was determined by the relation SDE = (R_output − R_DCR)/R_input, where R_output is the SSPD output pulse rate, R_DCR the dark count rate, and R_input the input photon flux rate to the system. The SDE of each pixel was measured individually by changing the connection of the readout components on the outer side of the cryocooler system.
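To make the efficiency bookkeeping concrete, here is a minimal sketch of the relation SDE = (R_output − R_DCR)/R_input (our own illustration in Python; the example rates are hypothetical, not measured values from this work):

def system_detection_efficiency(r_output, r_dcr, r_input):
    # SDE = (R_output - R_DCR) / R_input, with all rates in counts per second
    return (r_output - r_dcr) / r_input

# Hypothetical numbers: 1.2e4 output pulses/s, 50 dark counts/s, and
# 1e9 photons/s at the cryostat input connector
sde = system_detection_efficiency(1.2e4, 50.0, 1e9)
print(f"SDE = {sde:.2e}")  # ≈1.2e-05, small because P_couple,pixel is low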
Results
The measured electrical characteristics indicate that all pixels were fabricated uniformly, without significant defects in terms of their electrical properties. We then characterized the optical responses of all pixels. Figure 4(a) shows the SDE as a function of bias current for the 64 nanowire pixels when L_fiber-sspd is adjusted to 3 mm. Although the absolute value of the SDE was low due to the low P_couple,pixel, all pixels showed a response to single-photon irradiation. In addition, the bias current dependencies of most of the pixels reached a plateau, indicating a high pulse-generation probability after photon absorption (P_pulse). Figure 4(b) shows the histogram of the maximum pulse generation probability in each pixel (P_pulse,max), the values of which were derived from fitting with a sigmoid function [4,18]. The sigmoid shape of the bias current dependence of the SDE has been shown empirically, and it also fits our devices well in this work [2,4,19]. As shown in the figure, 60 of the 64 pixels exceeded an approximated P_pulse of 90%. The SDE of each nanowire pixel can be expressed as SDE = P_couple,pixel × P_abs × P_pulse × (1 − P_loss), where P_abs is the optical absorptance of the nanowire and P_loss is the optical loss of the system. Here, the asymptotic system detection efficiency SDE_asymp, which is the expected plateau value of the bias current dependence, should be proportional only to P_couple,pixel if P_abs and P_loss are uniform for all pixels, because P_pulse can then be treated as 1.0. As described above, the dielectric cavity and mirror layers on the nanowires were intentionally omitted in order to keep the variation of P_abs as small as possible. P_loss must be constant over the 64 pixels because we introduced the incident light through one optical fiber to all pixels. Therefore, the spatial distribution of SDE_asymp can clearly reflect the spatial distribution of the incident photon intensity. Figures 5(a) and (b) show the color maps of SDE_asymp over the 64 pixels when L_fiber-sspd is 3 mm and 470 µm, respectively. In the figures, SDE_asymp for each pixel was normalized by the highest value among the 64 pixels. When L_fiber-sspd is 3 mm, the light from the fiber end spreads, and the beam waist at the device active area becomes much larger than the 63 × 63 µm active area. Since the light from the single-mode fiber follows a Gaussian beam profile, the active area is exposed to a tiny central region of the Gaussian beam, where the spatial photon flux intensity distribution can be regarded as approximately flat. On the other hand, the light from the fiber end at an L_fiber-sspd of 470 µm does not spread as much as at 3 mm, and the beam waist at the active area is expected to be smaller than the active area, resulting in spatial variations of the illuminated light power over the device active area according to the Gaussian distribution. The obtained spatial distributions of SDE_asymp in Figs. 5(a) and (b) clearly represent the spatial photon flux intensity distribution, as explained above. In particular, the SDE_asymp distribution in Fig. 5(b) could be well fitted with a Gaussian function by the method of least squares, as shown in Fig. 5(c), and the beam waist (2ω_0) was estimated to be 19.1 µm at an L_fiber-sspd of 470 µm.
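As a rough illustration of this least-squares step, the sketch below (our reconstruction, not the authors' analysis code; the pixel pitch and the synthetic data are assumptions) fits a Gaussian beam profile to an 8 × 8 map of normalized SDE_asymp values and extracts the beam waist 2ω_0:

import numpy as np
from scipy.optimize import curve_fit

PITCH = 8.4  # assumed pixel pitch in µm (5 µm pixel + 3.4 µm spacing)

def gaussian_beam(xy, amp, x0, y0, w0):
    # Gaussian beam intensity: I(r) = amp * exp(-2 r^2 / w0^2)
    x, y = xy
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return amp * np.exp(-2.0 * r2 / w0 ** 2)

# Pixel-center coordinates of the 8 x 8 array, in µm
c = (np.arange(8) - 3.5) * PITCH
X, Y = np.meshgrid(c, c)

# sde_map stands in for the measured, normalized SDE_asymp values
sde_map = gaussian_beam((X, Y), 1.0, 1.5, -2.0, 9.5)
sde_map += np.random.normal(0.0, 0.02, sde_map.shape)

popt, _ = curve_fit(gaussian_beam, (X.ravel(), Y.ravel()), sde_map.ravel(),
                    p0=(1.0, 0.0, 0.0, 10.0))
print(f"beam waist 2*w0 = {2 * abs(popt[3]):.1f} um")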
Although SDE_asymp was verified to be a good reference for determining the photon flux intensity distribution, deriving its value requires fitting a sigmoid function to the bias current dependence, which is unfavorable for realizing a real-time signal processing unit such as an SFQ circuit. For simultaneous 64-pixel operation with real-time signal processing, the bias currents supplied to each pixel should be the same, in order to employ a simple biasing scheme such as that reported in [20]. Therefore, we next verified whether the actual SDE values at a constant bias current represent the spatial photon flux intensity distribution well. Figures 6(a)-(d) show color maps of the SDE distributions (left) and histograms of P_pulse (right) at an L_fiber-sspd of 470 µm at constant bias currents of 15.0, 15.5, 16.0, and 16.5 µA, respectively. As the bias current increased, the SDEs approached their asymptotic values and the variations over the 64 pixels decreased, making it possible to represent the spatial distribution of the photon flux intensity more precisely. However, the number of nanowire pixels that could not operate due to latching into a resistive state (N_disabled) increased with increasing bias current, resulting in poorer visibility, as shown in the figures. On the other hand, N_disabled was zero at the lowest bias current of 15.0 µA. Although the variations of P_pulse were larger than those at higher bias currents, the SDE variations still represented the photon flux intensity properly. The value of 2ω_0 estimated from the Gaussian fit was 18.5 µm, which is almost the same as that obtained from SDE_asymp.

Conclusion
We characterized the electrical and optical properties of a 64-pixel NbTiN SSPD array prepared on a thermally oxidized Si substrate. Our two-dimensionally arranged 64-pixel SSPD array exhibited uniform superconductivity in all pixels and a pulse generation probability higher than 90% in 60 of the 64 pixels. We verified that the spatial distribution of SDE_asymp in the 64-pixel SSPD array reasonably represents that of the photon flux intensity by irradiating the array with light from the fiber tip at different distances from the device active area. In addition, even at a constant bias current of 15.0 µA, we obtained SDE distributions similar to the SDE_asymp distributions, with no disabled pixels. Our next study will be the simultaneous operation of all 64 nanowire pixels with an SFQ signal processing circuit for real-time spatially resolved photon detection. To accomplish this, the 64-pixel NbTiN SSPD array verified in this work already has favorable features. For example, a bias current of 15.0 µA is adequate for operating the SFQ circuit [21], and the adequate image acquisition of the photon intensity distribution at a constant bias current makes it possible to apply a parallel biasing scheme with few feed lines [18]. Although further improvement of the bias current dependence of the SDE, such as that achieved by WSi SSPDs at a low operating temperature of ~300 mK, would be more favorable for keeping the variations of P_pulse small even in low bias current regions [2], the results of this work provide insights toward realizing large-format position-sensitive SSPD arrays even at operating temperatures of 2-3 K.

Fig. 1. Scanning electron micrograph of the 64-pixel NbTiN SSPD array.
3,031
2014-01-31T00:00:00.000
[ "Physics" ]
An OpenCL-Based FPGA Accelerator for Faster R-CNN In recent years, convolutional neural network (CNN)-based object detection algorithms have made breakthroughs, and much research has addressed the corresponding hardware accelerator designs. Although many previous works have proposed efficient FPGA designs for one-stage detectors such as YOLO, there are still few accelerator designs for the Faster Regions with CNN features (Faster R-CNN) algorithm. Moreover, the inherently high computational complexity and high memory complexity of CNNs bring challenges to the design of efficient accelerators. This paper proposes a software-hardware co-design scheme based on OpenCL to implement a Faster R-CNN object detection algorithm on FPGA. First, we design an efficient, deeply pipelined FPGA hardware accelerator that can implement Faster R-CNN algorithms for different backbone networks. Then, an optimized hardware-aware software algorithm is proposed, including fixed-point quantization, layer fusion, and a multi-batch regions-of-interest (RoIs) detector. Finally, we present an end-to-end design space exploration scheme to comprehensively evaluate the performance and resource utilization of the proposed accelerator. Experimental results show that the proposed design achieves a peak throughput of 846.9 GOP/s at a working frequency of 172 MHz. Compared with the state-of-the-art Faster R-CNN accelerator and the one-stage YOLO accelerator, our method achieves 10× and 2.1× inference throughput improvements, respectively.

It is a huge challenge to deploy a CNN-based object detection network model that is computationally and storage intensive to mobile devices with limited resources (such as smartphones, smart wearable devices, etc.). As shown in Table 1, the Faster R-CNN detection model [4], whose backbone network is vgg16 [2], requires up to 271.7 billion floating-point operations (FLOPs) and more than 137 megabytes (MB) of model parameters. Therefore, we need to choose a suitable computing platform for object detection. Recent studies have shown [9] that the computing capacity of a typical CPU can only reach 10-100 giga floating-point operations per second (GFLOPS), with an energy efficiency normally below 1 giga-operation per joule (GOP/J). In contrast, the computing power of a GPU can be as high as 10 tera-operations per second (TOP/s), which makes it a good choice for object detection applications. However, GPUs can usually only conduct 32-bit or 16-bit floating-point operations and rely heavily on off-chip storage, which makes power consumption high (typical GPUs exceed 200 W). In addition, FPGAs are becoming a candidate platform for energy-saving, low-latency neural network acceleration through hardware designs tailored to neural networks. An FPGA can perform data-parallel and task-parallel computing simultaneously to help improve efficiency. The flexibility of FPGAs also leaves more room for the realization and optimization of neural network algorithms. Furthermore, FPGA-based CNN hardware accelerator designs [10][11][12][13][14][15] are developing rapidly due to their reconfigurability and fast development time, especially now that FPGA vendors provide high-level synthesis (HLS) tools. The authors of [10] proposed a design space exploration method by optimizing the computing resources and external memory access of the CNN accelerator, but they only implemented the convolutional layer. The authors of [13] proposed a fixed-point CNN accelerator design scheme based on the OpenCL framework.
However, because their convolution implementation was based on a matrix-multiplication mode with separately designed device kernels, the advantages of the FPGA's deep pipeline characteristics were not exploited to achieve higher computing efficiency and smaller storage bandwidth. Due to the higher computational complexity of object detection algorithms and their more complex network designs, hardware accelerator designs [16][17][18][19][20][21][22][23] for CNN-based object detection algorithms are still rare, from both the computing and storage perspectives. Table 1 shows that the two-stage Faster R-CNN detection algorithm is 5×-7× more computationally expensive than the single-stage YOLO detection algorithm, so almost all FPGA-based object detection accelerator designs only consider single-stage detection algorithms, such as YOLO [18], YOLOv2 [17,19,24], YOLOv3 [22,23], etc. As a two-stage detection algorithm, Faster R-CNN usually achieves better recognition accuracy for small objects than one-stage detection algorithms [4]. The optimization flow proposed in [16] can implement a Faster R-CNN; however, the peak performance and bandwidth utilization of that design are greatly limited due to its use of a 32-bit floating-point format. The work of [18] presents a high-performance hardware implementation of Faster R-CNN and YOLOv1 [6] on FPGA. However, their work only implements convolution computations on the FPGA, with fully connected layer computations on the CPU. This design is very unfriendly to resource-constrained embedded platforms, because the CPUs of embedded platforms are generally limited in computing power. The reason for the large number of parameters and the large amount of computation in the Faster R-CNN detection algorithm is that it includes a fully connected (fc) layer with a large number of parameters, applied to many region proposals. Each region proposal needs to complete the calculation of the fully connected layer just like a complete picture, and this creates great obstacles in terms of memory and bandwidth, especially when applied to embedded FPGA platforms. In this paper, we have studied how to deploy a complete Faster R-CNN object detection accelerator on an FPGA platform. An efficient and scalable hardware accelerator design for Faster R-CNN object detection based on OpenCL is proposed. Specifically, this paper makes the following contributions:

• We propose an OpenCL-based, deeply pipelined object detection hardware accelerator design, which can implement Faster R-CNN algorithms for different backbone networks (such as vgg16 [2] and resnet50 [3]). To our knowledge, we are the first to systematically analyze and design a Faster R-CNN object detection accelerator.
• We perform hardware-aware algorithm optimizations on the Faster R-CNN network, including quantization, layer fusion, and a multi-batch RoIs detector. Quantizing the network costs less than 1% accuracy loss, and the multi-batch RoIs detector method can help the network increase its speed by up to 11.1×. This greatly improves the utilization of hardware resources and bandwidth, maximizing the performance gains of the final design.
• We introduce an end-to-end design space exploration flow for the proposed accelerator, which can comprehensively evaluate the performance and hardware resource utilization of the accelerator to fully exploit its potential.
• Experimental results show that the proposed accelerator design achieves a peak throughput of 846.9 GOP/s at a working frequency of 172 MHz.
Compared with the state-of-the-art Faster R-CNN accelerator and the one-stage YOLO accelerator, our method achieves 10× and 2.1× inference throughput improvements, respectively.

Preliminaries
This section mainly reviews the Faster R-CNN [4] object detection algorithm and the OpenCL-based heterogeneous computing platform setup.

Review of the Faster R-CNN Algorithm
Following the development of R-CNN [7] and Fast R-CNN [5], Faster R-CNN [4] is to date the most classic of the two-stage object detection algorithms. Faster R-CNN was created to solve the bottleneck of candidate region extraction and to further share the convolution operations. Faster R-CNN was the first object detection algorithm to achieve end-to-end training. More specifically, Figure 1 shows the entire Faster R-CNN object detection flow. Faster R-CNN mainly consists of four essential parts: the backbone network, the region proposal network (RPN), the region of interest (RoI) pooling layer, and the classification and regression network. The first part is the backbone network, which includes the preprocessing of the input image and the forward computation of the CNN. Generally speaking, this consists of a typical CNN network, such as vgg16 [2] or resnet50 [3], which is mainly used to extract image features. The last convolutional layer of the backbone network is used as a shared convolutional layer, and its output feature map is used as the input of the RPN and of RoI pooling. The RPN is the second part of Faster R-CNN, used to generate region proposals, which are the regions of interest in the network. Classical detection methods are very time-consuming when generating region proposals. For example, R-CNN [7] uses the Selective Search (SS) [26] method to generate region proposals. Faster R-CNN uses the RPN to generate region proposals, abandoning the traditional sliding-window and SS methods, which significantly improves the generation speed of region proposals. Specifically, from Figure 1, we can see that the RPN is divided into two routes. The upper route classifies anchors with softmax to obtain the foreground and background (the detection target is foreground), and the lower route calculates the bounding box regression offsets for the anchors to obtain accurate region proposals. The final proposal layer is responsible for integrating foreground anchors and bounding box regression offsets to obtain proposals, and for eliminating proposals that are too small or beyond the image boundary. The third part is the RoI pooling layer, which downsamples the feature maps of the regions of interest generated by the RPN and the shared convolutional layer to further extract the feature maps of the regions of interest and send them to the subsequent network. As shown in Figure 2, spatial_scale is the scaling factor from the original image to the shared convolutional layer (the inverse of the product of all strides from the first convolutional layer to the shared convolutional layer). Our goal is to obtain region proposals on the feature map output by the shared convolutional layer, but the size of the region proposals generated by the RPN is relative to the original image size. Accordingly, we need to multiply the coordinates of the region proposals by the scaling factor spatial_scale to obtain the mapped coordinates of the region proposals and the size of the sub-grid.
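As a minimal sketch of this coordinate mapping (our own illustration; the helper name is ours, and the stride product in the example is the well-known value for vgg16, not a figure from this paper):

def map_roi_to_feature_map(roi_xyxy, spatial_scale):
    # Scale an image-space proposal (x1, y1, x2, y2) onto the shared
    # convolutional feature map, as described for Figure 2
    x1, y1, x2, y2 = roi_xyxy
    return (round(x1 * spatial_scale), round(y1 * spatial_scale),
            round(x2 * spatial_scale), round(y2 * spatial_scale))

# Example: vgg16 applies four 2 x 2 poolings before its shared conv layer,
# so the stride product is 16 and spatial_scale = 1/16
print(map_roi_to_feature_map((64, 32, 320, 240), 1.0 / 16))  # (4, 2, 20, 15)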
As the RoI pooling layer is characterized by a large input and a small output, we applied parallel processing to the input and output in hardware, which can greatly speed up the inference time. This part is explained in detail in Section 3.2.4.

Figure 2. RoI pooling mapping logic (scaling by spatial_scale).

The classification and regression network is the last part of Faster R-CNN. It uses the candidate regions generated by the RPN to calculate, through the fc layer and softmax, the specific category (such as TV, horse, car, etc.) to which each region proposal belongs. Bounding box regression is used again to obtain the position offset of each region proposal, which is used to regress a more accurate object detection proposal. However, each region proposal needs to complete the calculation of the fully connected layer; as shown in Table 1, when the number of region proposals is 300, the operations of the Faster R-CNN (vgg16) [4] model are as high as 271.7 G, so a naive detection accelerator would become very inefficient. Here, we propose the multi-batch RoIs detector method to parallelize and reuse data across different region proposals. This can help networks increase their speed by up to 11.1×, significantly reducing bandwidth utilization and increasing throughput. This part is explained in detail in Section 3.3.3.

OpenCL-Based Heterogeneous Computing Platform Setup
In recent years, with the increasing demand for computing speed and real-time data processing in different fields, the advantages of the FPGA's high degree of parallelism and reconfigurability have gradually emerged. Compared with Register Transfer Level (RTL) languages such as HDL, high-level synthesis (HLS) tools have gradually become dominant in FPGA applications due to their short development cycles and lower research costs. Thanks to its parallel programming model [27], the OpenCL heterogeneous computing framework (a programming language based on C/C++) has recently attracted more and more attention for FPGAs. As shown in Figure 3, in this article we used the OpenCL framework to design a Faster R-CNN FPGA accelerator. Generally speaking, this divides the computing system into two parts: the host side and the device side. (a) The host side (usually a CPU processor) runs a set of application program interfaces (APIs) used to define and control the computing platform; (b) the device side (usually an FPGA, DSP, GPU, etc.) compiles a set of kernel functions to accelerate operations on the FPGA board. On the OpenCL device side, data are first moved from the DDR memory into the global memory, and the device then communicates with the host side from the global memory or the local memory through PCIe.

Software and Hardware Architecture Co-Design Scheme
As shown in Figure 4a, in this article we propose a software and hardware co-design scheme for the Faster R-CNN object detection accelerator. This consists of two parts: the host side and the device side (FPGA). The host side is a series of host task functions running on the CPU, including the Reorg function, the RPN function, the host max pooling function, the Fast R-CNN detection function, the host RoI pooling function, and a task scheduler. The device side is composed of a set of highly parallel kernel functions running on the FPGA, which include the Memory Convolution Read (MCR) kernel, the Memory Convolution Write (MCW) kernel, the Convolution kernel, the Max Pooling kernel, and the RoI Pooling kernel.
The proposed software and hardware co-design scheme places the computationally intensive layers on the hardware acceleration device (FPGA) for execution and places the computationally light but logically complex modules (such as the RPN, Fast R-CNN detection, etc.) on the host side. The specific hardware architecture design and software solutions are described in detail in Sections 3.2 and 3.3.

Overall Architecture
As shown in Figure 4a, the proposed Faster R-CNN object detection hardware architecture includes five acceleration kernels, which can implement a series of basic CNN layers, so that we can obtain object detection accelerators for different backbones by adjusting the network configuration parameters. The MCR and MCW kernels are responsible for reading data from and writing data to the global memory. They are cascaded with the convolution kernel through the OpenCL pipeline, so there is no need to repeatedly transmit the middle-layer feature map data and weight parameters, which greatly improves the bandwidth utilization of the hardware. Figure 4b shows the internal structure of the convolution kernel, which reads the vectorized feature map and weight parameters from the input buffer of the MCR kernel through the OpenCL pipeline. We set the degree of parallelism at the Compute Unit (CU) level, which can efficiently accelerate the convolution kernel. Each CU is responsible for processing a series of sub-operations and includes multiply-accumulate modules, delay registers, and Rectified Linear Unit (ReLU) units.

Convolution Kernel
The multiply-accumulate module is shown in Figure 5. The vectorized input feature map and weight parameters are sent to the multipliers and then output to the delay shift register through the adder tree. The reason for designing the delay shift register is that a plain accumulator self-adds: the read and the write of the accumulator result target the same memory location, causing memory conflicts. When a shift register is added after the accumulator, the partial results form a pipeline between the accumulator and the shift register, which greatly improves the execution efficiency and throughput of our convolution kernel (a behavioral sketch of this idea follows below). Figure 6 shows the accelerator's max pooling kernel, which consists of a shift register, a comparator, and two line buffers. The figure shows a pooling window of size 3 × 3. The kernel first reads data from the global memory and puts them into a shift register of length three. The outputs of the shift register are then compared, and the result of the comparison is sent to the two line buffers. Finally, we compare the data in the two line buffers again, and the output result, which is the maximum over the window, is written back to the global memory. From the perspective of the entire architecture, the designed max pooling kernel only incurs a delay of three clock cycles in the shift register and performs pipelined operations efficiently, improving the efficiency of the max pooling kernel.
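To see why the delay shift register restores the pipeline, consider the following behavioral sketch (a software model of the hardware idea, not the authors' OpenCL source; the depth value is an assumption). Accumulating into DEPTH rotating partial sums means no single register is read and written on consecutive iterations:

DEPTH = 8  # assumed shift-register depth, sized to cover the adder latency

def dot_product_pipelined(a, b):
    # Accumulate into DEPTH interleaved partial sums so that consecutive
    # loop iterations never read and write the same accumulator entry
    shift_reg = [0.0] * DEPTH
    for i, (x, w) in enumerate(zip(a, b)):
        shift_reg[i % DEPTH] += x * w
    return sum(shift_reg)  # final reduction of the partial sums

assert dot_product_pipelined(range(100), range(100)) == sum(i * i for i in range(100))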
RoI Pooling Kernel
As shown in Figure 7, we propose a RoI pooling kernel hardware design based on the NDRange method. First, we use a local work-item to read the region proposal data in the global memory and obtain its four coordinates. Then, the region proposals generated by the RPN (whose sizes are relative to the original image) are mapped onto the last convolutional feature map by multiplying their coordinates by the scaling factor spatial_scale. According to the size of the mapped region proposal, the max pooling operation is performed on the feature map of the last convolutional layer. Finally, the output result is reordered in the RoI pooling kernel according to the degree of parallelism and written back to the global memory. In order to improve the concurrent workgroup processing of the kernel, work-items are assigned to multiple concurrent workgroups, and the size of each workgroup is (K, K, C). For example, if we process 64 region proposals at a time, we can map them to a single 3-D dataset with an NDRange size of (8, 8, C).

As shown in Figure 4a, the data transfer kernels MCR and MCW are responsible for transferring data between the convolution kernel channel and the global memory. Specifically, MCR transfers the feature map and weight parameters stored in the global memory to the input buffer and then on to the convolution kernel. Similarly, MCW is responsible for writing the feature map data output by the convolution kernel back into the global memory, to feed them into the next layer of the network. The data flow over the buffers is realized through the OpenCL pipeline, which makes the data flow between the kernels more efficient. For the MCR and MCW kernels, we propose a parallel circuit design in the on-chip cache. Figure 8 shows the mapping process of the convolutional layer weights and the input and output feature maps in the prefetch window of the MCR/MCW kernel. We design parallelism in three directions: the channel vectorization parallelism PZ_vec along the z direction, the parallelism PY_nc of multiple convolutions within the prefetch window along the y direction, and the parallelism PM_cu along the convolution kernel dimension (a behavioral sketch of these three loop dimensions is given after this subsection). Specifically, since convolution is a sliding-window operation, in order to increase bandwidth utilization we read one prefetch window of data each time, vectorizing the data in the z direction; multiple convolution operations can then be executed in parallel within the prefetch window along the y direction. Finally, along the dimension of the convolution kernels, we process multiple convolution kernels in parallel; for example, we can execute PM_cu convolution kernels concurrently.

Buffer Design
In order to achieve simultaneous access and maximal data sharing for multiple groups of convolutional data in the prefetched feature map window, we propose a single-input, multiple-output line buffer structure to implement the feature map buffer. As shown in Figure 9, the designed feature map buffer consists of a dual-port RAM with one write port and multiple read ports. Each time, we read feature map data equal to the size of the convolution kernel from the prefetch window in Z-order, and the sub-window slides along the X axis by the convolution stride S. In order to avoid memory conflicts caused by repeated reads of the same block address by adjacent convolutions, we read S lines of the feature map each time and write them into the line buffer in turn. The proposed design can significantly improve the bandwidth utilization of feature map transmission.
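The behavioral sketch promised above (for intuition only; it is not the OpenCL kernel, and even divisibility of the channel, filter, and output-row counts by the unroll factors is an assumption) marks which convolution loops PZ_vec, PY_nc, and PM_cu unroll:

import numpy as np

def conv_parallel_model(fmap, weights, S, PM_cu, PY_nc, PZ_vec):
    # fmap: (C, H, W) input; weights: (M, C, K, K); stride S.
    # The pm/py loops and the c-slice represent units running concurrently
    # in hardware each clock cycle.
    C, H, W = fmap.shape
    M, _, K, _ = weights.shape
    OH, OW = (H - K) // S + 1, (W - K) // S + 1
    out = np.zeros((M, OH, OW))
    for m in range(0, M, PM_cu):              # PM_cu filters in parallel
        for y in range(0, OH, PY_nc):         # PY_nc output rows in parallel
            for c in range(0, C, PZ_vec):     # PZ_vec channels vectorized
                for pm in range(PM_cu):
                    for py in range(PY_nc):
                        for x in range(OW):
                            patch = fmap[c:c + PZ_vec,
                                         (y + py) * S:(y + py) * S + K,
                                         x * S:x * S + K]
                            out[m + pm, y + py, x] += np.sum(
                                patch * weights[m + pm, c:c + PZ_vec])
    return out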
Fixed-Point Quantization for Faster R-CNN
Although floating-point numbers can represent data with higher precision, implementing floating-point arithmetic on an FPGA uses more storage and computing resources, which leads to a longer object detection inference time. As network backbones become more complex and deeper, the requirements on inference latency become more and more demanding. Therefore, it is extremely important to compress the network. Recent studies [14,28] have shown that using fixed-point formats (8/16 bit) instead of floating-point data in FPGAs can significantly reduce bandwidth requirements and dependence on on-chip resources. However, this does not mean that we can use an arbitrarily short bit width to represent the weights and activations of the network, because this will cause a serious loss of accuracy. There is currently some research [29][30][31] on ultra-low-precision quantization, such as binarization, which uses only one bit to compress the network model to the extreme; however, the gap with the full-precision model is still large. In this paper, we extend the dynamic-precision data quantization scheme proposed in [14]. Specifically, we performed 8-bit fixed-point quantization on the weights, inputs, and outputs of the convolutional layers and the fully connected layers in Faster R-CNN. Since the data of Faster R-CNN on the host side are floating-point numbers, we need to convert them to fixed-point numbers before they can be sent to the FPGA device. Fixed-point quantization is defined as

Q_f = (−1)^s × 2^(−FL) × Σ_{i=0}^{bw−2} B_i × 2^i,

where Q_f denotes the quantized fixed-point number, s represents the sign bit, FL indicates the fractional length (which may be positive or negative), bw denotes the bit width of the fixed-point number, and B_i denotes the mantissa bits. The goal of quantizing Faster R-CNN is to find, for the model weight parameters and for the input and output of each convolutional or fully connected layer, the optimal fractional lengths under the condition of minimal accuracy loss; these are denoted W_FL, IN_FL, and OUT_FL, respectively. Specifically, as shown in Algorithm 1, we first set the target bit widths bw for the model parameters and for the input and output of each convolutional or fully connected layer of Faster R-CNN (denoted BW_w, BW_in, and BW_out), and then traverse the network layer by layer until the detection accuracy constraint is met. Taking the model parameters as an example, we set the traversal range to [−R + W^i_FL_init, R + W^i_FL_init], where R is a threshold and W^i_FL_init represents the initialized fractional length of the i-th layer's weights, derived from the dynamic range of that layer's weights. The input and output of the convolutional and fully connected layers are handled in the same way as the parameters. Here, we set the traversal range to be very small (usually 3). This does not affect the accuracy loss of Faster R-CNN and greatly improves the efficiency of our experiments. Table 2 shows the quantization results of the Faster R-CNN object detection framework for different network backbones. The model with vgg16 as the backbone network was compressed four times, from the original 137.1 MB to 34.3 MB.
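A minimal sketch of the quantizer implied by this definition (our reconstruction; the error-based FL search below is a cheap stand-in for the detection-accuracy check used in Algorithm 1):

import numpy as np

def quantize(x, bw=8, fl=6):
    # Round x onto a signed fixed-point grid with bit width bw and
    # fractional length fl; representable values are k * 2**(-fl)
    step = 2.0 ** (-fl)
    qmin, qmax = -2 ** (bw - 1), 2 ** (bw - 1) - 1  # two's-complement range
    return np.clip(np.round(x / step), qmin, qmax) * step

# Choose the fractional length that minimizes quantization error for one
# layer's weights (Algorithm 1 instead checks detection accuracy)
w = np.random.randn(512) * 0.1
best_fl = min(range(16), key=lambda fl: np.abs(quantize(w, 8, fl) - w).sum())
print("chosen FL:", best_fl)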
Therefore, fixed-point quantization can compress the model to accelerate the Faster R-CNN object detection accelerator.

Algorithm 1: Fixed-point quantization flow for Faster R-CNN.
Input: the total number of convolutional and fc layers N; the Faster R-CNN model parameters W = {W_1, W_2, ..., W_N}; the traversal range of the fractional length R; the target bit widths of the model parameters, input, and output in each layer BW = {BW_w, BW_in, BW_out}; and the minimal acceptable gap ε_1 between fixed-point precision and 32-bit full precision.
Output: the fractional lengths of the Faster R-CNN model weight parameters and of the input and output of each convolutional and fully connected layer.

Layer Fusion
The convolutional layer convolves the input feature map with the convolution kernels to obtain the output feature map. This is a three-dimensional multiply-accumulate operation, defined as

f_out[n, i, j] = Σ_c Σ_{kx} Σ_{ky} W[n, c, kx, ky] × f_in[c, i·S + kx, j·S + ky],

where f_in[c, i·S + kx, j·S + ky] and f_out[n, i, j] represent the input and output feature maps of the convolutional layer, respectively, and W[n, c, kx, ky] denotes the model parameters. The Batch Normalization (BN) layer is used by many deep models due to its ability to speed up convergence and prevent gradient explosion or vanishing. Generally speaking, the BN layer follows the convolutional layer, which allows us to embed the BN operation directly into the convolutional layer during the inference stage of the network. This effectively reduces the amount of network computation and shortens the network's inference time. Specifically, in the inference stage, the BN layer is defined as

f_BN = γ × (f_in − µ) / sqrt(σ² + ε) + β,

where f_in is the input of the BN layer (the output of the convolutional layer), f_BN represents the output of the BN layer, µ and σ² represent the mean and variance of the mini-batch, γ is the scaling factor, β is the translation factor, and ε avoids division by zero. Expanding the above equation and substituting the convolution output for f_in, the BN layer folds into the convolutional layer as a simple per-channel rescaling of its parameters:

W' = γ × W / sqrt(σ² + ε),  b' = β − γ × µ / sqrt(σ² + ε).

The experimental results show that the layer fusion optimization causes no accuracy loss but provides great benefits in the utilization of hardware resources.

Multi-Batch RoIs Detector
From Figure 10a, we know that the region proposals (denoted RoIs) obtained through the RoI pooling layer are used in the subsequent detection phase (comprising two fully connected layers and two 1 × 1 convolutional layers). As shown in Figure 10b, since each RoI executes the detection phase independently, we propose a multi-batch RoIs detector method that exploits this property. Specifically, assuming that a total of N_rois region proposals are output and that N_rois has an integer square root, we can rearrange them into the frame on the right side of Figure 10b, where N_rois = R_x × R_y. Through this reordering, we transform the originally serial execution of multiple region proposals into a stage that processes multiple region proposals at once. This can help the network achieve a speedup of up to 11.1×, which greatly reduces bandwidth utilization and increases throughput. Section 5.2 demonstrates the effectiveness of the approach.
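Returning to the layer fusion above, a compact sketch of the folding arithmetic (the standard BN-fold, written by us; the variable names are ours):

import numpy as np

def fuse_conv_bn(W, b, gamma, beta, mu, var, eps=1e-5):
    # Fold an inference-time BN layer into the preceding convolution.
    # W: (M, C, K, K) conv weights; b: (M,) conv bias (zeros if absent);
    # gamma, beta, mu, var: (M,) BN parameters and mini-batch statistics
    scale = gamma / np.sqrt(var + eps)        # per-output-channel factor
    W_fused = W * scale[:, None, None, None]  # W' = gamma * W / sqrt(var + eps)
    b_fused = beta + (b - mu) * scale         # b' = beta + (b - mu) * scale
    return W_fused, b_fused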
Performance Modeling
Maximizing the performance of the designed Faster R-CNN object detection accelerator under the constraint of the FPGA's limited resources is a formidable challenge. Full FPGA synthesis runs for a long time (possibly hours) and may fail because of insufficient hardware resources, so compiling every combination of hardware parameters is unwise and infeasible. Therefore, this paper models performance and bandwidth for rapid design space exploration. We assume that the input feature map size of layer l of the network is N_l × P_l × C_l, that the convolution kernel size is K_l × K_l × C_l with stride S_l, and that the output feature map size is N'_l × P'_l × M_l. Figure 8 shows that the designed accelerator has three dimensions of parallelism: the parallelism across different convolution kernels (PM_cu), the parallelism PY_nc of multiple convolutions within the prefetch window along the y direction, and the channel vectorization parallelism PZ_vec along the z direction. Choosing the best combination of the design variables (PM_cu, PY_nc, PZ_vec) maximizes the performance of the Faster R-CNN accelerator. Since a fully connected layer can be regarded as a 1 × 1 convolutional layer, we model the running time of a convolutional or fully connected layer, under the FPGA resource constraint R_use ≤ MAX_RC, as

T_l = #Operations_l / (PM_cu × PY_nc × PZ_vec × Clock),

where #Operations_l = N'_l × P'_l × M_l × K_l × K_l × C_l represents the number of operations in layer l, and Clock indicates the clock frequency at which the accelerator works. R_use represents the FPGA resources consumed by running the designed accelerator, including DSPs, logic resources, and on-chip memory, and MAX_RC represents the total resources actually available on a given FPGA. The total time of the other functional layers is insignificant compared to the total runtime, so the total throughput can be evaluated as

Throughput = Σ_l #Operations_l / Σ_l T_l.

The detailed design space exploration process and resource exploration are elaborated in the experimental Section 5.2, where we compare the resulting theoretical times with the times measured on an actual FPGA board.
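The analytical model can be exercised directly. The sketch below (our rendering of the stated model; the layer dimensions are hypothetical, and one multiply-accumulate is counted as one operation) estimates the per-layer time:

def layer_time_s(n_out, p_out, m_out, k, c_in, pm_cu, py_nc, pz_vec, clock_hz):
    # T_l = #Operations_l / (PM_cu * PY_nc * PZ_vec * Clock)
    operations = n_out * p_out * m_out * k * k * c_in
    return operations / (pm_cu * py_nc * pz_vec * clock_hz)

# Hypothetical vgg16-like layer: 224 x 224 x 64 output, 3 x 3 kernel,
# 64 input channels, with PM_cu = 16, PY_nc = 14, PZ_vec = 16 at 172 MHz
t = layer_time_s(224, 224, 64, 3, 64, 16, 14, 16, 172e6)
print(f"T_l = {t * 1e3:.2f} ms")  # ~3.00 ms for ~1.85 G operations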
Then, we analyzed and counted the consumed hardware resources using the compilation report and selected the configuration with the best theoretical performance that met expectations to execute the complete compilation and synthesis process and, finally, to generate the FPGA bitstream file. Specifically, we first analyzed the Faster R-CNN model from the perspective of performance modeling. For a specific network layer, the speedup gain introduced by the parallel data flow along the y direction is affected by the ratio N_l / PY_nc. Figure 12 shows the speedup gain curves of the Faster R-CNN model under different degrees of parallelism PY_nc. We can see that the speedup gain of the different backbone network models decreases rapidly when PY_nc > 14: for large values of PY_nc, the gain in speed falls off quickly for layers with small feature maps along the y direction. Therefore, we chose PY_nc = 14 as the best configuration for this hardware parameter. As shown in Figure 9, the line buffer of the feature map should accommodate one line of the prefetch window, so the buffer depth FT_d of the feature map should satisfy FT_d ≥ FT_pw · S · C. In this way, we can obtain the optimal FT_d in the model. From Figure 12, we can also see that the proposed multi-batch RoIs detector method greatly improves the speed. For example, when PY_nc = 14, the speedup with this method reached 11.1× (blue line in the figure), while the speedup without it was only 3.2× (green line in Figure 12). Then, we quickly compiled the kernel code for the target FPGA device multiple times and obtained the consumed hardware resource information, such as DSPs, on-chip storage, and logic, from the compilation report. The great advantage of this fast compilation is that candidate configurations can be generated and evaluated quickly through the Python scripting language. Figure 13 shows the design space exploration results of the proposed Faster R-CNN accelerator, with the average execution time per image calculated from Equation (9). We can see that when PM_cu = 16 and PZ_vec = 16, the DSP utilization of the target Arria-10 GX1150 FPGA exceeds 99%; therefore, if we increase the parallelism further, compilation fails. Therefore, for the Faster R-CNN network with the resnet50 backbone, the hardware parameters are configured as PM_cu = 16, PZ_vec = 16 to maximize the performance and resource utilization of the accelerator.

Comparison with Estimated Performance
As shown in Figure 14, we obtained the theoretical execution time of the convolutional and fully connected layers from Equation (9) of the performance model and compared it with the actual execution time on the designed accelerator. From Figure 14, we can observe that the execution efficiency of most convolutional layers is about 80%. The main reason that some layers fall well below this is that their convolution kernel is 1 × 1, which means that the amount of computation on the FPGA chip is too small. In this case, the accelerator spends most of its time transferring data; that is to say, the computing unit on the FPGA chip is waiting most of the time instead of working, resulting in low utilization of the accelerator's core channel pipeline.

Figure 14. Comparison of the computation efficiency of each convolutional layer of Faster R-CNN with the vgg16 [2] backbone. The estimated time is calculated using the theoretical performance model, and the actual time is measured on the Faster R-CNN-vgg16 design. The hardware configuration parameters are PY_nc = 14, PM_cu = 16, PZ_vec = 8.
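The exploration loop itself can be sketched in a few lines: sweep candidate (PM_cu, PZ_vec) pairs at the fixed PY_nc = 14, discard configurations whose estimated resource usage exceeds the device budget, and keep the best modeled throughput. This is only an outline of the procedure described above; `estimate_dsp_usage` stands in for reading resource numbers out of the quick-compile report and is a hypothetical placeholder, and `model_throughput` is the sketch from the previous code block.

```python
def explore(candidates, layer_ops, clock_hz, dsp_budget, estimate_dsp_usage):
    """Return the (PM_cu, PZ_vec) pair with the best modeled throughput that fits the FPGA."""
    best_cfg, best_tp = None, 0.0
    for PM_cu, PZ_vec in candidates:
        if estimate_dsp_usage(PM_cu, PZ_vec) > dsp_budget:
            continue                      # over budget: full synthesis would fail anyway
        tp = model_throughput(layer_ops, PM_cu, 14, PZ_vec, clock_hz)  # PY_nc fixed at 14
        if tp > best_tp:
            best_cfg, best_tp = (PM_cu, PZ_vec), tp
    return best_cfg, best_tp

# Example sweep over power-of-two parallelism values (illustrative only):
# cfg, tp = explore([(m, z) for m in (4, 8, 16, 32) for z in (4, 8, 16)],
#                   ops, 200e6, dsp_budget=1518, estimate_dsp_usage=my_report_reader)
```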
Comparison with the State of the Art
As shown in Table 3, we first compared the proposed accelerator design with state-of-the-art Faster R-CNN accelerators that use the same backbone network on different acceleration platforms. With vgg16 [2] as the common backbone, we achieved the highest detection accuracy, while the detection speed was 3.5× that of work [16] and 2.7× that of work [18]. On this basis, we achieved an overall throughput improvement of 10× over work [18]. Second, we compared against the YOLO family of state-of-the-art one-stage detectors, whose designs employ different types of convolution strategies, including spatial convolution [19], frequency-domain convolution [24], and multiplication-free binary convolution [17]. As shown in Table 1, since the input image resolution of Faster R-CNN is larger than that of the one-stage detectors, the computation of Faster R-CNN is roughly 3 to 6 times larger than that of YOLO. Therefore, even though our Faster R-CNN accelerator has a lower detection speed than the YOLO accelerators, the proposed design achieves a 2.1× throughput improvement compared with work [17]. Finally, the first row of Table 4 shows a comparison with the results achieved on an NVIDIA K40 GPU [4]. Running the same vgg16 [2] backbone, our Faster R-CNN accelerator achieved a 5.7× improvement in power efficiency at the cost of a 1-point drop in detection accuracy. Compared with the GPU, the hardware accelerator designed in this paper, which can be deployed on an FPGA, offers more flexibility and higher practical application value.

Conclusions
This work proposes a high-throughput Faster R-CNN object detection accelerator design. The hardware architecture adopts a series of flexible and scalable kernel pipeline designs to support Faster R-CNN architectures with different backbone networks, such as resnet50 and vgg16. Through 8-bit quantization, layer fusion, and the multi-batch RoIs detector method, the resource utilization of the hardware is greatly improved. We also propose an end-to-end design space exploration, and the experimental results show that our design achieves a 10× improvement in inference throughput compared to state-of-the-art designs. In future work, we intend to sparsify the Faster R-CNN design with pruning to achieve a higher compression ratio and an even more efficient Faster R-CNN detection accelerator.
8,250.2
2022-09-23T00:00:00.000
[ "Computer Science", "Engineering" ]
The Pragma-Dialectical Approach to the Fallacies Revisited This article explains the design and development of the pragma-dialectical approach to fallacies. In this approach fallacies are viewed as violations of the standards for critical discussion that are expressed in a code of conduct for reasonable argumentative discourse. After the problem-solving validity in resolving differences of opinion of the rules of this code has been discussed, their conventional validity for real-life arguers is demonstrated. Starting from the extended version of the theory in which the strategic maneuvering taking place in argumentative discourse is included, the article explains that the violations of the rules that are committed in the fallacies involve derailments of strategic maneuvering. This culminates in a discussion of the exploitation of hidden fallaciousness as an unreasonable way of increasing the effectiveness of argumentative discourse – a vital topic of research in present-day pragma-dialectics. the pragmatic perspective on these argumentative moves as communicative acts with a specific function in the discourse. In order to resolve the difference, the critical discussion that is ideally conducted needs to go through four different stages: a confrontation stage in which the difference of opinion becomes apparent, an opening stage in which the starting points of the resolution process are established, an argumentation stage in which the standpoints at issue are defended against doubt or criticism, and a concluding stage in which it is determined to what extent the resolution has been resolved. Unlike in the 'logical standard approach' to the fallacies discussed by Hamblin (1970), in the pragma-dialectical standard approach the fallacies occurring in argumentative discourse are not automatically viewed as violations of the logical validity standard, but can be violations of a variety of standards. These standards have in common that all of them pertain to the reasonableness of argumentative discourse, i.e., the appropriateness of the argumentative moves that are made for serving the purpose of resolving a difference of opinion on the basis of their merits. Logical validity is one of these standards, but this standard applies only to argumentative moves made in the argumentation stage of the argumentative process which are claimed to involve logical reasoning. To a great many of the argumentative moves made in the argumentation stage this claim usually does not apply, let alone to the argumentative moves made in the other stages. The rules for critical discussion proposed in the pragma-dialectical theory of argumentation ( van Eemeren & Grootendorst 1984;Eemeren & Grootendorst 2004) are incorporated in a less abstract practical 'code of conduct' for reasonable argumentative discourse, which contains standards representing the various kinds of soundness conditions that need to be fulfilled in making argumentative moves aimed at resolving a difference of opinion on the merits. 1 These standards define what argumentative discourse is like in a critical discussion in which reasonableness is fully maintained, i.e., when the discourse is optimally aimed at resolving a difference of opinion on the merits. 
Depending on what maintaining reasonableness in the various stages of the argumentative process involves, in addition to the set of communicative requirements that always apply, different standards apply to the argumentative moves made in the 1 The most recent version of the rules of the code of conduct (van Eemeren 2018: 58-61) is as follows: (1) Freedom Rule: Discussants may not prevent each other from advancing standpoints or calling standpoints into question; (2) Obligation to Defend Rule: Discussants who advance a standpoint may not refuse to defend this standpoint when requested to do so; (3) Standpoint Rule: Attacks on standpoints may not bear on a standpoint that has not actually been put forward by the other party; (4) Relevance Rule: Standpoints may not be defended by non-argumentation or argumentation that is not relevant to the standpoint; (5) Unexpressed Premise Rule: Discussants may not falsely attribute unexpressed premises to the other party or disown responsibility for their own unexpressed premises; (6) Starting Point Rule: Discussants may not falsely present something as an accepted starting point or falsely deny that something is an accepted starting point; (7) Validity Rule: Reasoning that is in an argumentation explicitly and fully expressed may not be invalid in a logical sense; (8) Argument Scheme Rule: Standpoints defended by argumentation in which the reasoning is not explicitly and fully expressed may not be regarded conclusively defended unless the defence takes place by means of appropriate argument schemes that are applied correctly; (9) Concluding Rule: Inconclusive defences of standpoints may not lead to maintaining these standpoints and conclusive defences of standpoints may not lead to maintaining expressions of doubt concerning these standpoints; (10) Language Use Rule: Discussants may not use any formulations that are insufficiently clear or confusingly ambiguous, and they may not deliberately misinterpret the other party's formulations. confrontation stage, the opening stage, the argumentation stage, and the concluding stage. Every argumentative move that is an infringement of any of the rules for critical discussion, whichever party performs it at whatever stage of the discussion, is in this perspective a fallacy (van Eemeren 2018: 62-69). The rationale for calling an argumentative move a fallacy is that it obstructs or hinders the resolution of a difference of opinion based on the merits of the argumentative moves that are made. The instrumentality of the rules for critical discussion in resolving differences of opinion on the merits and exposing counterproductive argumentative moves as fallacious, demonstrates their 'problem-solving validity'. 2 A difference of opinion cannot be resolved by means of argumentative discourse if it has not become clear in the confrontation stage what the difference involves. The first rule of the code of conduct, the Freedom Rule (1), guarantees that in a critical discussion it is always possible to make clear which difference is to be discussed by requiring that standpoints and doubt regarding standpoints can be advanced freely. Even when the difference has been externalized, it cannot be resolved if the party that advanced the standpoint at issue is not prepared to take on the role of a protagonist defending this standpoint. The Obligation to Defend Rule (2) ensures that standpoints put forward and called into question will be defended against critical attacks. 
This defense cannot lead to a resolution of the difference if the antagonist who is to be convinced (or the protagonist who defends it) misrepresents the standpoint that is defended. The Standpoint Rule (3) ensures that this does not happen. A difference of opinion is not resolved on the merits when the defense of the standpoint is based on argumentation that does not support this standpoint or merely on ethos or pathos. In such a case the defense does not comply with the Relevance Rule (4). The resolution of the difference is also obstructed when protagonists do not accept their responsibility for elements they have left implicit in their argumentation or antagonists do not stick to what can be assigned to the protagonist on the basis of a careful reconstruction. In such cases the Unexpressed Premise Rule (5) has been violated. A necessary requirement for an argumentative process that can lead to a resolution of the difference on the merits is also that the starting points are not used in an improper way. When something is treated as a starting point that is in fact not accepted or an accepted starting point is denied, the Starting Point Rule (6) is violated. The difference of opinion is, again, not resolved on the merits when explicit reasoning is advanced in the argumentation that is invalid in a logical sense: this goes against the Validity Rule (7). Neither is a difference of opinion resolved on the merits when the argumentation in defense of the standpoint at issue that is advanced in the argumentation stage does not rely on admissible argument schemes that have been used correctly, so that the defense is in agreement with the Argument Scheme Rule (8). A difference is not resolved if the parties involved do not agree in the concluding stage whether or not the standpoint at issue has been conclusively defended. This means that the Concluding Rule (9) needs to be observed. Finally, a general prerequi-site for resolving a difference on the merits in all stages of the argumentative process is that the participants do not create pseudo differences or pseudo solutions by not expressing or not interpreting their own and each other's intentions as accurately as possible, without relying on not transparent, vague or equivocal formulations or inaccurate, sloppy or biased interpretations. This general prerequisite calls for observation of the Language Use Rule (10). When taken together, the rules for critical discussion that constitute the code of conduct for maintaining reasonableness provide in principle all standards that need to be taken into account in resolving a difference of opinion on the merits. Every argumentative move made in the discourse that violates in the stage in which it is made a rule for critical discussion is a fallacy. In Argumentation, Communication, and Fallacies (van Eemeren & Grootendorst 1992: 93-127), a list of violations is provided that is, as a matter of course, not complete but gives a good indication of the different kinds of fallacies that can be committed in the various stages of the argumentative process to be passed through in resolving a difference of opinion. In this way, the various types of fallacies are in the pragma-dialectical approach systematically connected with the functional variety of standards that need to be observed in resolving a difference of opinion on the merits. The fallacies distinguished in the literature can in this way be characterized more systematically and consistently. 
Certain fallacies that were in the traditional categories only lumped together are shown to have nothing in common and are clearly distinguished from each other while genuinely related fallacies that were earlier separated are brought together. The fallacy known as argumentum ad verecundiam, for example, has variants that are violations of different standards, which are in fact separate types of fallacies. In one variant a party makes an appeal to authority at the opening stage of the argumentative process by giving a personal guarantee of the correctness of a standpoint ("You can take it from me that every war leads to another war"). In this case, the fallacy is a violation of the Obligation to Defend Rule (2) that a party that advanced a standpoint is obliged to defend this standpoint if this is desired. Another variant occurs when a party is prepared to defend its standpoint in the argumentation stage but does so by only parading its own qualities. This fallacy constitutes a violation of the Relevance Rule (4), which outlaws non-argumentation and argumentation merely based on ethos or pathos. Yet another variant occurs when a party appeals in the argumentation stage to an authority that is in fact not an expert in the field the standpoint at issue relates to. The latter kind of fallacy constitutes a violation of the Argument Scheme Rule (8), which prescribes that the source referred to in argumentation from authority should indeed be an authority in the area concerned. Other examples of variants of a fallacy which are not of the same kind when viewed from the perspective of resolving differences of opinion on the merits concern the fallacy traditionally viewed as an argumentum ad populum. In one variant, this fallacy constitutes a violation of the Relevance Rule (4), in another variant it is a fallacy that violates the Argument Scheme Rule (8). In contradistinction, the fallacy traditionally regarded as a variant of the argumentum ad verecundiam and the fallacy traditionally regarded as a variant of the argumentum ad populum which are both violations of the Argument Scheme Rule (8), are in fact variants of the same kind of fallacy when viewed from the perspective of resolving differences of opinion on the merits. In addition, the pragma-dialectical approach to fallacies allows us to identify fallacies that earlier went unnoticed. So far, these obstacles to resolving a difference of opinion on the merits now distinguished as "new" fallacies were not named. They include declaring a standpoint sacrosanct (violation of the Freedom Rule, 1), evading the burden of proof by immunizing a standpoint against criticism (violation of the Obligation to Defend Rule, 2), denying an unexpressed premise (violation of the Unexpressed Premise Rule, 5), falsely presenting something as a common starting point (violation of the Starting Point Rule, 6), falsely presenting a premise as selfevident (violation of the Starting Point Rule, 6), denying an accepted starting point (violation of the Starting Point Rule, 6), and making an absolute of the success of the defence (violation of the Concluding Rule, 9). Conventional Validity of the Theory Next to ensuring problem-solving validity by being instrumental in resolving differences of opinion on the merits, the rules for critical discussion also need to be intersubjectively valid to the parties involved in the resolution process. Otherwise they cannot be helpful in doing their actual job of resolving the difference in actual argumentative discourse. 
As we made clear in Sect. 1, the problem-solving validity of the pragma-dialectical standards for reasonableness can be analytically determined on theoretical grounds. Their intersubjective validity, however, which lends them conventional validity in argumentative practices, can only be determined empirically. 3 In 1996 we therefore started the comprehensive experimental research project Conceptions of Reasonableness, which was completed in 2008 (see van Eemeren, Garssen & Meuffels 2009), to determine empirically whether the standards of reasonableness expressed in the rules for critical discussion of the pragma-dialectical code of conduct will be intersubjectively approved by the people involved in a difference of opinion. The research question of this project was: to what extent are the norms the arguers claim to use in evaluating argumentative discourse in agreement with the pragma-dialectical standards? Instead of asking the research subjects directly for their views about the rules for critical discussion, in each of our tests the respondents were asked to judge the reasonableness of discussion contributions in which a specific discussion rule was violated. Otherwise they might be forced to make pronouncements about abstractly formulated matters that are too theoretical for them. In the tests, a variety of discussion fragments consisting of short dialogues between two interlocutors were presented to the participants. In order to create a baseline to make comparisons, the participants also had to judge discussion fragments which resembled the fallacious cases in appearance in which no rule for critical discussion was violated: non-fallacious direct personal attacks, for instance, were paired with abusive ad hominem fallacies. For all our tests, we constructed clear-cut paradigmatic cases of the fallacies, always without loaded content. The argumentative dialogues concerned were put in a domestic, political or scientific context. The participants were invariably asked to judge the reasonableness of a particular contribution to the exchange that was offered to them. They had to indicate their judgment on a 7-point Likert scale, ranging from very unreasonable (= 1) to very reasonable (= 7), with neither unreasonable nor reasonable (= 4) in the middle. 4 In this way, we examined 24 different types of fallacies which are violations of rules for critical discussion spread over all four discussion stages. The rules violated are the Freedom Rule (Rule 1), the Obligation to Defend Rule (Rule 2), the Argument Scheme Rule (Rule 8), and the Concluding Rule (Rule 9). The general aim of the tests was to check to what extent the norms applied by ordinary arguers in judging the reasonableness of argumentative moves match the standards of reasonableness expressed in the rules for critical discussion. 5 Table 1 provides an overview of the reasonableness scores for all violations included in our research project. In all cases the results were quite consistent. As can be seen in Table 1, except for the tu quoque variant of the argumentum ad hominem, all fallacious contributions to the discussion are rated below the neutral point (4) on the 7-point scale, and hence considered unreasonable. The non-fallacious counterparts of the fallacies are generally considered reasonable. 
The respondents prove to be consistent in making a clear, and statistically significant, distinction between the discussion moves that are unreasonable according to the pragma-dialectical standards because they involve a fallacy and the discussion moves that are not fallacious. Fallacious discussion moves are in general considered unreasonable by the respondents and non-fallacious moves are considered reasonable. These outcomes provide strong support for the hypothesis that the pragma-dialectical discussion rules are intersubjectively valid and are therefore entitled to be recognized as conventionally valid. Fallacies as Derailments of Strategic Manoeuvring The dialectical goals pursued in real-life argumentative discourse by the argumentative moves that are made always have a rhetorical analogue. When arguers are aiming to resolve a difference of opinion on the merits in actual argumentative practices, they are engaged in strategic manoeuvring in which they try to realize effectiveness through reasonableness ( van Eemeren, Garssen & Meuffels 2012a). In principle, they thus combine, in every argumentative move they make in the discourse and every 4 In all cases, the setup of these experiments involved a repeated measurement design combined with a multiple message design. This means that in the questionnaire multiple instantiations of each fallacy were included, together with non-fallacious counterparts. 5 In a number of cases, a replication study was carried out -sometimes to check certain interpretations, sometimes to exclude alternative explanations, sometimes to optimize the external validity of the testing by choosing a different group of respondents. The replications were carried out with respondents from the Netherlands, the UK, Germany, Spain, and Indonesia. mode of strategic manoeuvring employed in making it, their dialectical and their rhetorical aims. In order to cover the strategic manoeuvring taking place in argumentative discourse, in addition to the dialectical dimension of pursuing reasonableness, a rhetorical dimension of pursuing effectiveness is included in the extended pragmadialectical theorizing ( van Eemeren 2010). The strategic manoeuvring taking place in the argumentative moves made in the discourse consists of trying to keep a balance between these two dimensions of argumentative discourse in combining aiming for effectiveness with maintaining reasonableness ( van Eemeren & Houtlosser 2002: 138-141). When any of the rules for critical discussion constituting the code of con- duct for reasonable argumentative discourse is violated in the process, the strategic manoeuvring derails into fallaciousness ( van Eemeren 2010: 187-212; van Eemeren 2018: 120-123). In actual argumentative practices, fallacious derailments of strategic manoeuvring may occur in the empirical equivalents of all four stages of the argumentative process. In the initial situation, i.e., the empirical equivalent of the confrontation stage of a critical discussion in the argumentative discourse, the arguers will aim to define the difference of opinion at issue in the way most suitable to their purposes. In the equivalent of the opening stage in the discourse, they will try to establish the starting points that agree most with their own purposes -material as well as procedural starting points. In the equivalent of the argumentation stage, they will advance argumentation consisting of arguments selected for being optimally instrumental in realizing their purposes. 
In the equivalent of the concluding stage, finally, they will try to reach the conclusion that comes closest to the outcome they are out to achieve. In all stages, in the strategic manoeuvring something can go wrong that involves a violation of a rule of the code of conduct for reasonable argumentative discourse, and therefore amounts to committing a fallacy. The extended pragma-dialectical approach enables us to do justice to the treacherous character of the fallacies and their potential persuasiveness in actual argumentative practices: it makes it easier to explain how the various kinds of fallacies "work" and why they may go unnoticed, so that they can be effective without being reasonable. Above all, the concept of strategic manoeuvring can help us understand why in certain cases sound and fallacious argumentative moves are hard to distinguish. One of the causes of this problem is that in argumentative discourse there is a presumption of reasonableness that is almost automatically attached to all elements in the discourse (Jackson 1995). The fact that in argumentative discourse reasons are offered in support of standpoints makes some people consider it likely that the treatment of these standpoints will be well-considered. By providing reasons, the arguers who advance argumentation are regarded as indicating that they respect the Principle of Reasonableness ( van Eemeren & Houtlosser 2009 in van Eemeren 2015: 632;van Eemeren 2010: 32, 253). Another reason why the difference between sound and fallacious argumentative moves is in some cases not immediately clear is that in their manifestations sound and fallacious argumentative moves are not distinguished from each other by their appearance. In each particular case, both of them are representatives of the same mode of strategic manoeuvring and in that sense they are one of a kind. Sound as well as fallacious uses of a personal attack, for instance, are manifestations of one and the same mode of strategic manoeuvring. In studying the role of argumentative discourse in resolving differences of opinion on the merits, however, it is necessary to make a clear separation between the two. Therefore, in the pragma-dialectical theorizing we make a terminological distinction between the mode of strategic manoeuvring that is called a personal attack and the fallacious use of a personal attack which is called an argumentum ad hominem. The same kind of terminological distinction is also made, for instance, between the neutral label argument from authority and the fallacious version of it, an argumentum ad verecundiam. The fallacious versions of the argumentative moves are generally given a more technical, often latinized name. Just like in the case of a personal attack, if the required soundness criteria have been complied with, the use of an argument from authority can be a reasonable and effective mode of strategic manoeuvring. However, the strategic manoeuvring derails when, for instance, the authority appealed to does not pertain to the topic at issue, cannot be attributed to the source referred to or when this source is wrongly quoted or on a point where having it is not relevant (Woods & Walton 1989: 15-24; van Eemeren & Grootendorst 1992: 136-137). In these cases one or more of the critical questions associated with the use of an argument from authority cannot be answered satisfactorily, so that the Argument Scheme Rule (Rule 8) is violated and an argumentum ad verecundiam has been committed. 
Another cause of problems in distinguishing between sound and fallacious argumentative moves is that certain modes of strategic manoeuvring have a continuum of possible realizations that goes from argumentative moves that are unquestionably sound to argumentative moves that are indisputably fallacious. There may also be manifestations of a mode of strategic manoeuvring that are situated in-between the two extremes, so that it is not always immediately clear to which category they belong. One can imagine, for instance, that there are personal attacks or arguments from authority that are neither clearly sound nor clear specimens of an argumentum ad hominem or an argumentum ad verecundiam -not because they are hard to interpret, but because they do not represent one of the outspoken extremes of the modes of strategic manoeuvring concerned that are for the sake of didactic clarity highlighted in textbooks. If only for fear of loss of effectiveness of their discourse, arguers do not want their argumentative moves to be perceived as fallacious. In order to prevent this from happening, they will be inclined to try to stretch the scope of the soundness of the mode of strategic manoeuvring concerned to such an extent that their fallacious argumentative move is made to appear sound. An argumentum ad hominem, for instance, is then presented as if it were a personal attack that is fully justified in the circumstances in which it is made and relevant to resolving the difference of opinion at issue. Or an argumentum ad misericordiam is portrayed as an appeal to pity that is fully justified in the context concerned and decisive. In this sense, the inclination to strategically hide fallaciousness in argumentative discourse by moving the goal posts is another cause of the problem that sound and fallacious argumentative moves are sometimes hard to distinguish. The deceptiveness that makes fallacies sometimes hard to detect is still increased by the fact that argumentative moves that are sound in the institutional context of the one communicative activity type (or cluster of communicative activity types) may be fallacious in the institutional context of another communicative activity type (or cluster of communicative activity types). This means that the criteria for deciding whether an argumentative move is to be regarded sound may vary to some extent depending on the institutional macro-context of the communicative domain in which the argumentative discourse takes place. As a consequence, the general standards for reasonable argumentative discourse incorporated in the rules for critical discussion that constitute the pragma-dialectical code of conduct need to be specified in accordance with the requirements of the communicative activity type, or cluster of communicative activity types, to which they are applied. This contextual differentiation may be another cause of the problems involved in distinguishing between sound and fallacious argumentative moves in actual argumentative practices. 6 Exploiting Hidden Fallaciousness The results of our comprehensive empirical project 'Conceptions of reasonableness' described in Sect. 2 indicate that when confronted with clear cases of violations of rules for critical discussion people consistently judge these argumentative moves as unreasonable. 
The fact that most violations of the pragma-dialectical rules are emphatically rejected as unreasonable contributions to the discussion leads to the question how it can be explained that fallacies so often occur in oral and written argumentative discourse without being recognized as such by the listeners or readers. In real life argumentative discourse, fallacies will generally not be committed straightforwardly and usually they will not appear in a clear way. This makes it much harder to recognize them straightaway as unreasonable discussion moves. Before they can be detected, it first needs to be determined to what extent the discourse concerned can be reconstructed as a critical discussion aimed at resolving a difference of opinion on the merits. For a better understanding of the problem of recognizing fallacies "in the wild", we turn to the theory of strategic manoeuvring ( van Eemeren 2010). Arguers themselves know that fallacious moves are unreasonable and may be detected by others. In order to prevent this from happening, they will sometimes try to manoeuvre strategically in such a way that their fallacious argumentative moves do not look unreasonable. As we already explained, certain modes of strategic manoeuvring, such as a personal attack or an appeal to authority, are not automatically unreasonable by themselves, but only when in using them in the case concerned a rule for reasonable argumentative discourse has been violated. A personal attack, for instance, is in principle a legitimate argumentative move if it conveys the content of a standpoint that is fully appropriate in the institutional macro-context concerned: a politician may certainly call another politician unreliable without being considered unreasonable when making this personal attack does not involve an infringement of the Freedom Rule. In the same vein, it is also quite legitimate to criticize an arguer who wrongfully refers to himself or herself as an expert regarding a certain topic for misusing authority argumentation and who is thus committing an argumentum ad verecundiam. In certain cases, strategic manoeuvring by using the fallacy of an abusive ad hominem can have a reasonable appearance because it mimics a legitimate reaction to such an abuse of authority argumentation ( van Eemeren, Garssen & Meuffels 2012b). When arguers present themselves wrongfully as experts in a certain field or claim to be trustworthy when in fact they are not, it is perfectly reasonable to attack them personally about that. Due to the existence of such special cases, it is not always immediately clear whether a personal attack must be regarded as reasonable criticism or as a fallacious argumentative move which is an argumentum ad hominem. In two experiments we have tested empirically the hypothesis that abusive ad hominem attacks are considered substantially less unreasonable by people when they are presented as critical reactions to authority argumentation in which the person attacked parades as an authority (van Eemeren, Garssen & Meuffels 2012b). In both experiments this hypothesis was confirmed. In a similar vein, the argumentum ad baculum can also be camouflaged. This fallacy counts as a violation of the Freedom Rule because by making a threat the arguer prevents the interlocutor from advancing a standpoint or doubt. 
While in its straightforward presentation this fallacy is recognized right away, the arguer can present it as reasonable by making the threat look like a well-intended advice to the interlocutor, so that its fallacious character is harder to recognize. This is made possible because the speech acts of advising and threatening share a couple of important characteristics that enable the arguer to present the threat in their strategic manoeuvring as a warning for something they are not really responsible for ( van Eemeren, Garssen & Meuffels 2015: 316). The hypothesis we tested experimentally is that arguers will be more inclined to consider an argumentum ad baculum as reasonable that could also be seen as a piece of advice rather than a straightforward argumentation ad baculum. The results of our empirical research indicated that this was indeed the case ( van Eemeren, Garssen & Meuffels 2015: 316-319). In the argumentum ad consequentiam a non-legitimate step is made from a normative premise to a descriptive standpoint. By means of this argumentative move, the arguer tries to defend a descriptive standpoint by pointing at the negative consequences of the state of affairs mentioned in the standpoint. The argumentum ad consequentiam comes in two variants. The first one is based on a causal claim: the standpoint is true because it leads to a positive outcome. This variant resembles in its structure the pragmatic argument scheme. The difference is that in pragmatic argumentation the standpoint at issue is by definition prescriptive and in case of the argumentum ad consequentiam descriptive. In strategic manoeuvring to camouflage the unreasonableness of an argumentum ad consequentiam, the standpoint is presented ambiguously, so that it could be descriptive as well as prescriptive. This can be done by using phrasings such as x should (not) be seen as y or x should (not) be regarded as y. The second variant resembles ad absurdum argumentation. In this fallacious variant, the reason that is given is: X is true, because if X is not true, then Y is true, and Y is not desirable, while in the reasonable ad absurdum argumentation the reasoning is as follows: X is true because if X is not true, then Y is true and Y is not true. The difference between the fallacious and the non-fallacious versions lies in the (bridging) premise: Y is not desirable vs. Y is not true. The descriptive proposition in the reasonable version becomes normative in the fallacious version. To avoid detection of the fallacy, in the strategic manoeuvring a phrasing is therefore sometimes chosen that allows for both interpretations (untrue and undesirable) (Garssen 2016: 251). In an experiment we tested the claim that the camouflaged pragmatic variant of the argumentum ad consequentiam would be regarded less unreasonable than the straightforward argumentum ad consequentiam. Like in our other investigations of hidden fallaciousness, we found no empirical grounds for rejecting our hypothesis ( van Eemeren & Garssen 2019: 331-332). So far we focused in the hidden fallaciousness project on the principle of mimicry: by way of an ambiguous presentation, the fallacy is given a reasonable appearance. The arguer guilty of using a fallacy then counts on the presumption of reasonableness and hopes that the listener of reader will choose the reasonable interpretation. Other strategies that may have similar effects will have to be investigated. An example of such a strategy is making it hard for the interlocutor to ask relevant critical questions. 
Contextualisation of the Study of the Fallacies The standards for reasonable argumentative discourse that need to be observed to prevent fallacies from occurring are in the pragma-dialectical theorizing supposed to be agreed upon in the empirical counterpart of the opening stage of a critical discussion. This does not mean that in practice there is always an actual discussion between the participants about these procedural starting points: in argumentative reality, especially in strongly conventionalized communicative activity types such as a civil lawsuit, in a great many cases the crucial starting points, including certain evaluation procedures, are already given when people start taking part in a particular type of communicative activity. 7 Generally participation is in fact only possible on the condition that the participants know and respect these starting points. Nevertheless, in a great many communicative activity types there will still be some room left for procedural deliberations, particularly in informal communicative activity types where a formally approved conventionalization of the communicative activity type is lacking, such as a chat between friends. As a rule, such deliberations will start from the frameworks of starting points already familiar to the participants from their upbringing at home, in school or as members of a specific social or professional institution, i.e., from their primary or secondary socialization. 8 The evaluation procedure for argumentative discourse may in the various institutional contexts be implemented in slightly different ways. For the detection of fallacies it is therefore necessary to carefully examine all soundness criteria pertaining to a certain mode of strategic manoeuvring in order to determine whether they need to be specified for being applied to a particular argumentative practice -and if so, in exactly which way. 9 For the various communicative domains this may result in the articulation of (slightly) distinct sets of specific soundness criteria for particular modes of strategic manoeuvring in specific communicative activity types or clusters of communicative activity types. In the communicative activity type of a criminal trial in the legal domain, for example, the critical questions pertaining to appealing to authority which must be dealt with in the discourse will differ in some respects from the critical questions pertaining to appealing to authority in a scholarly paper or in other communicative activity types. In a criminal trial it is, for instance, appropriate to ask whether a witness whose testimony is used to support a juridical claim is indeed reliable, while in an academic debate and in certain other communicative activity types asking such a critical question would be inappropriate. It always depends on the institutional point that is to be realised in a certain communicative activity type, its conventionalisation and the institutional preconditions going with it, which requirements are pertinent to that communicative activity type. In our future research concerning the fallacies, next to further studies examining the various kinds of fallacies and formulating their soundness conditions by making explicit the critical questions associated with carrying out the modes of strategic manoeuvring concerned, we need to situate these modes of strategic manoeuvring in the institutional macro-contexts of the various kinds of communicative activity types of argumentative reality in which they have been used. 
By reflecting upon the rationale of their conventionalisation captured in the institutional points of these communicative activity types and the practical circumstances of the institutional macro-context in which they are carried out, we can determine the primary and secondary institutional preconditions constraining them and find out how the general reasonableness standards are to be implemented in the institutional macro-context concerned. In this way we can give a precization of the specific standards of reasonableness that need to be taken into account in that context in order to resolve a difference of opinion on the merits. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
8,463
2023-02-13T00:00:00.000
[ "Philosophy" ]
Blockade of growth hormone secretagogue receptor 1A signaling by JMV 2959 attenuates the NMDAR antagonist, phencyclidine-induced impairments in prepulse inhibition Rationale Schizophrenic-spectrum patients commonly display deficits in preattentive information processing as evidenced, for example, by disrupted prepulse inhibition (PPI), a measure of sensorimotor gating. Similar disruptions in PPI can be induced in rodents and primates by the psychotomimetic drug phencyclidine (PCP), a noncompetitive inhibitor of the NMDA receptor. Mounting evidence suggests that the hunger hormone ghrelin and its constitutively active receptor influences neuronal circuits involved in the regulation of mood and cognition. Objectives In the present series of experiments, we investigated the effects of ghrelin and the growth hormone secretagogue receptor (GHS-R1A) neutral antagonist, JMV 2959, on acoustic startle responses (ASR), PPI, and PCP-induced alterations in PPI. Results Intraperitoneal (i.p.) administration of ghrelin (0.033, 0.1, and 0.33 mg/kg) did not alter the ASR or PPI in rats. Conversely, i.p. injection of JMV 2959 (1, 3, and 6 mg/kg), dose dependently decreased the ASR and increased PPI. Pretreatment with JMV 2959 at a dose with no effect on ASR or PPI per se, completely blocked PCP-induced (2 mg/kg) deficits in PPI while pretreatment with the highest dose of ghrelin did not potentiate or alter PPI responses of a sub-threshold dose of PCP (0.75 mg/kg). Conclusion These findings indicate that the GHS-R1A is involved in specific behavioral effects of PCP and may have relevance for patients with schizophrenia. Introduction The growth hormone secretagogue receptor (GHS-R1A), initially an orphan receptor activated by growth hormonereleasing peptides and nonpeptidyl ligands such as GHRP-6 and MK-0677, is expressed in discrete areas throughout the central nervous system (Howard et al. 1996;Guan et al. 1997). The receptor, which mediates several biological activities, including secretion of GH and stimulation of appetite and serves to maintain energy homeostasis, is constitutively active when expressed in cell lines and is activated by its endogenous gastric-derived ligand ghrelin (Howard et al. 1996;Kojima et al. 1999;Holst et al. 2003). In recent years, there has been an increasing interest in the modulatory effect of ghrelin and the GHS-R1A on central dopamine and glutamate signaling (Abizaid et al. 2006;Jerlhag et al. 2006Jerlhag et al. , 2011aJiang et al. 2006;Kern et al. 2012;Goshadrou et al. 2013;Ghersi et al. 2014). Ghrelin and GHSR-1A ligands, thus, have been shown to regulate feeding behavior, memory function, and cognition via dopamine and/or glutamate signaling Jacoby and Currie 2011;Jerlhag et al. 2011a;Goshadrou et al. 2013;Ghersi et al. 2014). Furthermore, ghrelin augments while GHSR-R1A antagonist attenuate cocaine-and amphetamine-induced locomotor stimulation and accumbal dopamine release (Wellman et al. 2008;Jerlhag et al. 2010) as well as the rewarding properties of alcohol (Jerlhag et al. 2009) consistent with effects on dopamine signaling. Patients with psychiatric disease, in particular schizophrenia-spectrum patients, are commonly unable to filter incoming sensory stimuli, which led to the hypothesis that these patients are afflicted by impairments in information processing (Braff et al. 1978;Freedman et al. 1987;Braff 1993). 
Deficits in preattentive information processing/gating mechanisms, as measured for example by prepulse inhibition (PPI) of the acoustic startle reflex, are found in patients with psychiatric disease (Braff et al. 1978). Alterations in PPI responses has also been demonstrated following the administration of various psychotomimetic drugs affecting central dopaminergic and glutamatergic signaling, such as amphetamine in humans and rodents (Mansbach et al. 1988;Hutchison and Swift 1999) or the N-methyl-D-aspartate (NMDA)-receptor antagonists phencyclidine (PCP) in monkeys and rodents (Bakshi et al. 1994;Javitt and Lindsley 2001). In contrast to the effects of PCP on PPI responses in rodents and monkeys, other NMDA receptor antagonists, such as ketamine and memantine, increase PPI responses when tested in humans (Duncan et al. 2001;Swerdlow et al. 2009). In humans, PCP mimics the symptomology of schizophrenia in the sense that it encompasses both negative and positive symptoms as well as cognitive dysfunctions (Allen and Young 1978). Phencyclidine also causes behavioral abnormalities in experimental animals that are similar to those observed in patients with schizophrenia (Moghaddam and Adams 1998). PCPinduced deficit in sensorimotor gating has been shown to be antagonized by both atypical antipsychotics such as clozapine (Bakshi et al. 1994) and recently also by the new dopamine stabilizer aripirazole (Fejgin et al. 2007), underlining the interaction between dopaminergic and glutamatergic signaling in schizophrenia. Given the dopamine modulatory effects of ghrelin/GHSR-1A signaling combined with the neuroanatomical overlap found between the central expression of the GHSR-1A and areas recognized to be involved in sensorimotor gating (Guan et al. 1997;Swerdlow et al. 2001), prompted us to investigate the involvement of GHS-R1A and ghrelin signaling on NMDA receptor-mediated deficits in prepulse inhibition, a model of schizophrenia, in rodents. Materials and methods Animals Two-hundred-gram male Sprague-Dawley rats (B & K Universal AB, Sollentuna, Sweden) were used in the study. Upon arrival, the animals were housed in groups of four and allowed to acclimatize for 1 week before the start of the experiment. They were maintained under a 12/12-h light/dark cycle (lights on at 0600 hours), constant humidity (50 %), and temperature (20±1°C) and had free access to standard food pellet (Lactamin, Vadstena, Sweden) and tap water. The study was approved by the local Ethics Committee at the University of Gothenburg, Sweden. Drugs, doses, and administration All drugs used were dissolved in a physiological saline solution (0.9 % NaCl) in the morning on the day of the experiment and administered in a volume of 2 ml/kg via intra peritoneal (i.p.) injections. Acyl ghrelin (Tocris, Bristol, UK) was given in a dose range (0.033, 0.1, and 0.33 mg/kg) that previously has been shown to affect feeding responses, central c-Fos expression, and behavior (Hewson and Dickson 2000;Wren et al. 2000;Davis et al. 2007). The doses of the selective GHSR-1A neutral antagonist, JMV 2959 (a gift from AeternaZentaris GmBH, Frankfurt, Germany), used for the JMV 2959 dose response were 1, 3, and 6 mg/kg. The 3 and 6 mg/kg doses have previously been shown to inhibit ghrelin and fastinginduced feeding and affect various behavioral responses ). Phencyclidine hydrochloride (PCP, Sigma, St. Louis, MO, USA) was given at a dose of 2 mg/kg, which is known to produce robust disruptions of PPI ). 
The sub-threshold dose of PCP used was 0.75 mg/kg which has no or very weak effects on PPI . For the interaction studies between PCP and JMV 2959 or ghrelin, a dose of 2 mg/kg of JMV 2959 and 0.33 mg/kg of ghrelin was used. Prepulse inhibition apparatus Acoustic startle was recorded using a MOPS 3 startle response recording system (Metod och Product Svenska AB, Sweden). The animals were placed in small Plexiglas® cages (10×5.5× 6 cm) that were suspended at the top in a piston. The movements of the animal in the cage were registered by a piezoelectric accelerometer connected to the piston, and the signal generated was digitized by a microcomputer that also controlled the delivery of acoustic stimuli. Startle amplitude was defined as the maximum signal amplitude occurring 8-30 ms after the startle-eliciting stimulus, hence taking response latency into account. Four cages were used simultaneously and each cage was housed in a dimly lit and sound-attenuated cabinet (52×42×38 cm). The cages were calibrated for equal sensitivity prior to testing and each animal was always tested in the same cage at subsequent tests in order to minimize intertrial variation. The acoustic stimuli consisted of white noise, which was delivered by two high-frequency loudspeakers built into the ceiling of the cabinet. Prepulse inhibition paradigm Each test session was initiated with an 8-min adaptation period containing only white background noise at 62 dB followed by series of five startle pulse-alone trials and five prepulsealone trials. These initial pulse-alone trials served only to accommodate the animals to the sudden change in stimulus conditions and were omitted from the data analysis and the prepulse-alone trials were analyzed to ensure that they did not evoke any startle responses on their own. The animals were then subjected to a pseudo-randomized combination of three prepulse-alone trials for each prepulse intensity, 45 pulse-alone trials and 15 prepulse-pulse trials for each of the three prepulse intensities. Trials were separated by 5-to 15-s intervals and the test sessions lasted approximately 24 min including the adaptation period. The startle pulse was set to 105 dB and prepulse intensities to 9, 12, and 15 dB above background. Duration of acoustic stimuli was set to 20 ms for both prepulses and startle pulses and the interstimulus interval was set to 40 ms. Experimental design All animals used in the experiments were initially subjected to a pretest in the startle apparatus without drug treatment to ensure that they expressed basal startle activity and PPI. Animals with deviant acoustic startle response (ASR) or PPI in the pretest were excluded from the experiments. Prior to all sessions, the animals were put in the test room in the morning at least 1 h prior to the test in order to habituate them to the test environment. Experiment 1: JMV 2959 dose response The animals (n=15) were randomly assigned to an initial treatment dose or vehicle and subsequently received all the different doses tested in a counter balanced design. Each test was separated by a 3-to 4-day-long washout period. The rats were given the injection of JMV 2959 (or vehicle) 17 min prior to being placed in the startle cages (i.e., 25 min prior to the first pulse). Experiment 2: JMV 2959 in combination with PCP In order to examine the putative interaction between JMV 2959 and PCP, rats (n=23) were pretreated with JMV 2959 (2 mg/kg) or vehicle 10 min prior to the injection of PCP (2 mg/kg) or vehicle. 
Seven minutes following the last injection, the animals were placed in the startle cages for the adaptation period and subsequent PPI testing. Each animal received all four treatment combinations (sal/sal, JMV2959/sal, sal/PCP, and JMV2959/PCP) in a counter-balanced design. Each test was separated by a 3- to 4-day-long washout period.

Experiment 3: ghrelin dose response The ghrelin dose response test was performed in the same way as experiment 1, except that the animals (n=24) received ghrelin injections.

Experiment 4: ghrelin in combination with low-dose PCP Animals (n=12 in each group) were assigned to one of the following four treatment combinations: sal/sal, ghrelin/sal, sal/PCP, and ghrelin/PCP. The animals were first pretreated with ghrelin (0.33 mg/kg) or vehicle (25 min prior to the first pulse) and 10 min later injected with either PCP (0.75 mg/kg) or vehicle. Seven minutes following the last injection, the animals were placed in the startle cages for adaptation and subsequent PPI testing.

Data and statistical analysis The mean response amplitude for pulse-alone trials (P) was calculated for each test. This measure was used in the statistical analysis to assess drug-induced changes in the acoustic startle response (ASR). The mean response amplitude for prepulse-pulse trials (PP) was also calculated and used to express the prepulse inhibition (PPI) according to the following formula:

PPI (%) = 100 − [(PP / P) × 100]

Experiments 1 and 2 were analyzed by a two-way repeated measures ANOVA with treatment dose and prepulse intensity as within-subject factors. A three-way mixed model ANOVA with pretreatment and treatment as between-subject factors and prepulse intensity as a within-subject factor was applied when analyzing data from experiment 3, while experiment 4 was analyzed using a three-way repeated measures ANOVA with pretreatment, treatment, and prepulse intensity as within-subject factors. There was a significant main effect of prepulse intensity in each experiment (data not shown); however, as no prepulse intensity × pretreatment × treatment interaction was obtained (i.e., the effect of prepulse intensity did not vary significantly between testing conditions), PPI data were collapsed across prepulse intensities and are presented as an average %PPI throughout. The acoustic startle response and intertrial activity (ITA) were analyzed using a two-way repeated measures or mixed model ANOVA with pretreatment and treatment as within- or between-subject factors, depending on the experiment type. A Bonferroni post hoc analysis was performed to compare individual treatment combinations or doses.

Results Using prepulse noise level (9, 12, or 15 dB above background level) and the different treatment combinations or doses as within-subject factors revealed no statistically significant interaction between treatments and noise level for the dose response studies and the JMV 2959/PCP interaction study (ghrelin dose response: F(6,138)=1.05, ns; JMV 2959 dose response: F(6,84)=2.37, ns; JMV 2959/PCP: F(2,44)=1.40, ns). Furthermore, the interaction study investigating the possible influence of ghrelin treatment on sub-threshold PCP revealed no statistically significant interactions between noise level and treatment (ghrelin/PCP: F(2,76)=2.9, ns). Consequently, changes in prepulse level were considered not to significantly alter the effect of treatment on PPI, and hence noise levels were collapsed across intensities and the resultant variable was used in the statistical analysis.
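As a small worked illustration of the %PPI formula above, the following Python snippet computes percent prepulse inhibition from mean startle amplitudes; the amplitude values are made up for illustration and are not data from the study.

```python
import numpy as np

def percent_ppi(pulse_alone, prepulse_pulse):
    """PPI (%) = 100 - (mean prepulse-pulse amplitude / mean pulse-alone amplitude) * 100."""
    P = np.mean(pulse_alone)
    PP = np.mean(prepulse_pulse)
    return 100.0 - (PP / P) * 100.0

# Illustrative startle amplitudes in arbitrary units (not from the study):
print(round(percent_ppi([820, 790, 760], [300, 340, 310]), 1))  # -> about 59.9% inhibition
```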
Results

Using the prepulse noise level (9, 12, or 15 dB above background) and the different treatment combinations or doses as within-subject factors revealed no statistically significant interaction between treatment and noise level for the dose-response studies and the JMV 2959/PCP interaction study (ghrelin dose response: F(6,138)=1.05, ns; JMV 2959 dose response: F(6,84)=2.37, ns; JMV 2959/PCP: F(2,44)=1.40, ns). Furthermore, the interaction study investigating the possible influence of ghrelin treatment on sub-threshold PCP revealed no statistically significant interaction between noise level and treatment (ghrelin/PCP: F(2,76)=2.9, ns). Consequently, changes in prepulse level were considered not to significantly alter the effect of treatment on PPI, and hence noise levels were collapsed across intensities and the resultant variable was used in the statistical analysis.

Treatment with the ghrelin receptor antagonist JMV 2959 dose-dependently decreased the startle response (F(3,42)=4.4, p<0.01) and increased %PPI (F(3,42)=3.9, p<0.05) in the prepulse inhibition paradigm. The alteration in the startle response was mainly due to a 27% decrease in startle at the highest dose of JMV 2959 (6 mg/kg) compared to vehicle (p<0.05, Bonferroni post hoc test) (Fig. 1a). Even though the ANOVA revealed a dose-dependent increase in the %PPI response, no differences between individual JMV 2959 treatment doses or vehicle could be found (Fig. 1b). No overall difference in intertrial activity was found in the ANOVA (F(3,42)=0.44, ns) (Fig. 1c).

Fig. 1 Effects of increasing doses of the ghrelin antagonist JMV 2959 (1-6 mg/kg, i.p.) on acoustic startle (a), prepulse inhibition of acoustic startle (b), and intertrial activity (c). JMV 2959 was injected 25 min before the first pulse. The data are presented as mean values ± SEM. *p<0.05 compared to saline treatment (statistically significant ANOVA followed by Bonferroni post hoc test)

Fig. 2 Interaction between JMV 2959 (2 mg/kg, i.p.) and PCP (2 mg/kg, i.p.) on acoustic startle (a), prepulse inhibition of acoustic startle (b), and intertrial activity (c). JMV 2959 was injected 25 min and PCP 15 min before the first pulse. The rats were tested every 3-4 days in a randomized order until they had received all treatments. The data are presented as mean values ± SEM. ***p<0.001 (statistically significant ANOVA followed by Bonferroni post hoc test)

Discussion

Herein, we show that modulation of the GHS-R1A alters acoustic startle responses (ASR) as well as prepulse inhibition (PPI) of the ASR. Specifically, JMV 2959, a highly selective GHS-R1A antagonist, dose-dependently decreased ASR and increased %PPI. In addition, JMV 2959 completely blocked PCP-induced deficits in PPI at a dose that by itself did not significantly affect either ASR or PPI. On the contrary, peripheral treatment with ghrelin did not have any effect on ASR or PPI and did not potentiate the PCP-induced effects on PPI.

Recent findings have shown that modulation of GHS-R1A signaling alters dopamine release and dopamine turnover in both subcortical and prefrontal areas of the brain, and that antagonism at the GHS-R1A can block dopamine release in response to drugs of abuse. Our finding that JMV 2959 dose-dependently increased %PPI and decreased ASR in animals could possibly be explained by the modulatory effects of ghrelin and GHS-R1A signaling on dopamine transduction. Interestingly, heterodimerization of GHS-R1A with both D1 and D2 receptors facilitates dopamine transduction in vitro (Jiang et al. 2006; Kern et al. 2012). Furthermore, GHS-R1A is coexpressed with D2 receptors in hypothalamic neurons and with D1 receptors in the hippocampus and striatum (Jiang et al. 2006; Kern et al. 2012), which would support the notion that the effects of JMV 2959 on ASR and PPI could be mediated via modulatory effects on dopamine signaling. Similar to the effects of JMV 2959 in increasing %PPI and decreasing ASR, previous studies have shown that atypical antipsychotics that modulate dopamine receptor activity, such as aripiprazole and clozapine, as well as D2 receptor antagonists such as haloperidol, dose-dependently increase %PPI and decrease ASR in the acoustic startle and prepulse inhibition paradigm (Depoortere et al. 1997; Fejgin et al. 2007).
In our study, we were not able to find any effects of ghrelin treatment on %PPI and ASR, which might suggest that the GHS-R1A, rather than ghrelin itself, has an important role in regulating ASR and PPI. Supportively, the GHS-R1A has been shown to be constitutively active, and the GHS-R1A/D2 heterodimer allosterically modifies D2-mediated calcium mobilization in the absence of the endogenous ligand ghrelin; these effects were blocked by both D2 and GHS-R1A antagonism (Holst et al. 2003; Kern et al. 2012). Furthermore, previous findings on alcohol intake and alcohol-induced reward also suggest GHS-R1A-mediated, rather than circulating ghrelin-mediated, involvement in the regulation of dopamine transduction (Jerlhag et al. 2009, 2011a, b, 2014). In the present study, we found that treatment with JMV 2959 completely blocked the effects of PCP on %PPI. Phencyclidine, a noncompetitive antagonist of the N-methyl-D-aspartate (NMDA) receptor, is known to induce a state that closely resembles schizophrenia in humans, including both positive and negative symptoms as well as cognitive dysfunctions (Yesavage and Freman 1978; Javitt and Zukin 1991), and has previously been used to investigate behaviors associated with schizophrenia in experimental subjects. In animals, PCP and other noncompetitive antagonists of the NMDA receptor, such as MK-801, are widely used to model aspects of the human disease, including sensorimotor-gating deficits. Recent findings have shown that ghrelin treatment can enhance NMDA receptor signaling through intracellular phosphorylation of the NR1 subunits of the NMDA receptor via the cAMP/PKA pathway, indicating that ghrelin, possibly through the GHS-R1A, may interact with NMDA receptor signaling (Isokawa 2013a, b). However, we did not see any potentiating effect of ghrelin on sub-threshold PCP treatment in %PPI responses, indicating that ghrelin is not involved in PCP-induced deficits of sensorimotor gating. Supportively, no associations between ghrelin levels and schizophrenia have been found in humans (Tsai et al. 2011). The interaction between ghrelin (putatively via GHS-R1A signaling) and the NMDA receptor may still, however, partially explain the beneficial effects of JMV 2959 on PCP-induced disruption of the PPI response. There is strong evidence for an interaction between dopamine and glutamate signaling in schizophrenia (Carlsson and Carlsson 1990; Bakshi et al. 1994; Bakshi and Geyer 1995; Fejgin et al. 2007). Thus, it has been put forth that a hyperdopaminergic condition could be a result of cortical NMDA receptor hypofunction, with reduced inhibition of midbrain dopamine neuron firing as a consequence that may precipitate positive symptoms (Kegeles et al. 2000). The abolition of PCP-induced deficits in PPI by JMV 2959 could thus, in addition to direct modulation at the NMDA receptor, also be a result of a stabilizing effect of JMV 2959 on dopamine signaling. The present findings are based on systemic administration, and further investigation of the neuroanatomical regulation of gating mechanisms by GHS-R1A signaling, using parenchymal brain injections of GHS-R1A ligands, is needed. A deeper understanding of how GHS-R1A antagonists, such as JMV 2959, alter dopamine transduction and GHS-R1A heterodimerization with dopamine receptors will also give a better understanding of the effects of central GHS-R1A signaling on behavior in general and on schizophrenia and schizophrenia-related behaviors specifically.
4,401.6
2015-08-29T00:00:00.000
[ "Biology", "Psychology", "Medicine" ]
An analytical model for the growth of quantum dots on ultrathin substrates

The self-assembly of heteroepitaxial quantum dots on ultrathin substrates is analyzed within the context of small perturbation theory. Analytical expressions are derived for the dependence of the quantum dot separation on the substrate thickness. It is shown that the substrate thickness is critical in determining this separation when it is below the intrinsic material length scale of the system. The model is extended to simultaneous dot growth on both sides of the substrate. It is shown that vertically anticorrelated structures are preferred, with an increase in the dot separation of 15% above that found in the one-sided case. © 2011 American Institute of Physics. doi:10.1063/1.3583447

The instability of elastically strained heteroepitaxial thin films leads to the self-assembly of quantum dot (QD) nanostructures. Known as the Asaro-Tiller-Grinfeld (ATG) instability [1], the size and spacing of the QDs are determined by the competition between the energetic driving forces (strain and surface energy) and the kinetics of the material transport process (surface diffusion). Recently, it has been shown that some control of the QD size and spacing can be achieved by varying the thickness of the substrate [2,3]. The energetics of the system are strongly affected when the thickness of the substrate is comparable with the QD length scale. For ultrathin substrates, such as nanomembranes [2,4], significant stress relief can arise due to bending of the substrate. The local strain field beneath the QDs is also modified due to the proximity of the lower free surface. If QDs are simultaneously deposited on both sides, they can interact through the substrate to create a vertically anticorrelated QD structure [2]. In this paper the ATG instability model is extended to consider the growth of QDs on ultrathin substrates, in order to quantify how the substrate thickness affects the QD size and spacing and their spatial correlation.

The problem is formulated within the context of a kinetic variational principle [5], whereby the functional Π has contributions from the dissipation potential, Ψ, which represents the work done in material transport, and the rate of change of the Gibbs free energy of the system, Ġ, which provides the driving force for the evolution. The optimal kinetic field is that which renders the variational functional stationary, δΠ = 0.

First we consider a single planar epitaxial film of thickness h₀ (see Ref. 6 for morphological effects). This is subject to a perturbation of amplitude A(t) and wavelength λ, such that h(x,t) = h₀ + A(t) sin(2πx/λ), as shown in Fig. 1. Mass conservation requires that d(j_s)/dx + v_n = 0, where j_s is the material surface flux and the normal velocity of the surface v_n ≈ ḣ for small slopes (A ≪ λ). The dissipation potential (per unit wavelength), Eq. (2), is then expressed in terms of the surface diffusion coefficient D_s. The Gibbs free energy has surface energy and elastic strain energy contributions; for an isotropic surface energy density, γ₀, and a surface elastic strain energy density, w(x), the rate Ġ follows [5], where κ(x) ≈ d²h/dx² is the surface curvature.
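For reference, the relations just quoted can be gathered into one display. This is a restatement of the text in its own symbols; writing Π = Ψ + Ġ is our reading of the "contributions" statement.

```latex
\begin{align*}
  h(x,t) &= h_0 + A(t)\,\sin(2\pi x/\lambda), \\
  \frac{d j_s}{dx} + v_n &= 0, \qquad v_n \approx \dot h \quad (A \ll \lambda), \\
  \Pi &= \Psi + \dot G, \qquad \delta \Pi = 0 .
\end{align*}
```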
The elasticity problem consists of two parts: global bending/stretching of the substrate by the initially planar film, and a local sinusoidal contribution from the thin-film perturbation, as shown in Fig. 1. These are solved separately and combined through linear superposition. The film in Fig. 1 is subject to a (compressive) mismatch strain ε_m < 0, which is relaxed by bending of the assembly. Let the thickness of the substrate be 2c, and assume that the variation of the normal strain in the x-direction through the thickness is linear, such that ε_x(y) = ε₀ + Ky, where K is the (elastic bending) curvature of the assembly and ε₀ is a uniform elongation. The total elastic strain energy (per wavelength) then follows, with the normal stress in the x-direction σ_x = E*ε_x, where E* = E/(1 − ν²) is given by the Young's modulus E and Poisson's ratio ν of the substrate and film (assumed to be the same). This energy is minimized for values of ε₀ and K that depend on the film-to-substrate thickness ratio ρ = h₀/2c.

The total strain on the top surface of the film is now the sum of the uniform strain due to extension/bending and the perturbation Δε_x to the strain field due to the waviness of the film profile. This latter contribution can be modeled for small slopes as a distributed surface traction, whereby the surface shear stress component σ_xy = −E*ε_x⁺(dh/dx) [7]. The internal stresses must satisfy the standard biharmonic equation [8]; the resulting stress function, Eq. (7), involves α = 2π/λ, with the constants C_i chosen to satisfy the boundary conditions σ_y = 0 and σ_xy = −E*ε_x⁺Aα cos(αx) on the top surface (at y = c′) and σ_y = 0 and σ_xy = 0 on the bottom surface (at y = −c′), where c′ = c + h/2 is the half-thickness of the assembly. Hence the strain perturbations on the upper (+) and lower (−) surfaces are obtained, Eq. (8), in terms of functions g₁ and g₂ of Λ = 2πc′/λ, which define the substrate thickness effect in relation to the dot spacing, λ. These functions are shown in Fig. 2. For ultrathin substrates (Λ ≪ 1), g_i → B_i/Λ, where B₁ = 1/2 and B₂ = 3/2. For thicker substrates (Λ ≫ 1), g_i → 1 and the ATG result [5] is recovered, as required. The strain energy entering Eq. (11) is w(ρ,Λ) = w₀ f(ρ)g(Λ), which accounts for both the global bending/extension of the substrate, f(ρ), and the local sinusoidal deformation field due to the film perturbation, g(Λ).

The variational functional (1) is defined by (2) and (11). The growth rate that minimizes this functional is Ȧ = βA, where β is given by Eq. (12). The observed (fastest growing) wavelength will be that which maximizes β; this stationary condition has no simple solution, but the wavelength can be approximated in the thin and ultrathin regimes, with B = 2(B₁ + B₂) = 4 and λ₀ = 2πγ₀/(3w₀) the ATG (thick substrate) wavelength. The range of validity of these approximations can be seen in Fig. 3, which suggests that taking the smaller of the two wavelengths gives a reasonable rough estimate. It can be seen that the wavelength becomes increasingly dependent on the substrate thickness as the thickness c decreases below λ₀/2f(ρ). Such a decrease in the QD spacing with decreasing substrate thickness has been observed by Ritz et al. [2] for Ge QDs on 6 and 23 nm Si nanomembranes.
Substitution in Eq. (12) gives the optimal growth rate, where β₀ = 81D_s w₀⁴/γ₀³ is the classic ATG growth rate.

The analysis is now extended to consider simultaneous growth of QDs on both sides of the substrate. Introduce a film on the lower (−) surface with a surface profile h(x,t) = h₀ + A(t) sin(αx + δ), where δ is the phase difference between the top and bottom growth modes. The strain perturbation on the top surface (+) is the superposition of the two fields, (8), combined for the two-sided case as in (18). The rest of the analysis proceeds as before. There is no bending of the assembly (due to symmetry), and hence f(ρ) = 1/(1 + 2ρ) from pure extension. The strain field interactions give B = B₁(1 + cos δ) + B₂(1 − cos δ). The expected growth mode is the one that maximizes the growth rate (17) and hence maximizes B. The minimum value, B = 1, occurs when the dots are vertically correlated (δ = 0). The maximum value, B = 3, occurs when the dots are anticorrelated (δ = π). These two scenarios are shown in Fig. 4. Hence it is always expected that dots will be vertically anticorrelated when they can influence each other through the ultrathin substrate. This corresponds with experimental observations [2]. It is predicted that the wavelength for two-sided growth will be 2/√3 = 1.15 times the value in the single-sided case.

In conclusion, an analytical model for the stability of epitaxial films on an ultrathin substrate has been presented. The substrate thickness effect is encapsulated in the functions f(ρ) (stress relief due to bending/extension) and g(Λ) (stress change due to surface modulation), where ρ = h₀/2c is the ratio of the film thickness (h₀) to the substrate thickness (2c) and Λ = 2πc′/λ depends on the ratio of the total thickness (c′) to the QD separation (λ). It is found that the substrate thickness is critical in determining this wavelength for c < λ/2. Furthermore, it is shown that vertically anticorrelated QD structures are preferred for two-sided growth and that the dot separation will be 15% greater than in the single-sided case.

FIG. 1. (Color online) Geometry of the film-substrate assembly (top). The epitaxial film experiences a mismatch strain ε_m and a sinusoidal surface perturbation of amplitude A and wavelength λ. The assembly can relieve the strain by a combination of global elongation and bending (middle), with the addition of a sinusoidal component due to the local surface waviness (bottom). Contours show the magnitude of the stress σ_x in the substrate.

FIG. 2. (Color online) The two functions, g₁ and g₂, define the surface strain perturbation, Eq. (8), and depend on the substrate thickness relative to the dot separation, Λ.
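As a quick numerical check of the two-sided result above, the sketch below evaluates B(δ) with the quoted constants B₁ = 1/2 and B₂ = 3/2 and recovers the 15% increase in dot separation. The 1/√B wavelength scaling is our reading of the thin-regime approximation; the constants themselves come from the text.

```python
import numpy as np

B1, B2 = 0.5, 1.5                          # ultrathin-substrate limits of g_i

def B(delta):
    """Strain-interaction factor for two-sided growth (from the text)."""
    return B1 * (1 + np.cos(delta)) + B2 * (1 - np.cos(delta))

delta = np.linspace(0.0, np.pi, 181)
d_star = delta[np.argmax(B(delta))]
print(f"B(0) = {B(0.0):.0f}, B(pi) = {B(np.pi):.0f}, max at delta = {d_star:.2f} rad")

# One-sided growth has B = 2*(B1 + B2) = 4.  If the selected wavelength
# scales as 1/sqrt(B) in the ultrathin regime (our assumption), the
# two-sided dot separation exceeds the one-sided one by:
B_one, B_two = 2 * (B1 + B2), B(np.pi)
print(f"lambda_two / lambda_one = {np.sqrt(B_one / B_two):.3f} (= 2/sqrt(3))")
```

The maximum of B at δ = π reproduces the preference for anticorrelated growth, and the wavelength ratio evaluates to 1.155, the quoted 15% increase.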
2,207.8
2011-04-22T00:00:00.000
[ "Physics", "Materials Science" ]
Adiabatic optical parametric oscillators: steady-state and dynamical behavior

We study singly-resonant optical parametric oscillators with chirped quasi-phasematching gratings as the gain medium, for which adiabatic optical parametric amplification has the potential to enhance conversion efficiency. This configuration, however, has a modulation instability which must be suppressed in order to yield narrowband output signal pulses. We show that high conversion efficiency can be achieved by using either a narrowband seed or a high-finesse intracavity etalon. © 2012 Optical Society of America

OCIS codes: (190.4970) Parametric oscillators and amplifiers; (190.3100) Instabilities and chaos; (190.4410) Nonlinear optics, parametric processes; (190.4360) Nonlinear optics, devices; (230.4320) Nonlinear optical devices; (320.7110) Ultrafast nonlinear optics.

References and links
1. H. Suchowski, V. Prabhudesai, D. Oron, A. Arie, and Y. Silberberg, "Robust adiabatic sum frequency conversion," Opt. Express 17, 12731–12740 (2009).
2. C. R. Phillips and M. M. Fejer, "Efficiency and phase of optical parametric amplification in chirped quasi-phase-matched gratings," Opt. Lett. 35, 3093–3095 (2010).
3. C. Heese, C. R. Phillips, L. Gallmann, M. M. Fejer, and U. Keller, "Ultrabroadband, highly flexible amplifier for ultrashort midinfrared laser pulses based on aperiodically poled Mg:LiNbO3," Opt. Lett. 35, 2340–2342 (2010).
4. M. Charbonneau-Lefort, B. Afeyan, and M. M. Fejer, "Optical parametric amplifiers using chirped quasi-phase-matching gratings I: practical design formulas," J. Opt. Soc. Am. B 25, 463–480 (2008).
5. G. Imeshev, M. M. Fejer, A. Galvanauskas, and D. Harter, "Pulse shaping by difference-frequency mixing with quasi-phase-matching gratings," J. Opt. Soc. Am. B 18, 534–539 (2001).
6. L. Gallmann, G. Steinmeyer, U. Keller, G. Imeshev, M. M. Fejer, and J. Meyn, "Generation of sub-6-fs blue pulses by frequency doubling with quasi-phase-matching gratings," Opt. Lett. 26, 614–616 (2001).
7. M. Charbonneau-Lefort, B. Afeyan, and M. M. Fejer, "Competing collinear and noncollinear interactions in chirped quasi-phase-matched optical parametric amplifiers," J. Opt. Soc. Am. B 25, 1402–1413 (2008).
8. M. Charbonneau-Lefort, M. M. Fejer, and B. Afeyan, "Tandem chirped quasi-phase-matching grating optical parametric amplifier design for simultaneous group delay and gain control," Opt. Lett. 30, 634–636 (2005).
9. K. A. Tillman and D. T. Reid, "Monolithic optical parametric oscillator using chirped quasi-phase matching," Opt. Lett. 32, 1548–1550 (2007).
10. K. A. Tillman, D. T. Reid, D. Artigas, J. Hellström, V. Pasiskevicius, and F. Laurell, "Low-threshold femtosecond optical parametric oscillator based on chirped-pulse frequency conversion," Opt. Lett. 28, 543–545 (2003).
11. J. A. Armstrong, N. Bloembergen, J. Ducuing, and P. S. Pershan, "Interactions between light waves in a nonlinear dielectric," Phys. Rev. 127, 1918–1939 (1962).
12. W. R. Bosenberg, A. Drobshoff, J. I. Alexander, L. E. Myers, and R. L. Byer, "93% pump depletion, 3.5-W continuous-wave, singly resonant optical parametric oscillator," Opt. Lett. 21, 1336–1338 (1996).
13. C. R. Phillips and M. M. Fejer, "Stability of the singly resonant optical parametric oscillator," J. Opt. Soc. Am. B 27, 2687–2699 (2010).
14. C. R. Phillips, J. S. Pelc, and M. M. Fejer, "Continuous wave monolithic quasi-phase-matched optical parametric oscillator in periodically poled lithium niobate," Opt. Lett. 36, 2973–2975 (2011).
15. S. T. Yang, R. C.
Eckardt, and R. L. Byer, "Power and spectral characteristics of continuous-wave parametric oscillators: the doubly to singly resonant transition," J. Opt. Soc. Am. B 10, 1684–1695 (1993).
16. L. E. Myers and W. R. Bosenberg, "Periodically poled lithium niobate and quasi-phase-matched optical parametric oscillators," IEEE J. Quantum Electron. 33, 1663–1672 (1997).
17. A. Henderson and R. Stafford, "Spectral broadening and stimulated Raman conversion in a continuous-wave optical parametric oscillator," Opt. Lett. 32, 1281–1283 (2007).
18. J. Kiessling, R. Sowade, I. Breunig, K. Buse, and V. Dierolf, "Cascaded optical parametric oscillations generating tunable terahertz waves in periodically poled lithium niobate crystals," Opt. Express 17, 87–91 (2009).
19. R. Sowade, I. Breunig, I. Cámara Mayorga, J. Kiessling, C. Tulea, V. Dierolf, and K. Buse, "Continuous-wave optical parametric terahertz source," Opt. Express 17, 22303–22310 (2009).
20. A. V. Smith, R. J. Gehr, and M. S. Bowers, "Numerical models of broad-bandwidth nanosecond optical parametric oscillators," J. Opt. Soc. Am. B 16, 609–619 (1999).
21. A. V. Smith, "Bandwidth and group-velocity effects in nanosecond optical parametric amplifiers and oscillators," J. Opt. Soc. Am. B 22, 1953–1965 (2005).
22. G. Arisholm, "Quantum noise initiation and macroscopic fluctuations in optical parametric oscillators," J. Opt. Soc. Am. B 16, 117–127 (1999).
23. G. Arisholm, "General analysis of group velocity effects in collinear optical parametric amplifiers and generators," Opt. Express 15, 6513–6527 (2007).
24. G. Arisholm, G. Rustad, and K. Stenersen, "Importance of pump-beam group velocity for backconversion in optical parametric oscillators," J. Opt. Soc. Am. B 18, 1882–1890 (2001).
25. R. White, Y. He, B. Orr, M. Kono, and K. Baldwin, "Transition from single-mode to multimode operation of an injection-seeded pulsed optical parametric oscillator," Opt. Express 12, 5655–5660 (2004).
26. A. Yariv, Quantum Electronics, 3rd ed. (Wiley, 1989).
27. G. Agrawal, Nonlinear Fiber Optics, 4th ed. (Academic Press, 2007).
28. C. R. Phillips, C. Langrock, J. S. Pelc, M. M. Fejer, I. Hartl, and M. E. Fermann, "Supercontinuum generation in quasi-phasematched waveguides," Opt. Express 19, 18754–18773 (2011).
29. C. R. Phillips, C. Langrock, J. S. Pelc, M. M. Fejer, J. Jiang, M. E. Fermann, and I. Hartl, "Supercontinuum generation in quasi-phasematched LiNbO3 waveguide pumped by a Tm-doped fiber laser system," Opt. Lett. 36, 3912–3914 (2011).
30. M. Conforti, F. Baronio, and C. De Angelis, "Nonlinear envelope equation for broadband optical pulses in quadratic media," Phys. Rev. A 81, 053841 (2010).
31. R. A. Baumgartner and R. L. Byer, "Optical parametric amplification," IEEE J. Quantum Electron. 15(6), 432 (1979).
32. M. D. Crisp, "Adiabatic-following approximation," Phys. Rev. A 8, 2128–2135 (1973).
33. G. Luther, M. Alber, J. Marsden, and J. Robbins, "Geometric analysis of optical frequency conversion and its control in quadratic nonlinear media," J. Opt. Soc. Am. B 17, 932–941 (2000).
34. O. Gayer, Z. Sacks, E. Galun, and A. Arie, "Temperature and wavelength dependent refractive index equations for MgO-doped congruent and stoichiometric LiNbO3," Appl. Phys. B 91, 343–348 (2008).
35. K. L. Vodopyanov, "Optical THz-wave generation with periodically-inverted GaAs," Laser Photon. Rev.
2(1–2), 11–25 (2008).
36. G.-L. Oppo, M. Brambilla, and L. A. Lugiato, "Formation and evolution of roll patterns in optical parametric oscillators," Phys. Rev. A 49, 2028–2032 (1994).
37. J. E. Schaar, "Terahertz Sources Based On Intracavity Parametric Frequency Down-Conversion Using Quasi-Phase-Matched Gallium Arsenide," Ph.D. thesis, Stanford University (2009).
38. M. J. Lawrence, B. Willke, M. E. Husman, E. K. Gustafson, and R. L. Byer, "Dynamic response of a Fabry-Perot interferometer," J. Opt. Soc. Am. B 16, 523–532 (1999).

Introduction

Chirped (aperiodic) quasi-phasematching (QPM) gratings have received attention for many optical frequency conversion schemes, including difference frequency generation (DFG), optical parametric amplification (OPA), sum frequency generation (SFG), and related applications [1–10]. Their main role so far has been to broaden the phasematching bandwidth compared to conventional periodic QPM gratings, without the need to use short crystals with reduced conversion efficiency. This broadening can be understood through a simple spatial-frequency argument: due to dispersion, there is a mapping between phasematched frequency and grating k-vector; in chirped QPM gratings, the grating k-vector is swept smoothly over the range of interest, thereby broadening the spatial Fourier spectrum of the grating and hence the phasematching bandwidth [1,5–7].
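The spatial-frequency picture can be made concrete with a few lines of numerics: the first-order spectrum of a periodic grating is a narrow line at its k-vector, while a linear sweep of the k-vector spreads that line over the swept range. The sketch below is our own illustration with arbitrary parameters, not a calculation from the paper.

```python
import numpy as np

# Compare the first-order spatial spectrum of a periodic versus a linearly
# chirped QPM grating (all numbers illustrative, units arbitrary).
L, N = 50.0, 2**16
z = np.linspace(0.0, L, N, endpoint=False)
K0, dKdz = 20.0, 0.2                     # center grating k-vector, chirp rate

for name, phi in [("periodic", K0 * z),
                  ("chirped ", K0 * z + 0.5 * dKdz * (z - L / 2) ** 2)]:
    g = np.sign(np.cos(phi))             # +/-1 domain-reversal pattern
    spec = np.abs(np.fft.rfft(g)) ** 2
    k = 2 * np.pi * np.fft.rfftfreq(N, d=z[1] - z[0])
    sel = (k > K0 - 10.0) & (k < K0 + 10.0)
    s = spec[sel] / spec[sel].max()
    width = np.ptp(k[sel][s > 0.1])      # crude -10 dB bandwidth
    print(f"{name} first-order width ~ {width:.2f} (chirp span {dKdz * L:.1f})")
```

The chirped grating's first-order width approaches the full swept range dK/dz × L, while the periodic grating's width is limited to roughly 2π/L.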
More recently it has been shown that nonlinear interactions in chirped QPM gratings can exhibit high efficiencies due to an adiabatic following process [1,2]. For three-wave mixing processes involving input pump and signal waves and a generated idler wave, the ratio of output to input pump intensity asymptotes to 0 with respect to both the input signal and pump intensities, i.e., the interaction asymptotes to 100% pump depletion. This behavior occurs for interactions that are both plane-wave and monochromatic, provided that the QPM grating is sufficiently chirped. For non-diffracting (near-field) beams, all transverse spatial components interact independently and as plane waves. Thus, for interactions involving non-diffracting beams, all spatial components of the pump beam can asymptote to 100% depletion with respect to the pump and signal powers.

For conventional interactions involving birefringent phasematching or periodic QPM gratings, there is a mapping between the signal and pump intensity and the propagation distance required to fully deplete the pump, defined as L_NL; after L_NL, back-conversion occurs, transferring energy back to the pump from the signal and idler waves [11]. As a result, complete conversion across the spatial profile of non-diffracting Gaussian beams cannot usually be achieved. Back-conversion can also limit the conversion efficiency even for plane-wave interactions involving pulses with bandwidths narrow enough that group velocity mismatch (GVM) and group velocity dispersion (GVD) effects are negligible, since in this case all temporal components interact independently in a single pass through the nonlinear crystal. In the spatial domain, the conversion efficiency can be enhanced by using near-confocal focusing [12]; in the time domain, it can be enhanced by using pulses with durations short enough that GVM is non-negligible. For wide (non-diffracting) beams, beam shaping (e.g., flat-top beam profiles) is required in order to yield an L_NL that is independent of transverse spatial position. The use of chirped QPM gratings offers a way of removing the above limitations on the conversion efficiency of pulsed beams (in both the spatial and temporal domains) without the need for small beam areas, short pulse durations, or beam shaping.

One example where this efficiency enhancement could be useful is in nanosecond optical parametric oscillators (OPOs). Optical parametric oscillators have been studied extensively in many regimes, including for continuous wave (CW) pumping [12–19] and for ns pump pulses [20–25]. Chirped QPM gratings have been used as the gain medium in OPOs [9,10], but their properties have not yet been fully explored in the context of adiabatic conversion. In order to reach the high conversion efficiencies predicted by the plane-wave CW theory (which we discuss in Section 3) and to generate clean output pulses, the OPO signal wave must also be modulationally stable against noise, so that upon successive trips around the optical cavity it converges to a pulsed beam with a near-transform-limited spatiotemporal profile. It has been shown that OPOs using periodic QPM gratings or birefringent phasematching exhibit a temporal modulation instability (MI) [13]. In this paper, we show that chirped QPM OPOs are even more susceptible to modulation instabilities, discuss the suppression of the MI with additional intracavity elements, and numerically simulate the dynamics of chirped QPM ns-pumped OPOs.

Coupled wave equations

In this section, we introduce coupled-wave equations suitable for analyzing the steady state, modulation instability, and nonlinear dynamics of OPOs. The equations we use are quite general, allowing for arbitrary QPM grating profiles and including an additional backwards THz wave which, in recent work [14,18,19], has been shown to influence OPO behavior (though the MI can exist without the THz interaction). The coupled wave equations we use are similar to other formulations of pulse propagation in χ(2) media [13,26,27], and can be derived from more general forward-wave propagation equations [28,29]. The MI analysis we perform follows that of Ref.
[13] quite closely, but here we will consider OPOs using chirped QPM gratings as the gain medium. The coupling between the pump, signal, idler, and DC envelopes due to the first Fourier order of the QPM grating is given by the frequency-domain equations of Eq. (1), where F denotes the Fourier transform, and where ω represents optical frequency (as opposed to a Fourier transform variable centered at one of the carrier frequencies ω_j). Subscripts i, s, p, and T represent quantities associated with the idler, signal, pump, and DC envelopes; we use subscript "T" because including the DC envelope (centered at zero frequency) leads to phasematched THz-frequency interactions. A tilde denotes an envelope represented in the frequency domain. The envelopes A_j are analytic signals (Ã_j(ω) = 0 for ω < 0), and are assumed to have non-overlapping spectra; the use of analytic signals is appropriate in general [28,30], and is especially useful for modeling interactions involving A_T. With these constraints, the envelopes are fully specified via the real-valued electric field, which is related to the envelopes by Eq. (2), where c.c. denotes the complex conjugate. The carrier frequencies are ω_j. The wavevector is given by k(ω), and the carrier propagation coefficients are given by k_j ≡ k(ω_j). The QPM grating k-vector is given by K_g(z). We define the material phase mismatch as ∆k₀ = k_p − k_s − k_i, and the carrier phase mismatch as ∆k(z) = ∆k₀ − K_g(z) [Eq. (3)].

The linear propagation operators in Eq. (1) are given for the pump, signal, and idler by Eq. (4), for spatial frequencies k_x and k_y and reference velocity v_ref, which we choose to be the group velocity at ω_s. The form of L_j in Eq. (4) assumes paraxial diffraction in an isotropic medium. For propagation normal to the optical axis of an anisotropic medium, minor modifications are required for L_j, but these do not significantly change the results of this analysis. For propagation at finite angles to the optical axis, first-order terms in k_x or k_y will appear and substantially alter the results; this case is beyond the scope of this paper. The linear propagation operator for the DC envelope, L̃_T, has a form similar to L_j, modified for a backwards-propagating wave, where α(ω) is the frequency-dependent power attenuation coefficient. We will neglect α(ω) at optical frequencies but not at THz frequencies. Finally, the coupling coefficients are given by γ_opt(ω) = (ω/c)²(2d_opt)/(πk(ω)), with γ_THz(ω) defined analogously. Following, for example, the approach of Ref. [20], Eq. (1) can be used to evaluate OPO dynamics with semi-classical noise seeding, as we discuss in Section 5.

Steady-state solution for chirped QPM OPOs

In this section, we determine the nominal steady-state operating point for CW-pumped, plane-wave, singly-resonant OPOs when using a chirped QPM grating as the gain medium. The condition for the steady state is that the gain should equal the total cavity losses for the resonant signal wave. We define envelopes A_j^(0)(z) as the steady-state, z-dependent field profiles found by solving Eq. (1) and imposing self-consistency after a cavity round-trip. These steady-state solutions are greatly simplified by using the asymptotic expressions for the chirped-QPM signal gain and pump depletion, which apply for sufficiently strongly chirped QPM profiles [1,2,4]. For such QPM profiles, the signal power gain in the undepleted-pump limit is given, approximately, by exp(2πΛ_p) [Eq. (6)] [4], where Λ_p is the signal gain coefficient and ∆k′ = ∂∆k/∂z is the QPM chirp rate (units of m⁻²), assumed to be constant near ∆k(z) ≈ 0.
To understand this relation, we introduce a peak gain rate g₀. The local OPA (power) gain rate is then given in terms of the local phase mismatch by g(z) = 2[g₀² − (∆k(z)/2)²]^{1/2}. This relation is identical to conventional relations for plane-wave interactions in unchirped devices at each point z [31], but is z-dependent here due to the QPM chirp. To find the total gain, this gain rate is integrated over the region for which Re[g] > 0 (i.e., the region of the grating for which the phase mismatch is small enough to allow OPA). For a linear chirp rate, this integration yields Eq. (6); this procedure can be made more formal via WKB analysis [4]. From Eq. (6), the OPO threshold condition is given by Λ_p,th = ln(R_s⁻¹)/(2π), where Λ_p,th is the threshold signal gain coefficient and R_s ≡ 1 − a_s is the effective net round-trip reflectance (defined in terms of the round-trip signal power loss, a_s).

Above threshold, and assuming cavity losses low enough that A_s^(0)(L) ≈ A_s^(0)(0), the pump depletion is given, approximately, by η_p ≈ exp(−2πΛ_s) [1], where Λ_s is the pump depletion coefficient defined in Eq. (7). The pump depletion is defined as η_p = |A_p(L)/A_p(0)|². Again assuming low losses, Eqs. (6) and (7) can be used to express the OPO operating point by equating the decrement in the number of signal photons due to the round-trip losses to the increment from the depletion of the pump, yielding an implicit equation, Eq. (8), for the pump depletion coefficient Λ_s, where the pump ratio N is the ratio of pump intensity to threshold pump intensity (i.e., the number of times above oscillation threshold). Based on this equation, the circulating signal intensity (which is proportional to Λ_s by definition), and hence the conversion efficiency, are predicted to increase monotonically with pump intensity. This behavior is illustrated in Fig. 1, which shows the pump depletion predicted by solving Eq. (8) for Λ_s as a function of N. Since the conversion efficiency increases monotonically with signal and pump power in regimes with high gain and high pump depletion [2], Eq. (8) qualitatively describes the steady-state OPO behavior even in the presence of moderate signal losses.

To test the validity of Eq. (8), Fig. 1(a) also shows the pump depletion predicted from direct simulations of Eq. (1) using a Runge-Kutta method, assuming plane-wave and CW pump, signal, and idler carrier waves; the two are in good agreement, indicating that adiabatic conversion applies very well to CW OPOs. The QPM profile used for the simulation is shown in Fig. 1(b), along with the profile of the nonlinear coefficient d_eff(z) (normalized to its maximum value at a 50% QPM duty cycle); the reduction in d_eff(z) and the increase in the chirp rate of ∆k(z) at the edges of the grating provide apodization [4], which is necessary to reach the high efficiencies predicted by Eq. (8). The functional form of the QPM profile is given by Eq. (9), which corresponds to a nominally linear chirp rate ∆k′, with an increase in |∆k(z)| near the edges of the grating. The functional form of d_eff(z) is also given by hyperbolic tangent functions. The grating length is L, and z = 0 denotes the start of the grating. The chirp rate ∆k′ was chosen such that |∆k′L²|^{1/2} = 10 for length L. This choice smoothly sweeps the carrier phase mismatch ∆k(z) through zero and is sufficient to reveal the important steady-state and MI-related effects that must be considered in the design of any low-cavity-loss OPO using chirped QPM gratings.
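For readers who want to reproduce the plane-wave CW behavior numerically, the following sketch integrates the standard three-wave mixing equations through a linearly chirped grating with a fixed-step RK4 loop. It is our own minimal stand-in for the simulations described above, not the paper's code: the apodized profile of Eq. (9) is replaced by a plain linear chirp, and all parameter values are illustrative.

```python
import numpy as np

# Plane-wave, CW three-wave mixing with photon-flux-normalized fields:
#   a_s' = i g a_p conj(a_i) e^{-i phi},  a_i' = i g a_p conj(a_s) e^{-i phi},
#   a_p' = i g a_s a_i e^{+i phi},        phi(z) = 0.5 dk' (z - L/2)^2.
# |dk' L^2| = 100 as in the text, and g^2 >> dk' so the sweep through
# dk = 0 is adiabatic.
L = 1.0
dkp = 100.0 / L**2                  # chirp rate dk'
g = 30.0                            # peak gain rate (illustrative)

def rhs(z, y):
    a_s, a_i, a_p = y
    ph = np.exp(-0.5j * dkp * (z - L / 2) ** 2)
    return np.array([1j * g * a_p * np.conj(a_i) * ph,
                     1j * g * a_p * np.conj(a_s) * ph,
                     1j * g * a_s * a_i / ph])

y = np.array([0.5 + 0j, 0.0j, 1.0 + 0j])   # signal seed, no idler, pump
n_steps = 40000
h = L / n_steps
for n in range(n_steps):                    # classic fixed-step RK4
    z = n * h
    k1 = rhs(z, y)
    k2 = rhs(z + h / 2, y + h / 2 * k1)
    k3 = rhs(z + h / 2, y + h / 2 * k2)
    k4 = rhs(z + h, y + h * k3)
    y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print(f"pump depletion |a_p(L)|^2 = {abs(y[2])**2:.3e}")
```

With these settings the pump emerges strongly depleted, the qualitative signature of adiabatic conversion; setting dkp to zero in the same loop instead produces the back-and-forth conversion characteristic of unchirped phasematched interactions.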
The parameters K_a, z_a1, z_a2, L_a1, and L_a2 in Eq. (9) are chosen to smoothly turn the idler on and the pump off at the input and output of the QPM grating, respectively; the quality of this apodization is manifested in a smooth OPA gain spectrum with low ripple amplitude [4]. For Fig. 1(b), the parameters were L = 5 cm, ∆k′ = 4 cm⁻², K_a = 50 cm⁻¹, z_a1 = 0.1L, z_a2 = 0.9L, L_a1 = 0.05L, and L_a2 = 0.05L. In Fig. 2, we show the steady-state spatial profiles of the fields in an OPO with the QPM grating given in Fig. 1(b), by plotting the intensities I_j(z) for the three waves [j = (i, s, p)], normalized to the input pump intensity I₀; for this example, we assume N = 6 and R_s = 95%. The pump is converted primarily near the center of the QPM grating, where phasematching is satisfied. Each of the fields is almost π/2 out of phase with its nonlinear source term throughout the grating (as opposed to in phase, as in the case of phasematched interactions in periodic QPM gratings), corresponding to adiabatic following of local nonlinear eigenmodes [32,33].

Based on Fig. 1(a), it is possible to significantly exceed the conversion efficiency that can be obtained with Gaussian beams in periodic QPM gratings or birefringently phasematched media, without performing any beam shaping, provided that one operates far enough above oscillation threshold. In the limit of long pulses and large beams, the condition for high conversion efficiency is that N(x, y) ≫ 1 across most of the spatiotemporal profile of the pump (positions x and y). By using a signal cavity mode somewhat larger than the pump beam and a cavity lifetime comparable to or longer than the duration of the pump pulse, this condition is more easily attained.

Temporal MI for chirped QPM OPO

In order to achieve the high conversion efficiency predicted by Fig. 1(a), the OPO steady-state solution must be stable against the quantum noise present at all frequencies. In this section, we will show that for any pump ratio N > 1, chirped QPM OPOs exhibit a temporal modulation instability. In order to yield a single-mode or narrowband signal, this MI must be suppressed (by an intracavity etalon, for example).

A general formalism for analyzing the OPO MI is given in Appendix A, similar to the approach detailed in Ref. [13]. To calculate the MI gain, we find the z-dependent steady-state solutions of Eq. (1), and then use Eq. (1) to calculate the single-pass amplification of small sidebands, detuned from the respective carrier frequencies by an amount ±Ω, superposed on each of the envelopes. We assume that Ω is positive (without loss of generality), so these sidebands have absolute optical frequencies ω = ω_j ± Ω (for j = i, s, p, corresponding to the idler, signal, and pump envelopes) and ω = Ω (corresponding to the DC envelope, A_T). While the formalism we develop can address the general case of non-collinear sidebands, in this section we use that formalism to evaluate the MI of chirped QPM, plane-wave OPOs with collinear sidebands, i.e., for sideband spatial frequency k_x = k_y = 0. For simplicity we consider the profile given by Eq.
(9). Different grating profiles will exhibit comparable behavior provided that the phase mismatch is swept smoothly, monotonically, and slowly through zero, and has a sufficiently large magnitude at z = 0 and z = L. Figure 3(a) shows the frequency dependence of the net sideband gain (assuming out-coupling and cavity losses totaling 5%) associated with the steady-state solutions. For the simulation, we chose λ_p = 1.064 µm, λ_s = 1.55 µm, and the temperature T = 150 °C, and used the nonlinear coefficients and dispersion relation of MgO:LiNbO3 [34,35]. For any given pump ratio N, there is a range of frequencies for which there is MI gain (G > 0). Therefore, this OPO would not operate in a single mode. The MI is not an artifact of the particular parameters used for Fig. 3(a), but occurs for many resonant wavelengths and chirp rates, provided that there is sufficient grating chirp for adiabatic conversion to occur. However, the structure of the net sideband gain does depend on the material dispersion, as with conventional OPOs [13].

In Fig. 3(a), the peak around 1.38 THz is related to the strongly absorbed backwards THz wave. At this frequency, the interaction coupling the A_T envelope to the signal sidebands is phasematched; we denote this process as THz-OPA. In the limit of a large THz absorption α_T, the THz-OPA gain is inversely proportional to α_T [14], and the THz-OPA process is similar to stimulated Raman scattering. Since α_T is large in LiNbO3 at 1.38 THz [35], the THz peak in Fig. 3(a) is damped substantially compared to predictions in which absorption is neglected. However, despite this damping, the signal sideband gain still exceeds the cavity losses. For materials with a higher THz absorption, the strength of the THz-OPA peak compared to the other features seen in Fig. 3(a) would be reduced.

Away from 1.38 THz, the THz-OPA process is highly phase mismatched and can be neglected. As a result, in most spectral regions, the MI corresponds to the pump, signal, and idler three-wave mixing process [13]. This process is illustrated in Fig. 3(b), which shows the propagation of sidebands through the QPM grating (sidebands detuned from the carrier frequencies by ±1 THz). The generation and amplification of these sidebands can be understood through phasematching arguments: the phase mismatch for an interaction between a pair of sidebands and a carrier wave can be defined as the phase accumulated by one of those sidebands relative to the phase of its driving polarization, assuming that the sidebands were to propagate linearly (i.e., with phases unperturbed by χ(2)). Consider the interaction between an idler sideband ã_i(∓Ω), a signal sideband ã_s(±Ω), and the pump carrier wave A_p^(0). Based on the above assumption, the phase mismatch for this process, ∆k_is,eff(z, ±Ω) [Eq. (10)], involves the group indices n_g,j of the waves and the phases φ_j of the carrier waves A_j^(0)(z); higher orders of dispersion have been neglected for simplicity. The carrier phase mismatch ∆k(z) is given by Eq. (3). Similarly, a phase mismatch can be defined for the interaction between a signal sideband ã_s(±Ω), a pump sideband ã_p(±Ω), and the idler carrier wave A_i^(0). Lastly, for an interaction between an idler sideband ã_i(±Ω), a pump sideband ã_p(±Ω), and the signal carrier wave A_s^(0), a third phase mismatch is defined analogously. For the example in Fig.
3(b), the sideband frequency is 1 THz, and the group indices for the idler, signal, and pump waves are given by 2.2081, 2.1795, and 2.2091, respectively. The carrier-wave phase accumulation can often be neglected, since the steady-state fields only accumulate phase rapidly when the field amplitude is low compared to the other two waves; such fields do not contribute strongly to the sideband generation process. The apodization regions can also be neglected for simplicity, since only weak sideband generation can occur there due to the high chirp rate. With these assumptions, for the interaction between signal and idler sidebands, ∆k_is,eff(z, +Ω) = 0 at z = z_pm,is(+Ω) ≈ −0.3L, where the pump is undepleted [see Fig. 2].

The above considerations explain why the signal sidebands can experience gain greatly in excess of the signal carrier wave: at most points in the QPM grating, one of the pairs of sidebands is close to phasematching and is driven strongly by the corresponding steady-state field; the sidebands can thus arrange themselves for strong sideband amplification and generation over much of the grating, due to the existence of multiple phasematched points for the various sideband mixing processes; this amplification can be seen in Fig. 3(b). These processes are in contrast to the interaction between the carrier waves, in which the signal is amplified only in the vicinity of the single phasematched point where ∆k(z) = 0 (and in which each of the waves is nearly π/2 out of phase with its nonlinear source term due to the adiabatic following process).

The type of behavior discussed in this section, where the QPM chirp reduces the gain for the CW signal wave compared to a parasitic process (in this case, sideband amplification), has also been seen in the spatial domain when using finite-sized beams [3,7]. In general, it may also be necessary to consider additional nonlinear processes such as stimulated Raman scattering (SRS), for which amplification occurs over the entire length of the QPM grating; SRS would be relevant in cases where the cavity losses are low at any Stokes-shifted wavelengths. Additional waves and nonlinear effects can be added relatively straightforwardly to Eq. (21) [28].

In order to build an OPO with narrowband, low-noise output signal pulses, the modulation instability must be adequately suppressed. One way to suppress the MI is to introduce an intracavity element such as an etalon to create loss selectively at the sideband frequencies, such that G(Ω) < 1 for all sideband frequency detunings Ω. Thus, if an etalon is used, the free spectral range should be comparable to the MI gain bandwidth [as shown in Fig. 3(a) for a particular example], and the finesse must introduce sufficient loss at sideband frequencies within this spectral region that G(Ω) < 1. The required etalon facet reflectance R to fully suppress the MI can be estimated by requiring that the minimum etalon transmission compensate the peak MI gain, G₀, obtained in the absence of the etalon. In cases where the design parameters of a single intracavity etalon would be too constrained, multiple etalons or the combination of an etalon and a diffraction grating could be used. For a CW OPO, G < 1 is necessary to avoid sideband amplification in the steady state.
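Two of the numbers in this section can be checked with a few lines of arithmetic. The phasematched point z_pm,is follows from balancing the grating chirp against the signal-idler group-index mismatch at Ω/2π = 1 THz, and a rough etalon requirement follows from equating the minimum transmission of a lossless etalon, T_min = [(1 − R)/(1 + R)]², to 1/G₀. The T_min expression is the textbook Fabry-Perot result, and G₀ = 49 below is our illustrative placeholder; it happens to reproduce the R > 75% figure quoted just below.

```python
import numpy as np

# (1) Phasematched point for the signal/idler sideband pair at 1 THz.
c = 2.998e8                       # m/s
ng_i, ng_s = 2.2081, 2.1795       # group indices quoted in the text
L = 0.05                          # grating length, m
dk_prime = 4e4                    # chirp rate, m^-2 (4 cm^-2)
Omega = 2 * np.pi * 1e12          # 1 THz detuning, rad/s

dk_offset = Omega * (ng_i - ng_s) / c   # group-index mismatch contribution
z_pm = -dk_offset / dk_prime            # measured from the grating center
print(f"z_pm ~ {z_pm / L:+.2f} L")      # -> about -0.30 L

# (2) Etalon facet reflectance needed to offset a peak MI gain G0
# (G0 is an assumed placeholder, not a value from the paper).
G0 = 49.0
R = (np.sqrt(G0) - 1) / (np.sqrt(G0) + 1)   # from T_min = 1/G0
print(f"required facet reflectance R > {R:.2f}")
```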
Based on Fig. 3(a), a high-reflectance etalon (e.g., with R > 75%) is needed. This reflectance is significantly higher than that required to yield stable operation of CW-pumped OPOs using periodic QPM gratings, for which Fresnel reflections from uncoated optics are sufficient [13]. For nanosecond pump pulses, the sidebands are amplified or suppressed over several cavity round-trips (corresponding to the duration of the pump), and hence the signal spectrum will continue to narrow as G is reduced. Conversely, it may not be necessary to fully suppress the MI within the etalon peak corresponding to the signal carrier frequency in order to achieve adiabatic conversion. The design issues associated with nanosecond-pumped chirped QPM OPOs are discussed in Section 5.

Numerical simulation of chirped QPM OPOs

The OPO efficiency enhancements made possible by chirped QPM gratings are likely to be most advantageous when using a pulsed pump, since high efficiencies are already possible for CW-pumped OPOs by appropriate near-confocal resonator design [12]. The OPO configuration where Eq. (8) can apply most directly is for nanosecond pump pulses, for which dynamical effects including group velocity mismatch and dispersion (GVM and GVD) can, in the ideal limit, be neglected. However, to determine whether this narrow-bandwidth limit applies, GVM- and GVD-related effects must be modeled using Eq. (1), together with an appropriate cavity-wrapping procedure [21]. In this section, we perform numerical simulations of a ns-pulse plane-wave OPO with and without an intracavity etalon, in order to see the role that the MI plays when using nanosecond pump pulses.

Design considerations

When using nanosecond pulses, a number of design constraints must be met, which we outline here. First, the signal must build up from quantum noise to a power comparable to the pump in a reasonably small fraction of the pump duration (since the pump remains undepleted, and hence the operation is inefficient, before signal build-up). To quantify this constraint, we first define the round-trip number as the ratio of the pump duration to the cavity round-trip time, N_rt = τ_p/t_rt. For large N_rt, the time-dependent signal intensity during the build-up stage can be expressed approximately by Eq. (13), where 2πΛ_p,pk is the signal gain coefficient associated with the peak of the pump pulse, f_p(t) is the pump intensity profile normalized to its peak value (taking values between 0 and 1), t₀ is the time at which the gain exceeds the cavity loss (i.e., where the integrand crosses zero), and I₀ is an effective input noise intensity. The 2πΛ_p,pk factor originates from Eq. (6).

A second OPO constraint is that the signal should have a cavity lifetime comparable to the pump duration, in order to ensure depletion of the trailing edge of the pump. This constraint is described in terms of N_s, the ratio of the pump duration to the signal-cavity lifetime [Eq. (14)]. In general N_s should be of order unity. A third OPO constraint is that the pump should be intense enough to support adiabatic conversion on its leading edge, based on Eq.
(8), and hence N_pk, the ratio of the signal gain at the peak of the pump pulse to the round-trip cavity losses, should satisfy Eq. (15).

Equations (13)–(15) can be satisfied for any N_rt by choosing appropriate values of Λ_p,pk and R_s, although a more careful analysis is needed for cases when N_rt ≫ 1. Assuming N_rt ≫ 1, the next consideration is the signal linewidth. For OPOs with a large N_pk, the MI might not be suppressed for cavity modes near the relevant etalon peak (due to the finite etalon finesse), and in this case the signal bandwidth is comparable to the etalon bandwidth. Only one etalon peak should lie within the OPO acceptance bandwidth. This acceptance bandwidth can be approximated as 2π∆f_BW ≈ |(∆k′L)/(δn_g/c)|, where δn_g = n_g(ω_s) − n_g(ω_i) for group index n_g(ω). Thus, the etalon free spectral range f_fsr can be constrained according to Eq. (16).

In order to achieve adiabatic conversion in an apodized chirped grating of the kind described by Eq. (9), it is necessary to have |∆k′L²| ≈ 10², as discussed in Section 3 (although the required grating k-space bandwidth can be reduced slightly by using nonlinear chirp profiles). Hence, in typical cases, the required free spectral range of the etalon satisfies f_fsr·t_rt ≈ 10³. If the minimum etalon length is constrained (e.g., to tens of µm), then Eq. (16) also limits the minimum length of the QPM grating. Therefore, in the following numerical example, we will use a relatively long QPM grating (5 cm) and a pump pulse duration of 15 ns, long enough that N_rt ≫ 1. The final design parameter is the etalon finesse, the effects of which can be explored numerically.

Numerical example

In this subsection, we show plane-wave numerical examples with a nominal OPO design chosen via the constraints discussed in Subsection 5.1. We assume a 1064-nm-wavelength Gaussian pump pulse with a 1/e² duration of 15 ns and a peak intensity of 32 MW/cm². The grating length is 5 cm with a chirp rate of 4 × 10⁴ m⁻² (except in the apodization regions) and a QPM period chosen to phasematch a 1550-nm-wavelength signal in the middle of the grating; the grating has a profile similar to the example shown in Fig. 1(b). The round-trip losses were taken to be 19%. With these parameters, N_rt ≈ 20.5, Λ_p ≈ 0.95, and hence the peak number of times above threshold is N_pk ≈ 28. For cases where an intracavity etalon is included, the etalon has a free spectral range of 2 THz and a power reflectance of 64% on each facet. The corresponding etalon bandwidth is ∆f_et ≈ 120 GHz, and the product f_fsr·t_rt ≈ 1450. The value |(∆k′L²n_g,s)/(2πδn_g)| ≈ 1200, as discussed in relation to Eq. (16). The simulations use a method similar to the one described in Ref. [20]; the THz wave is neglected.
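As a cross-check of the quoted design values, the sketch below recomputes the two dimensionless products and the acceptance bandwidth from the parameters listed above, reusing the group indices from Section 4; the script itself is ours, not part of the paper.

```python
import numpy as np

c = 2.998e8                     # m/s
tau_p = 15e-9                   # pump duration, s
N_rt = 20.5                     # round-trip number quoted in the text
t_rt = tau_p / N_rt             # implied cavity round-trip time
f_fsr = 2e12                    # etalon free spectral range, Hz

L = 0.05                        # grating length, m
dk_prime = 4e4                  # chirp rate, m^-2
ng_s, ng_i = 2.1795, 2.2081     # group indices (from Section 4)
dng = abs(ng_s - ng_i)

print(f"f_fsr * t_rt ~ {f_fsr * t_rt:.0f}")                     # ~1450
print(f"|dk' L^2 ng_s / (2 pi dng)| ~ "
      f"{dk_prime * L**2 * ng_s / (2 * np.pi * dng):.0f}")      # ~1200
# OPO acceptance bandwidth, 2 pi df_BW ~ |dk' L c / dng|:
df_bw = dk_prime * L * c / dng / (2 * np.pi)
print(f"acceptance bandwidth ~ {df_bw / 1e12:.1f} THz")         # several THz
```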
When the OPO is seeded with a CW signal whose intensity exceeds that of the quantum noise floor, the pump is highly depleted after the initial signal power build-up. This case is shown in Fig. 4, where we assume a signal seed intensity of 1 W/cm² and include an intracavity etalon. The later parts of the pump pulse are also strongly depleted, since the signal-cavity lifetime is comparable to the pump duration. The inset shows the normalized output pump fluence, W(t), which we define as the pump fluence transmitted up to time t, normalized to the input pump fluence. After the signal begins to saturate the pump, no further pump energy is transmitted, so the transmitted fluence is limited. This example demonstrates an efficiency enhancement analogous to those predicted by Fig. 1 and Eq. (8), but modified by the presence of a nanosecond Gaussian pump pulse instead of a CW pump. In each pass through the QPM grating, different temporal components of the signal pulse experience adiabatic conversion almost independently, since the pulse bandwidth is much narrower than the bandwidth of any effects related to GVM and GVD. Note that for very high intensities, an intracavity etalon is required in order to suppress the MI even with a CW signal seed, due to the finite bandwidth of the pump; the etalon described above was thus included in the simulations for Fig. 4.

In contrast to the CW case or the pulsed case with a monochromatic seed, in a ns-pumped OPO seeded with white noise the adiabatic conversion process no longer occurs. Since there is an MI at all pump ratios N > 1, noise-seeded signal sidebands are amplified over most of the pump pulse. A typical signal spectrum corresponding to a single simulation with a white-noise-seeded signal is shown in Fig. 5(b). The spectrum fills the OPO acceptance bandwidth, which corresponds to several THz. The corresponding transmitted signal and pump intensities I_j(t) are shown in Fig. 5(a). The conversion efficiency associated with this example is significantly reduced compared to the prediction of Eq. (8), because the smooth signal phase profile required for adiabatic following is no longer present when seeding with noise rather than a single frequency. This reduction in efficiency can be seen in the inset of Fig. 5(a), which plots W(t).

From the results of Section 4, it is necessary to use an intracavity element such as an etalon to suppress the MI, or to limit the frequency range over which there is MI gain, and thereby allow adiabatic conversion to occur. This approach is shown in Fig. 6, where an etalon is added to the cavity: this allows the adiabatic conversion-efficiency behavior to re-emerge [Fig. 6(a)] even in the presence of a noise seed, by narrowing the signal bandwidth [Fig. 6(b)]. The noise suppression is not complete; the remaining signal noise leads to an output pump which consists of many short pulses, corresponding to time intervals in which adiabatic conversion does not occur. The slight reduction in conversion efficiency associated with these output pump pulses can be seen from W(t), which is plotted in the inset of Fig. 6(a): after saturation, the (average) slope dW(t)/dt is positive but small compared to the slope shown in Fig. 5(a). The structure of the output pump pulses is shown in Fig. 6(c), where I_p(t)/I₀ is plotted over a limited temporal region. These output pump spikes correspond to the finite bandwidth of the signal shown in Fig. 6(b), and repeat (approximately) every round-trip time. Conventional nanosecond OPOs can exhibit a similar self-pulsing behavior [20,23]. Nonetheless, the pump is highly depleted over most of its temporal profile, showing that adiabatic operation of an OPO can be effective even with a noise seed, if a suitable bandwidth-limiting filter is included in the cavity.
The same procedures we have discussed here (identification of an MI and its severity, and calculation of the required etalon properties) could be used at other operating points besides the case simulated in Figs. 4–6. Generally, the etalon finesse required to suppress the MI will scale with N, the suppression of spectral sidebands will scale with this finesse and with N_rt, and the MI gain bandwidth (and hence the etalon's required free spectral range) will scale with the OPO signal-idler acceptance bandwidth. If the etalon has a large enough free spectral range that only one etalon peak lies within the OPO acceptance bandwidth, but an insufficient finesse to fully suppress the MI within that peak, the signal bandwidth is comparable to the etalon bandwidth ∆f_et. To suppress the pump pulsing, a higher-finesse etalon could be used; to yield a single- or few-mode signal via intracavity filters with realistic parameters, multiple filters (e.g., a grating and an etalon) or injection seeding might be required.

Conclusions

The use of Gaussian beams and unchirped QPM or birefringently phasematched media imposes a fundamental limitation on the conversion efficiency of singly-resonant OPOs in the non-diffracting regime due to back-conversion. By using a chirped QPM grating, this limitation could be evaded via the adiabatic conversion process, allowing for a broadly tunable, high-power-spectral-density mid-infrared source. However, the modulation instability we have described in this paper must be suppressed or avoided if such an OPO is to exhibit useful adiabatic pump conversion. One possible approach is to use a CW or narrowband laser as a seed. Another, simpler approach is to use a high-finesse intracavity etalon in order to suppress the unstable spectral sidebands. When operated several times above oscillation threshold with such an etalon, almost all of the pump pulse can be down-converted to the signal and idler waves, even with a Gaussian or other non-flat-top pump pulse profile. The etalon is constrained to have both a relatively high finesse and a large free spectral range. An alternative approach might be to use multiple intracavity elements to suppress the MI, such as a diffraction grating (to suppress the MI at high sideband frequency detunings) and a longer etalon (to yield a narrowband signal); with this approach, there would be less stringent design constraints on the etalon.

In this paper we considered the temporal MI of chirped QPM OPOs, but not spatial MI effects, i.e., the amplification of signal sidebands with non-zero transverse spatial frequency, which can also be present, even at temporal frequencies degenerate with the carrier fields [7,36]. A discussion of these effects is beyond the scope of this paper. Spatial MIs can be calculated with the approach discussed in Appendix A by setting k_x² + k_y² > 0. This type of MI could be suppressed by moderate signal focusing with non-planar cavity mirrors in combination with a spatial filter; for very wide beams, unstable cavity configurations might be used. Spatial MIs and other focusing effects in chirped QPM interactions will be the subject of future work.

The temporal MI we have considered is directly applicable to OPOs which use a narrowband pump. In synchronously pumped OPOs with ps or fs pump pulses with bandwidths comparable to the OPO acceptance bandwidth, the MI would be altered, and may be relevant to explaining the fluctuations observed in Ref.
[37]. Finally, the interaction between the effects discussed here and χ^(3) self-phase-modulation effects could lead to new and interesting types of OPO-based frequency combs.

A. Modulation instability for CW OPO

In this appendix, we develop a formalism for evaluating the MI of the steady-state OPO solutions found in Section 3, based on the coupled-wave equations of Section 2. To calculate the MI gain, we follow the approach of Ref. [13]: we assume leading-order fields that are both plane-wave and monochromatic, and find the z-dependent field profiles which satisfy self-consistency in amplitude and phase after a single cavity round-trip. We then assume weak time-dependent perturbations around these zeroth-order solutions and solve the linear system which results from neglecting products of the perturbations, or sidebands, a_j. The envelopes are assumed to take the form

$A_j(\mathbf{r}, t) = A_j^{(0)}(z) + a_j(\mathbf{r}, t)$

for zeroth-order fields $A_j^{(0)}(z)$ and perturbations $a_j(\mathbf{r}, t)$. The zeroth-order fields $A_j^{(0)}$ have optical frequencies ω_j and spatial frequencies k_x = k_y = 0. For a particular input pump field and vanishing idler input, amplitude self-consistency of the zeroth-order fields fixes |A_p(0)|, and hence the zeroth-order solution, as a function of the pump ratio N for a given system. We assume that the signal carrier frequency corresponds to an axial mode of the cavity [15,38]. Therefore, the cavity adds an additional phase such that the phase of the electric field, Ẽ_s(ω_s), is also self-consistent after a cavity round-trip. Although the $A_j^{(0)}$ are found numerically via Eq. (1), the simple relations in Section 3 provide useful estimates of how these fields behave.

To describe how the sideband amplitudes a_j are coupled to each other in the presence of the zeroth-order fields, we define the spatial frequency vector $\mathbf{k}_\perp = k_x \hat{x} + k_y \hat{y}$ and sideband frequency Ω = |ω − ω_j|. Because we assume k_x = k_y = 0 for the $A_j^{(0)}$, the a_j can only be coupled together in a limited number of ways. To write down the coupling matrix, we first introduce a shorthand notation for the sidebands: $a_j(z; \mathbf{k}_\perp, \Omega) \equiv a_j^{(+)}$ and $a_j(z; -\mathbf{k}_\perp, -\Omega) \equiv a_j^{(-)}$. The sideband frequency Ω ≥ 0, while k_x and k_y can be positive or negative. We now define a sideband vector

$\tilde{v} = [\tilde{a}_s^{(+)}, \tilde{a}_s^{(-)*}, \tilde{a}_i^{(+)}, \tilde{a}_i^{(-)*}, \tilde{a}_p^{(+)}, \tilde{a}_p^{(-)*}, \tilde{a}_T^{(+)}]^T, \quad (19)$

where the dependencies of the $\tilde{a}_j^{(\pm)}$ on z, k_⊥, and Ω have been suppressed. For the DC envelope, only $\tilde{a}_T^{(+)}$ is included in Eq. (19), due to the use of analytic signals for each of the envelopes; as such, it is implicitly assumed that Ω < ω_j for j = (i, s, p) (an appropriate assumption when the OPO acceptance bandwidth is much less than the pump, signal and idler carrier frequencies, which is almost always the case). Propagation of this sideband vector is described by the linear system

$\partial_z \tilde{v} = M(z)\,\tilde{v}, \quad (20)$

where M(z) is a 7×7 coupling matrix which depends on the frequency-domain arguments as well as z. Due to the assumed axial symmetry of the problem, the same coupling matrix applies to both $\tilde{v}(+\mathbf{k}_\perp, \Omega)$ and $\tilde{v}(-\mathbf{k}_\perp, \Omega)$ for arbitrary k_⊥. The coupling matrix M(z) has a form similar to the one introduced in Ref. [13], but with extra elements for the THz sideband $\tilde{a}_T^{(+)}$; the elements of this matrix are defined in terms of the operators and coefficients of Eq.
(1), with $\kappa_{j,o}^{\pm} = \gamma_{\mathrm{opt}}(\omega_j \pm \Omega)$, $\kappa_{j,T}^{\pm} = \gamma_{\mathrm{THz}}(\omega_j \pm \Omega)$, $\kappa_T = \gamma_{\mathrm{THz}}(\Omega)$, $K_{j,\pm}(z, \Omega) = \pm[k(\omega_j \pm \Omega) - k(\omega_j) - (k_x^2 + k_y^2)/(2k(\omega_j \pm \Omega))] - \Omega/v_{\mathrm{ref}} \mp \Delta k(z)$, and $K_{T,+} = i\alpha_T/2 - k(\Omega) - (k_x^2 + k_y^2)/(2k(\Omega)) + \Omega/v_{\mathrm{ref}} - K_g$. The diagonal elements of M are determined by the linear differential operators L_j [Eqs. (4) and (5)], while the off-diagonal elements determine coupling between the sideband vectors due to χ^(2) interactions. For chirped QPM gratings, all the non-zero elements of M are z-dependent.

The last step in calculating the MI is to calculate a total round-trip matrix which propagates the sideband vector ṽ through a full cavity round-trip. For the MI of a singly-resonant OPO where the idler and pump fields and their sidebands have zero feedback, only the 2×2 submatrix related to the signal sidebands, ṽ_s, must be considered. The phase and amplitude response of the cavity at frequency ω_s are fixed by self-consistency (in a real OPO, self-consistency would determine ω_s; for this theoretical study, it is convenient to fix ω_s and slightly adjust the cavity length accordingly). The round-trip signal-sideband matrix Φ_rt is therefore obtained by combining the 2×2 signal submatrix of Φ(L, 0) with the round-trip phase and cavity transfer function, where the state transition matrix Φ is defined using M(z) and Eq. (20) as ṽ(z′) = Φ(z′, z) ṽ(z). The 2×2 submatrix of Φ(L, 0) appearing in Eq. (22) corresponds to the outputs at the signal sideband frequencies resulting from inputs at those frequencies. The phase of the zeroth-order signal output, $A_s^{(0)}(L)/A_s^{(0)}(0)$, is defined as φ_s. To account for intracavity elements, a normalized cavity transfer function h̃(Ω) = H(Ω)/H(0) has been introduced in terms of the transfer function H(Ω) associated with the cavity excluding the QPM grating; h̃ would include any intracavity etalon, for example. Note that since the THz wave propagates backwards, its absorption appears mathematically as a "gain" in Eqs. (20) and (21). As a result, it is useful numerically to first solve Eq. (20) by propagating backwards (finding a matrix giving ṽ(z = 0) in terms of ṽ(z = L)), and then find Φ by matrix inversion.

There are two eigenvalues of Φ_rt, denoted λ_{Φ,j} for j = 1 and j = 2. Modes of the "hot" cavity are those frequencies for which the λ_{Φ,j} are real; these frequencies can differ from the frequencies of the "cold" cavity modes as a result of phase shifts due to the three-wave interaction and coupling between frequencies at +Ω and −Ω. Exponential growth (MI) of such a cavity mode occurs when λ_{Φ,j}(Ω) > 1 for some j and some Ω. We assume that the cavity modes are closely spaced in frequency compared to the variation of the eigenvalues with sideband frequency Ω; therefore, we can define the MI condition as |λ_{Φ,j}| > 1. A more detailed description of the MI calculation is given in Ref. [13].
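As a concrete illustration of the |λ_{Φ,j}| > 1 criterion, the following Python sketch scans a round-trip matrix over sideband frequency and flags unstable bands. The phi_rt function here is a toy placeholder with an assumed gain profile; in an actual calculation it would come from integrating Eq. (20) through the grating and applying the cavity phase and transfer function.

```python
import numpy as np

def phi_rt(omega_sb: float) -> np.ndarray:
    """Stand-in for the 2x2 round-trip signal-sideband matrix Phi_rt(Omega).
    In practice this comes from integrating dv/dz = M(z) v through the QPM
    grating and applying h(Omega) and the round-trip phase phi_s."""
    g = 0.1 * np.exp(-(omega_sb / (2 * np.pi * 1e12)) ** 2)  # toy gain profile
    return np.array([[1.0 + g, 0.05], [0.05, 1.0 - g]], dtype=complex)

# Scan sideband frequencies up to a few THz and test the MI condition
omegas = 2 * np.pi * np.linspace(0.0, 3.0e12, 301)   # sideband freqs [rad/s]
growth = np.array([np.abs(np.linalg.eigvals(phi_rt(w))).max() for w in omegas])

unstable = growth > 1.0          # MI wherever an eigenvalue exceeds unity
if unstable.any():
    f_mi = omegas[unstable] / (2 * np.pi)
    print(f"MI gain for sidebands from {f_mi.min()/1e12:.2f} "
          f"to {f_mi.max()/1e12:.2f} THz")
```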
Recovered figure captions:

Fig. 1. (a) Conversion efficiency (1 − η_p) as a function of pump ratio N, for the QPM grating profile shown by the solid blue and red lines in (b); the simulation parameters are given in the text. The dashed straight line in (b) shows just the linear part of the Δk profile (slope Δk′) as a guide to the eye. The analytical result from Eqs. (7) and (8) is labeled "theory".

Fig. 2. Steady-state intensity profiles I_j(z) normalized to input pump intensity I_0 for the OPO simulated in Fig. 1, for N = 6.

Fig. 3. (a) Dependence of sideband gain G on pump ratio N for the OPO simulated in Fig. 1. (b) Propagation of the signal, idler and pump sidebands for the highest-gain signal eigenmode at Ω/(2π) = 1 THz and N = 6. The sidebands are normalized such that |a_s^{(−)}|² + |a_s^{(+)}|² = 1 at z = 0. [Caption fragment: ... THz waves is phasematched, leading to optical parametric amplification at the Stokes-frequency-shifted signal sideband a_s^{(−)}.]

Fig. 4. Output pulses for a chirped QPM OPO with CW-signal seeding. Simulation parameters are given in the text. (a) Signal and pump intensities I(t) in the time domain, normalized to the peak input pump intensity I_0. The inset shows the normalized output pump fluence W(t), defined in Eq. (17). (b) Signal spectrum, with frequency normalized to the bandwidth of the Gaussian pump pulse (1/e² duration τ_p) and centered at ω_c, which is defined as the centroid of the signal spectrum.

Fig. 5. Output pulses for a chirped QPM OPO with the same parameters as those used in Fig. 4, but with white-noise seeding. (a) Signal and pump intensities in the time domain, I_j(t) (j = s, p), normalized to the peak intensity of the input pump pulse, I_0. The inset shows W(t). (b) Signal spectrum, with frequency normalized to the OPO acceptance bandwidth Δf_BW (≈ 3.35 THz in this case) and centered at ω_s, the signal frequency phasematched at the center of the QPM grating.

Fig. 6. Output pulses for a chirped QPM OPO with the same parameters as those used in Fig. 5, but with an intracavity etalon (free spectral range 4.15 THz). (a) Signal and pump intensities in the time domain, I_j(t) (j = s, p), normalized to I_0. The inset shows W(t). (b) Signal spectrum, with frequency normalized to the bandwidth of the etalon peaks, Δf_et = fsr_et(1 − R_et)/(2π) for etalon reflectance R_et and free spectral range fsr_et; Δf_et ≈ 120 GHz in this case. (c) Output pump intensity I_p(t)/I_0 for the case shown in Fig. 6(a), plotted over a limited temporal range to show pulsing behavior; the pulses correspond to regions in which adiabatic conversion does not occur. The time axis is normalized to the signal round-trip time: the pulse pattern is slightly modified after each signal round trip through the cavity.

[Displaced text fragment:] ..., and are determined by the relevant tensor elements and QPM orders. We assume that d_opt and d_THz are non-dispersive over the frequency ranges of interest. Equation (1) may be used to evaluate the single-pass propagation for arbitrary OPO configurations. With suitable cavity wrapping ...
Comparing Machine Learning and Time Series Approaches in Predictive Modeling of Urban Fire Incidents: A Case Study of Austin, Texas

This study examines urban fire incidents in Austin, Texas using machine learning (Random Forest) and time series (autoregressive integrated moving average, ARIMA) methods for predictive modeling. Based on a dataset from the City of Austin Fire Department, it addresses the effectiveness of these models in predicting fire occurrences and the influence of fire types and urban district characteristics on predictions. The findings indicate that ARIMA models generally excel in predicting most fire types, except for auto fires. Additionally, the results highlight significant differences in model performance across urban districts, indicating an impact of local features on fire incidence prediction. The research offers insights into temporal patterns of specific fire types, which can provide useful input to urban planning and public safety strategies in rapidly developing cities. In addition, the findings emphasize the need for predictive models tailored to local dynamics and the distinct nature of fire incidents.

Introduction

The United States has experienced rapid urban expansion in recent years. However, as cities expand, so does the risk of fires [1][2][3][4]. These fires not only pose an acute danger to urban residents but also jeopardize the long-term health and development of cities. It is necessary to understand the patterns of these fires better as cities develop and change [5][6][7]. Developing a data-driven approach to predicting fire incidents is not only of paramount importance for protecting public safety, but is also essential for understanding how human-environment interactions may impact the occurrence of urban fires [8,9].
Previous studies have applied machine learning approaches to fire prediction, detection, and spread-rate analysis [10][11][12][13][14]. For instance, researchers used machine learning algorithms to improve the classification of burn zones, which characterizes how broadly an area may burn [15,16]. Machine learning models can also help identify environmental features that increase fire risk, such as extended drought conditions [17,18]. Machine learning models are also used for fire incident prediction. For example, Bayes Network and Naive Bayes classifiers have been used to predict the likelihood of fire breakouts based on a probabilistic framework and the spatiotemporal information of previous incidents [12]. Sevinc, Kucuk and Goltas [19] used Bayes networks to analyze the possible causes of forest fire ignition. Another study, by Szpakowski and Jensen [20], reviewed how remote sensing imagery and land use/land cover data have been used in fire ecology, for tasks such as fire risk prediction, active fire detection, and burn severity assessment. Another widely used technique in fire prediction is the Random Forest model, a decision-tree-based ensemble model that combines multiple decision trees to make predictions using features like temperature, humidity, vegetation type, and past fire occurrences. Random Forest has often been applied to the forecasting of urban fire outbreaks, and its effective performance has been reported in many instances [10][11][12][21][22]. For example, Song, Kwan, Song and Zhu [23] used Random Forest to predict the occurrence of fire outbreaks in Hefei City, China. While the Random Forest accuracy assessments of the predicted fire locations fell short of the more position-dependent spatial econometric models, the Random Forest model successfully identified the environmental variables attributable to the fire outbreaks [23]. In another study, an evaluation of the Random Forest model was conducted in Yichun, China, to assess its ability to extrapolate risk patterns, fire outbreak drivers, and the spatial distribution of urban fire occurrences; the results showed that Random Forest was successful in making these assessments [24].
Artificial Neural Networks (ANNs) are another machine learning algorithm employed in various fire prediction studies. These methods can automatically extract relevant attributes from input data through neural networks with multiple layers. For example, one wildfire prediction study used multi-year fire incident data collected in the Montesinho Natural Park of Portugal; it showed that ANNs succeeded in locating potential sites of large-scale wildfire outbreaks but struggled with smaller-scale fire incidents [25]. Another study, completed in Heilongjiang, Northeast China, compared ANNs with logistic regression models to determine the most effective algorithm for wildfire outbreak prediction. The study used wildfire outbreak data, along with coinciding climate and topographic factors, to build and test the accuracy of the chosen predictive models. The authors found that the ANN model achieved the highest accuracy in predicting wildfire outbreaks, except in areas in or near urban zones [26]. Apart from these drawbacks of ANNs for small-scale and urban fire outbreaks, ANN models are also affected by the highly sensitive and often site-specific relationships between input parameters and outputs, which are difficult to untangle due to the "black box" nature of the algorithm [27,28]. While an ANN can achieve high predictive accuracy in one study area, it can require additional adjustments when the study area changes, with no clear picture of the relationship between the input data and the results [29].

In addition to machine learning models, researchers have also applied various time series statistical models to fire incident prediction, and autoregressive integrated moving average (ARIMA) models are a commonly used technique in this field [13,30]. ARIMA models were created in the 1970s and use mathematical rules to generate predictions from time series variables [31]. This class of model captures temporal dependencies in data and has been widely applied in predicting forest fires. For example, Ma, Liu and Zhang [32] used a seasonally optimized ARIMA model to predict the frequency of fire outbreaks in China from 2003 to 2017; their study reported that the SARIMA model performed well, with excellent root mean squared error values [32]. Similarly, in a study by Zhang, Zhou, Weng and Zhang [33], an ARIMA model was used to predict urban fire outbreaks using fire rescue requests as a proxy for urban fire occurrences. This study found that ARIMA models, given sufficient historical data, are accurate and useful tools for informing fire departments on where and how to allocate resources to mitigate and respond to urban fire outbreaks [33]. In addition, ARIMA models have been widely adopted to predict wildfires and fire occurrences within the wildland-urban interface [34,35].
Although these methods have been used for both urban fires and wildfires, the majority of studies have focused on wildfires, and there has not been sufficient research comparing the performance of machine learning models and traditional time series models in predicting urban fire incidents. In addition, previous research has not focused on how these models may perform differently for different fire types. To fill this gap, our research expands the analyses in [36] and aims to test the effectiveness of the Random Forest model and the ARIMA model in predicting urban fires. We are also interested in exploring whether and how the occurrence of fire incidents depends on a collection of variables, such as urban districts with different socioeconomic factors and the type of fire, based on a dataset from the City of Austin Fire Department [36]. We chose Random Forest and ARIMA, instead of other machine learning methods or time series models (e.g., STARIMA), because these were the most commonly used methods for fire prediction in each of the two categories according to our literature review. We feel that a comparison of these two basic methods, instead of their numerous variations, can provide researchers with more valuable input. We chose Random Forest instead of ANNs in the machine learning category based on previous studies showing that ANNs underperform in urban areas [26]. The research questions we are trying to answer are two-fold: (1) whether the ARIMA model or the Random Forest model works better when predicting urban fire incidents, and (2) whether the type of fire or the specific urban district impacts the performance of fire incident predictions.

Data

The dataset used in this study is a collection of fire incidents within Austin, Texas that is collected and maintained by the City of Austin Fire Department. The dataset ranges from January 2009 to December 2018 and contains information such as the time and date of the fire incident, the fire type (e.g., trash fire, grass fire, auto fire), and a latitude and longitude record for each individual incident (Table 1). Austin is one of the fastest-growing cities in the United States, and analyzing the spatial patterns of urban fire incidents is valuable for enhancing Austin's overall safety and resilience against fire-related incidents. The study area in Austin is divided into ten city council districts (Figure 1). Tables 2 and 3 display the number of fires by year and by city council district. As can be seen from Figure 1, the city center (e.g., Districts 1, 3, 9) exhibits the most significant concentration of fire incidents, likely due to higher population densities. In contrast, the outskirts of the city, such as Districts 2, 5, 6, 8, and 10, display a sparser distribution of fire reports. Although the total number of fires fluctuates over the years (Table 2), there are no substantial differences in the spatial distribution of fires across different years.
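As a sketch of the preprocessing this dataset implies, the following Python snippet aggregates raw incident records into monthly count series per district and fire type; the file name and column names are hypothetical stand-ins, since the exact schema of the City of Austin dataset is not reproduced here.

```python
import pandas as pd

# Hypothetical schema: one row per incident with a timestamp,
# fire type code, and city council district.
df = pd.read_csv("austin_fire_incidents.csv", parse_dates=["incident_time"])

# Count incidents per month for every (district, fire type) pair;
# January 2009 through December 2018 yields 120 values per series.
monthly = (
    df.set_index("incident_time")
      .groupby(["district", "fire_type"])
      .resample("MS")              # month-start bins
      .size()
      .rename("n_fires")
      .unstack(["district", "fire_type"])
)

# Make months with zero incidents explicit zeros rather than gaps.
full_range = pd.date_range("2009-01-01", "2018-12-01", freq="MS")
monthly = monthly.reindex(full_range).fillna(0)
```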
Methodology

The objective of this research was to evaluate the performance of Random Forest and ARIMA models in predicting urban fire incidents. Additionally, we aimed to investigate how these models may perform differently across various urban areas and with different types of fires. Therefore, as part of the data preprocessing, we first summarized the number of fire occurrences by month, fire type, and city council district. Figure 2 shows an example of the monthly data after preprocessing. The highlighted entry shows the number of trash fires in each month for city council district 3. The first component ("3") indicates the city council district, the second component ("TRASH-Trash Fire") denotes the fire type, and the third component consists of a series of 120 bracketed numbers. Each number represents the monthly count of a specific type of fire in a given district, starting with January 2009 and ending with December 2018. We then propose the following three analyses to examine the research questions posed in Section 1 (Figure 3).

Generic Analysis Based on a Random Forest Model

We first constructed a Random Forest regression model based on the monthly fire incident data. Random Forest is an ensemble learning method primarily used for classification and regression tasks. It constructs multiple decision trees during the training process; each tree is built from a sample drawn with replacement (i.e., a bootstrap sample) from the training set. Compared to models based on single decision trees, Random Forest has greater accuracy and can handle datasets with more complex features [14,37]. When constructing a Random Forest model, the features are the input variables used to predict the output. Because feature selection has a large influence on the model's performance, it is important to select appropriate features. Additionally, features can be ranked according to their importance (i.e., the degree to which each feature improves the model's accuracy). In this research, due to data availability, the features used for the Random Forest model were (1) the type of fire, (2) the city council district, and (3) the prior five years of fire incident data for a specific fire type/city council district combination, based on a moving window. For example, to train the model, the number of fires in January 2016 used the 60 (12 × 5) monthly fire counts from 2011-2015 as the input. The purpose of this model is to rank the importance of features, such as the type of fire or the city council district.
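A minimal sketch of this moving-window setup follows, assuming the `monthly` table from the previous snippet with numeric district labels (1-10); the integer encoding of fire types is an illustrative choice, not necessarily the authors' exact feature encoding.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

LAGS = 60  # 12 months x 5 years of history per prediction

# Encode fire types as integers (illustrative encoding only)
fire_codes = {f: i for i, f in enumerate(sorted({ft for _, ft in monthly.columns}))}

X, y = [], []
for (district, ftype), series in monthly.items():
    vals = series.to_numpy(dtype=float)
    for t in range(LAGS, len(vals)):
        # Features: district, fire type, then the 60 preceding monthly counts
        X.append(np.r_[float(district), fire_codes[ftype], vals[t - LAGS:t]])
        y.append(vals[t])
X, y = np.asarray(X), np.asarray(y)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
# Feature ranking: the first two entries are district and fire type
print("importance(district, fire_type):", rf.feature_importances_[:2])
```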
Comparative Analysis between Random Forest and ARIMA for Different Fire Types

ARIMA models are often useful in time series forecasting due to their flexibility in modeling stationary, non-stationary, and seasonal patterns [26,28,33,34]. An ARIMA model is expressed as ARIMA(p, d, q), where the parameters p, d, and q are non-negative integers representing the autoregressive, integrated, and moving average parts of the model, respectively, and are interpreted as follows:

• p (autoregressive parameter): the extent to which the current value of the series is linearly dependent on its previous values. For example, it shows how the value in March is related to the values in preceding months like February, January, etc.

• d (integrated parameter): the number of non-seasonal differences needed to make a time series stationary. For example, if a time series shows a linear trend, you might use d = 1 (i.e., differencing once by subtracting the previous value from each current value) to transform it into a stationary series.

• q (moving average parameter): the number of lagged forecast errors in the prediction equation. The parameter q can be seen as a measure of the uncertainty in the time series analysis.

For each time series, we first tested whether it was stationary and determined whether differencing was needed. We then used the stepwise auto_arima function in the pmdarima Python package, which automatically selects the best functional form of the ARIMA model based on Akaike Information Criterion values. The construction of ARIMA models provides quantitative evidence of how the occurrence of fire incidents has changed over time, and the fitted parameters can be applied to the prediction and estimation of future patterns.

To compare the performance of Random Forest and ARIMA, we constructed models for each type of fire and computed their mean absolute error (MAE). We used MAE instead of the mean absolute percentage error because there are zero values in the time series.

Comparative Analysis between Random Forest and ARIMA for Different Urban Districts

Similar to the previous step, to compare Random Forest and ARIMA, we constructed models for each urban district and compared their MAE. Note that both the ARIMA model and the Random Forest model use the last 12 months of data for testing and the preceding 9 years for training. However, the Random Forest model differs in that it incorporates the 5 years of data immediately preceding each data point in the training set as input features. After comparing the results using 1, 3, 5, and 7 years at the trial-and-error stage, we determined that utilizing five years produced the best MAE for the Random Forest model.

Creating a Random Forest Model Based on Monthly Fire Data

To construct a Random Forest model, as detailed in Section 2, the testing data consisted of the last 12 months of data, from January 2018 to December 2018. This data encompassed ten city council districts and five main types of fire incidents, resulting in a total of 600 scenarios (twelve months × five fire incident types × ten districts) in the testing set. The five incident types are the five most common types of urban fires in our dataset, namely TRASH-Trash Fire, GRASS-Small Grass Fire, BOX-Structure Fire, AUTO-Auto Fire, and ELEC-Electrical Fire.
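The per-series ARIMA workflow described above might look like the following sketch, where `series` is one 120-month count vector (e.g., trash fires in District 3); the auto_arima options shown are plausible defaults rather than the authors' exact settings.

```python
from pmdarima import auto_arima
from sklearn.metrics import mean_absolute_error

# Hold out the last 12 months for testing, as in the paper
train, test = series[:-12], series[-12:]

# Stepwise search over (p, d, q) by AIC
model = auto_arima(train, stepwise=True, seasonal=False,
                   suppress_warnings=True, error_action="ignore")
forecast = model.predict(n_periods=12)

print("selected order:", model.order)
# MAE rather than MAPE, because the counts contain zeros
print("ARIMA MAE:", mean_absolute_error(test, forecast))
```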
The result showed an MAE of 2.635 for all fire types. From Figure 4, we can observe a few patterns:

• Figure 4 shows that the predicted values were effective in reflecting the overall pattern of urban fire occurrences. The closeness of the two lines (expected and predicted) for the majority of samples suggests a good fit for standard scenarios.

• The MAE of 2.635 reported for all fire types offers valuable insight into the performance of our predictive model. It is particularly noteworthy given the diverse set of 600 scenarios within our testing set, spanning various districts and types of fire incidents. This MAE indicates that, on average, the model's predictions deviated from the actual numbers by approximately 2.635 incidents. Although this signifies a relatively low error margin across the entire dataset, it is essential to delve deeper into the distribution of these errors.

• There were noticeable spikes in the expected values (i.e., sharp peaks) that the predicted values did not capture. This shows that the model fails to capture some of the extreme values, which is a common challenge in predictive modeling, especially for models such as Random Forest that tend to average out predictions.

• The prediction errors do not appear to be uniformly distributed across all samples. There are clusters of samples with larger errors, which can correspond to specific types of fires or districts. After inspecting the raw data, it seems that the clusters of high error mostly occur in the city center and during summer months, when fire incidents peak.

When analyzing the importance of different features in the Random Forest model, the fire type appears to be the most important feature, which suggests that the model's accuracy may improve if we construct separate models by fire type. In addition, we are also interested in the performance differences between ARIMA and Random Forest. Therefore, the rest of this section emphasizes the comparison of model performance by fire type and urban district.

Comparing ARIMA and Random Forest by Fire Type

To compare the performance of Random Forest and ARIMA, we constructed models for each type of fire and compared their MAE. Upon testing the stationarity of our time series, we found that the time series for all fire types appear stationary, so there was no need for differencing.
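A sketch of that stationarity check, assuming the `monthly` table from earlier and the conventional 0.05 threshold on the augmented Dickey-Fuller p-value:

```python
from statsmodels.tsa.stattools import adfuller

# ADF test per series: a small p-value rejects the unit-root null,
# i.e., the series is treated as stationary and d = 0 suffices.
for name, series in monthly.items():
    p_value = adfuller(series.to_numpy())[1]
    verdict = "stationary (d = 0)" if p_value < 0.05 else "needs differencing"
    print(f"{name}: p = {p_value:.4f} -> {verdict}")
```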
Also, for forecasting models, there is a notable difference between ARIMA and the Random Forest regressor when it comes to handling residuals. ARIMA relies on the assumption of normally distributed residuals for accurate predictions, given its parametric nature; hence, it is important to examine the normality of the residuals of the ARIMA models. The Random Forest regressor, on the other hand, is not bound by such constraints and does not require normally distributed residuals. Therefore, we created the residual plot in Figure 5 and the Q-Q plots for the ARIMA models in Figure 6. The residuals oscillate around zero without a clear pattern, and there is no obvious trend, seasonality, or repeated structure, which indicates that the ARIMA models have captured the underlying process well. In the Q-Q plots, the residuals for all fire types follow the red line quite closely, especially in the central quantiles. There are some deviations at the ends (tails), which suggests that there may be some outliers, or that the distributions have heavier tails than the normal distribution; this is common in real-world data. The residuals of the Random Forest models also follow normality in our tests, but since Random Forest models do not require the normality of residuals, only the ARIMA model residuals are included in this section.
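The residual diagnostics in Figures 5 and 6 can be reproduced along these lines, assuming `model` is a fitted pmdarima model from the earlier sketch:

```python
import matplotlib.pyplot as plt
import statsmodels.api as sm

resid = model.resid()   # in-sample residuals of the fitted ARIMA model

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))

# Residual trace: look for leftover trend, seasonality, or structure
ax1.plot(resid)
ax1.axhline(0, color="red", lw=0.8)
ax1.set_title("Residuals over time")

# Q-Q plot against the normal distribution (red 45-degree reference line)
sm.qqplot(resid, line="45", fit=True, ax=ax2)
ax2.set_title("Q-Q plot")

plt.tight_layout()
plt.show()
```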
Table 4 shows the comparison of MAE for the Random Forest and ARIMA models based on the testing set. As can be seen from Table 4 and Figure 7, overall, ARIMA outperformed Random Forest in predicting the occurrence of fire incidents for most fire types; however, the performance varied by fire type. For example, in predicting trash fires, ARIMA's MAE was 21.92% lower than that of Random Forest. Similarly, ARIMA outperformed Random Forest in predicting grass fires and electrical fires, with decreases in MAE of 17.77% and 18.01%, respectively. Interestingly, the Random Forest model had a significantly lower MAE for auto fires than ARIMA, with the MAE for ARIMA being 95.36% higher. This might be due to nonlinear relationships or interactions between variables that Random Forest captured more effectively than ARIMA did. In other words, the ARIMA model is designed to capture the temporal structures of time-series data (e.g., trends and seasonality), so it may not work as well if the frequency of auto fires does not show a strong temporal pattern. Figure 8 compares the temporal patterns of two fire types: small grass fires and auto fires. The two types of fires exhibit distinct seasonal trends. The frequency of grass fires shows a particular pattern of increase or decrease during the spring and summer months, which could be related to factors like temperature, rainfall, drought conditions, and vegetation growth. The pattern for auto fires is less seasonally dependent and more consistent month to month.

In summary, while ARIMA appears to be the better model overall for predicting fire incidents in this analysis, the choice of model should consider the specific type of fire being predicted. The superior performance of Random Forest in predicting auto fires could imply that certain fire types have underlying patterns better captured by the more complex, nonlinear methods employed by Random Forest algorithms. It would be beneficial to further investigate the characteristics of auto fire incidents that lead to this discrepancy in model performance, and potentially to explore hybrid models or feature engineering to improve predictions across all fire types.

Comparing ARIMA and Random Forest by Urban District

Similarly, we also compared the performance of ARIMA and Random Forest in different urban districts (Table 5). Like the residual and Q-Q plots for the different fire types, Figures 9 and 10 demonstrate randomly distributed residuals, signifying that the ARIMA models have captured the underlying process well. It can be seen from Table 5 that the Random Forest model performed better in five districts (Districts 1, 3, 8, 9, 10), where the MAE values were slightly lower than the ARIMA values. This performance difference might be attributed to Random Forest's ability to handle complex, nonlinear data patterns and interactions between multiple predictors, which can be characteristic of urban fire incidents. On the other hand, the ARIMA model performed better in the other five districts. This could indicate that fire occurrences in these districts follow more predictable temporal trends, which ARIMA can capture more effectively. The models show varied performance across the ten districts (Figure 11).

From Figure 11, the spatial pattern in the performance difference is not as obvious as expected. This may be due to the underlying diversity in the characteristics and dynamics of fire incidents across the city. Various factors, such as the number of fire incidents, the size of the area, and urban density, could influence model performance. However, we can still observe an urban/suburban difference. For example, urban districts typically have higher population densities and more infrastructure; therefore, these areas may experience a higher frequency of different types of fires, such as structural fires in multi-story buildings, compared to suburban districts. The Random Forest model might be better at handling the complexity and variety of urban fires due to its ability to capture nonlinear relationships and interactions between unknown variables. From Figure 11, we can see that Districts 3 and 9 (i.e., where Random Forest has a lower MAE) are located in the city center. Although Districts 1, 8, and 10 cover large suburban areas, most of the fire incidents in these regions happened close to the city center (cf.
Figure 1). Suburban districts, on the other hand, might have fires that are more related to residential areas, which may follow more seasonal or temporal patterns, potentially making ARIMA more suitable. For example, Districts 2 and 4, where ARIMA performs better, might exhibit more homogeneous (possibly suburban) characteristics, with fire incidents that follow a more predictable temporal pattern.

To further understand the complex correlation between model performance and urban districts, it would be beneficial to consider additional data layers, such as district-specific socioeconomic characteristics, to better interpret why certain districts are better predicted by one model than the other.

Limitations

Although the results in Sections 3.1-3.3 provide valuable insights into the predictive capabilities of ARIMA and Random Forest models for urban fire incidents, there are a few constraints. One limitation is that the historical data were collected between 2009 and 2019, which may not fully capture the changing dynamics of urban environments, especially in a fast-developing city like Austin. Changes in other factors, such as urban planning and building codes over time, can also alter the landscape of fire incidents. Therefore, it may not always be reliable to use past fire data for future predictions.

Also, we chose city council districts instead of census tracts or census block groups because the latter often contain many areas with zero incidents. By selecting city council districts, we aimed to reduce the skewing effect of sparsely populated or less incident-prone areas that may not provide a realistic picture of fire incident patterns. Future studies can look into how the choice of spatial unit may impact the model fitting and the prediction results.

Another limitation worth considering is that the Random Forest model tends to underestimate the values. This could be because some features are missing when training the model. Also, there was some noise and outlier data in the expected values, and Random Forest models can sometimes smooth out noise, which might look like underestimation in the presence of volatile data. In this research, we only considered factors like fire type and district characteristics, due to data limitations, but other factors, such as weather conditions or traffic patterns, may improve the model's accuracy.

In addition, this study was only conducted at a city council-district level in Austin, Texas. The models' performance in Austin may not necessarily reflect their potential accuracy in other urban areas, as different cities have unique urban layouts, socio-demographic factors, policies, etc. The delineation of city council districts can obscure localized patterns and outliers and also introduces the modifiable areal unit problem (MAUP), meaning that the results may differ if the analysis is conducted at the census tract or census block group level. However, in the initial data exploration, we discovered that either the census tract or the census block group level yields too many polygons with zero fire incidents.
Conclusions

This study conducted a predictive analysis of two models, ARIMA and Random Forest, based on a dataset of urban fire incidents across various districts of Austin, Texas. Overall, the ARIMA model is capable of modeling how fire incidents have changed over time in a parametric way and has proven to be a dependable way to forecast future trends based on these changes. On the other hand, the Random Forest model has also shown notable effectiveness in certain city areas. This suggests that the specific kind of fire and the unique features of each urban district greatly affect how well a model works.

Based on the comparative study results, we found that both the type of fire and the district have a substantial influence on model performance. The ARIMA model outperformed the Random Forest model for most fire types except auto fires. The results from the city district analysis demonstrated an interesting pattern in model efficacy, with Random Forest outperforming ARIMA in five districts and vice versa in the other five. This balanced variance in predictive accuracy highlights the importance of considering local district characteristics, including socioeconomic factors and the specific nature of fire incidents, when selecting a predictive model. Overall, this research contributes to the existing literature by filling the gap in comparative studies of machine learning and time series models for urban fire prediction. The results can provide valuable input for urban planning and public safety by supporting more targeted and effective fire prevention strategies in rapidly growing urban areas like Austin.

Future work can focus on including other factors, such as socioeconomic profiles, weather patterns, and land use patterns, in the predictive models. This may help to uncover complex interactions and dependencies that are not apparent from the fire incident data alone. Also, expanding the geographical scope to include other cities could help generalize the findings and allow the models to be tested and validated in diverse urban settings. Researchers can investigate the performance of other machine learning algorithms, especially algorithms with temporal capabilities, like Long Short-Term Memory networks, for handling the sequential nature of time-series data. The use of deep learning techniques could potentially reveal deeper insights into the predictive factors of urban fires. Lastly, burn severity data can be incorporated into future research to assess its impact on fire behavior and propagation, which can provide valuable insights for fire management and urban planning strategies.
Recovered figure and table captions:

Figure 1. Yearly fire incidents in the ten Austin city council districts.
Figure 4. Predicted values based on a Random Forest model [36].
Figure 5. ARIMA model residuals for all five types of fires.
Figure 6. Q-Q plots for ARIMA models by fire type.
Figure 8. Comparing the temporal patterns of small grass fire and auto fire.
Figure 9. ARIMA model residuals for all districts.
Figure 10. Q-Q plots for ARIMA models by district.
Figure 11. Comparing Random Forest and ARIMA by city council district.
Table 1. Example records from the fire incident dataset.
Table 2. Fire incidents by year.
Table 3. Fire incidents by city council district.
Table 5. Comparing Random Forest and ARIMA at the city council district level.
Performance of a Rotating Detonation Rocket Engine with Various Convergent Nozzles and Chamber Lengths

A rotating detonation rocket engine (RDRE) with various convergent nozzles and chamber lengths is investigated. Three hundred hot-fire tests are performed using methane and oxygen, spanning equivalence ratios from 0.5 to 2.5 and total propellant flows up to 0.680 kg/s. For the full-length (76.2 mm) chamber study, three nozzles at contraction ratios ε_c = 1.23, 1.62 and 2.40 are tested. Detonation is exhibited for each geometry at equivalent conditions, with only fuel-rich operability slightly increased for the ε_c = 1.62 and 2.40 nozzles. Despite this, counter-propagation, i.e., opposing wave sets, becomes prevalent with increasing constriction. This is accompanied by a higher number of waves, lower wave speed U_wv, and higher unsteadiness. Therefore, the most constricted nozzle always has the lowest U_wv. In contrast, engine performance increases with constriction, where thrust and specific impulse increase linearly with ε_c for equivalent conditions, with a 27% maximum increase. Additionally, two half-length (38.1 mm) chambers are studied, including a straight chamber and an ε_c = 2.40 nozzle; these shortened geometries show performance equal to their longer equivalents. Furthermore, the existence of counter-propagation is minimized. Accompanying high-fidelity simulations and injection recovery analyses describe the underlying injection physics driving chamber wave dynamics, suggesting that the physical throat/injector interaction influences counter-propagation.

Introduction

Rotating detonation engines (RDEs) have recently gained substantial interest as an alternative to traditional deflagration-based propulsion systems, with the theoretical potential to achieve overall engine performance gains. Specifically, rotating detonation rocket engines (RDREs) can exhibit an increase in chamber pressure, temperature and exhaust gas velocity for a substantially lower injection pressure through a constant-volume combustion process, compared to constant-pressure devices. Recent studies have demonstrated the successful operation of RDREs using both gaseous [1][2][3][4][5][6][7] and liquid fuels [1,8]. However, insight into the optimal method to properly expand the highly unsteady exhaust flow from the RDRE is still limited, as waves reflected back upstream from a physical throat constriction can interact with the reactant fill region and disrupt the detonation zone [9]. To date, only limited experimental studies with convergent throats have been performed for RDEs, with detonative behavior ranging from increased complexity of the detonation mode structure [2,5,10,11] to a complete detonative breakdown to deflagration processes [12]. Therefore, to further understand the flow expansion processes associated with a rotating detonation engine, a detailed study using multiple convergent throat geometries is conducted here. First, in addition to the straight annulus exit (ε_c = 1.00), three physical throat designs are considered for the full annular length geometry (l_c = 76.2 mm, W_c = 5 mm), corresponding to contraction ratios ε_c = 1.23, 1.62, and 2.40. A linear performance increase is observed for a given physical throat up to 1335 N thrust at 0.680 kg/s total propellant mass flow, which is also accompanied by an increase in the prevalence of counter-propagating modal behavior.
For example, the most constricted annular geometry exhibits counter-propagating behavior across all of the flow conditions investigated. This is contrary to traditional design guidelines for RDEs, as counter-propagating behavior, i.e., two opposing sets of waves moving in opposite directions, is associated with lower wave speeds and a decrease in performance due to a likely increase in parasitic deflagration [5]. In addition, a series of tests has been conducted using two half annular length configurations (l_c = 38.1 mm), including a straight annulus and the most constricted ε_c = 2.40 nozzle. Notably, engine performance is the same for the shortened straight ε_c = 1.00 geometry compared to the full-length annulus, and the shortened annulus with the ε_c = 2.40 nozzle yields approximately 6% higher performance. Furthermore, the detonation wave speeds are typically up to 5% faster for the shortened straight annular geometry compared to the longer configuration at the same flow condition. For the shortened ε_c = 2.40 nozzle, the detonation mode structure is significantly better ordered compared to the full-length geometry, i.e., the existence of counter-propagating behavior is significantly diminished. This result is consistent with the theory proposed herein that shock waves reflected back upstream from the annular throat can influence the quality of reactant mixedness, and thus the detonation mode structure, in the reactant fill region near the injector face. A supporting analysis describing the injector recovery process and accompanying results from high-fidelity simulations of the RDRE are also detailed in this manuscript. These analyses suggest that the interaction between the observed injection response and a physical throat is one driving mechanism responsible for counter-propagating modal behavior. The results of this work should serve as a basis for further studies to optimize RDRE annular nozzle design, as well as global engine performance.

Experiment Setup

The modular RDRE tested in this study is the same laboratory engine used in our previous work [5,13,14]. It was originally designed and tested by Smith and Stanley [2,15] using empirical guidelines developed by Bykovskii et al. [1], and contains a 76.2 mm diameter annulus with a 5 mm annular width and a 76.2 mm long annular channel (see Figure 1). Gaseous propellants, methane (GCH4) and oxygen (GO2), are fed through 72 opposing fuel and oxidizer jets equally distributed around the annulus in an unlike-impinging injector configuration. The baseline injector used in the previous study had a very high initial injector pressure drop across the investigated flow conditions [5]. Therefore, to reduce the required drive pressures, the injector used in the present study contains larger holes for both the fuel and oxidizer, equivalent in both cases to 1.5× the original baseline injection areas. Propellant flow rates are metered using critical flow venturis, with total mass flow rates ranging up to 0.680 kg/s. For the flow conditions investigated, fuel and oxidizer manifold pressures are high enough to cause the injector flow to be choked whenever a detonation wave is not located directly over a specific injection site. A pre-detonator tube using GCH4 and GO2, firing tangentially into the annulus near the injector face, is used for ignition. A planar detonation wave is generated in the pre-detonator tube from a small volume (53 cm³) of premixed gas located in an upstream reservoir that is ignited using a spark plug.
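For reference, the equivalence-ratio bookkeeping behind these flow conditions reduces to the stoichiometry of CH4 + 2 O2 → CO2 + 2 H2O; the sketch below splits a total flow into fuel and oxidizer streams for a target φ, with illustrative numbers rather than a specific test point.

```python
# Stoichiometric fuel/oxidizer mass ratio for CH4/O2:
# M_CH4 / (2 * M_O2) = 16.04 / 64.00 ~= 0.2506
M_CH4, M_O2 = 16.04, 32.00
FO_STOICH = M_CH4 / (2.0 * M_O2)

def equivalence_ratio(mdot_fuel: float, mdot_ox: float) -> float:
    """phi from measured venturi mass flows [kg/s]."""
    return (mdot_fuel / mdot_ox) / FO_STOICH

# Split a total flow into fuel/ox streams for a target phi (illustrative)
mdot_tot, phi_target = 0.272, 1.15
mdot_fuel = mdot_tot * phi_target * FO_STOICH / (1.0 + phi_target * FO_STOICH)
mdot_ox = mdot_tot - mdot_fuel
print(f"fuel {mdot_fuel*1e3:.1f} g/s, ox {mdot_ox*1e3:.1f} g/s, "
      f"phi = {equivalence_ratio(mdot_fuel, mdot_ox):.2f}")
```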
Equivalent chamber pressure for the annulus of the rotating detonation rocket engine is measured through two capillary tube attenuated pressure (CTAP) static probes, the ports for which are located at 8.89 mm and 29.21 mm axially downstream from the injector face. The CTAP dimension, i.e., l/d ratio, is based on the work of Stevens et al. [16], and is high enough to filter the oscillatory pressure to provide an equivalent average chamber pressure. Three high-frequency pressure transducers are also used to measure pressure fluctuations within each plenum (2 ea.) and within the main chamber (1 ea.). The plenum pressure transducers are both PCB model 112A05 and are used to assess the degree of detonation-plenum coupling. The chamber sensor is a PCB model 123A, which uses both water-cooling and a helium bleed on its surface to increase survivability. This chamber sensor is flush mounted at the same axial position as the second CTAP sensor. All high-frequency pressure sensors are sampled at 500 kHz. Thrust measurements are also taken for each firing using a thrust stand with a 1112 N load cell. The RDRE test article on the thrust stand, as well as a side-view during a typical firing, can be seen in Figure 1a. Direct high-speed visible imaging into the annulus is used to observe and capture the traveling detonation waves. The high-speed camera (HSC) is a Phantom v2512 positioned 6.1 m downstream of the test article (see Figure 1c). The camera is located within a protective enclosure with a quartz window and is focused on the injection plane close to the detonation zone. A Nikon Reflex-Nikkor HN-27 lens is used in conjunction with the camera to allow the RDRE annulus to fill the entire image field-of-view. Images are captured at 200 kfps with a resolution of 256 × 256 pixels and exposure times between 1 and 3 µs. Convergent Nozzle Geometries In order to investigate the effects of various convergent nozzles for the full annular 76.2 mm length configuration, four geometries of varying contraction ratio ε c are tested: (1) straight annulus (ε c = 1.00), (2) ε c = 1.23, (3) ε c = 1.62, and (4) ε c = 2.40. These four geometries correspond to annular constriction widths of 5 mm, 3.98 mm, 3.00 mm and 2.01 mm, respectively. For each of these tests, a 15 degree conical spike is added to the end of the annular geometry to help with the effective expansion of the supersonic exhaust gases. In order to avoid flow separation at the exit interface of the annulus, the nozzle throat location is effectively moved upstream with increasing contraction ratio to maintain a 15 degree contour into the conical spike (see Figure 2a). This requires the center body of the RDRE to be modular in design with various physical throat additions (Figure 2b). Similarly, two reduced annular length geometries with l c = 38.1 mm are also tested: a straight center body design and one with the most constricted ε c = 2.40 nozzle. It should be noted that during the testing of these configurations, both the outer and center bodies have the same annular length, i.e., the center body is not recessed. As with the full annular length designs, the 15 degree conical spike is added for these shorter configurations, and the axial throat contour for the ε c = 2.40 nozzle is adjusted to maintain the 15 degree expansion contour past the throat. A summary of the dimensions for the convergent nozzles and their respective throat locations is presented in Table 1.
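To make the relation between the quoted contraction ratios and the annular constriction widths concrete, the ratios can be recovered from the channel geometry. The sketch below assumes (it is not stated explicitly above) that the outer wall is fixed and the constriction is formed entirely by the modular center-body additions, so the annular flow area is π·W·(D_o − W) for a channel width W and an outer diameter D_o ≈ 81.2 mm (76.2 mm mid-channel diameter plus the 5 mm width).

```python
import math

# Hypothetical reconstruction of the contraction ratios from the published
# channel widths, assuming a fixed outer wall (constriction via center body).
D_OUTER = 81.2e-3     # outer wall diameter [m]: 76.2 mm mid-channel + 5 mm width (assumed)
W_CHANNEL = 5.0e-3    # nominal annular channel width [m]

def annulus_area(width, d_outer=D_OUTER):
    """Flow area of an annulus with a fixed outer diameter and channel width."""
    return math.pi * width * (d_outer - width)

for w_throat in (5.00e-3, 3.98e-3, 3.00e-3, 2.01e-3):
    eps_c = annulus_area(W_CHANNEL) / annulus_area(w_throat)
    print(f"throat width {w_throat*1e3:.2f} mm -> eps_c = {eps_c:.2f}")
# prints roughly 1.00, 1.24, 1.62, 2.39 -- close to the quoted 1.23/1.62/2.40
```

The residual disagreement at the second decimal place suggests small differences between this idealized geometry and the machined hardware, but the center-body interpretation reproduces all three quoted ratios closely.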
Convergent Nozzle Study: Full Annular Length Geometries For each annular configuration, three sets of flow conditions are investigated for varying equivalence ratio φ and total propellant mass flow ṁ tot . The first set varies the equivalence ratio φ from ≈0.5 to 2.5 in 0.2-0.25 increments, while holding the total mass flow rate of the propellants constant at 0.272 kg/s. The other two condition sets fix the equivalence ratio at φ = 1.15 and 1.5, respectively, while varying ṁ tot from 0.2 to 0.680 kg/s. The flow condition test matrix for this study is seen in Figure 3. In the previous experimental work, hot-fire tests of the RDRE were 1250 ms in duration [5,13]. However, due to the inclusion of the temperature-sensitive high-frequency pressure transducer, the firing times for all of these tests are reduced to 750 ms to increase the survivability of the sensor. Figure 4 compares two test firings of 1250 ms and 750 ms for the same flow condition, φ = 1.15 and ṁ tot = 0.272 kg/s, showing the transient response of the test article pressures (left) and thrust (right) for the two firings. For both run durations, the upstream venturi and plenum pressures reach steady-state conditions throughout the entirety of the engine run time, whereas the oscillations present initially in the thrust measurement damp out during the last 100 ms of the firing. Therefore, the 100 ms duration bounded by the vertical dashed lines in Figure 4 is used for reporting the average run measurements. As the average test measurements captured during the 750 ms run are equivalent to those of the 1250 ms firing without significant added noise, this validates the short-duration firing time. Measurement uncertainty for these reduced duration firings is calculated using the procedure described in Lightfoot et al. [17]. This uncertainty is presented as error bars on all of the subsequent test run measurement plots, and overall it did not increase notably from the previous extended firing results with 1250 ms run duration [5]. Engine Operability and Performance Global performance measurements for the equivalence ratio set of flow conditions at 0.272 kg/s total flow rate are shown in Figure 5, which compares the various full l c = 76.2 mm annular length convergent nozzle geometries. In general, detonation is achieved across a majority of the equivalence ratio conditions ranging from φ = 0.5 to 2.0 for each convergent nozzle. From φ = 2.0 to 2.5, successful operation is observed only for ε c = 1.62 and 2.40, likely a result of modified annular flow conditions (i.e., chamber gas accumulation) conducive to engine ignition due to the physical throat addition. Maximum performance occurs from φ ≈ 1.15-1.5, and this range is not altered with increasing nozzle contraction ratio. These flow conditions correspond to a maximum thrust F of 556 N and specific impulse I s of 225 s for the most constricted ε c = 2.40 configuration. In addition, both performance parameters display a linear increase with increasing physical throat constriction for a specific flow condition; the ε c = 1.23, 1.62 and 2.40 nozzles show an overall increase of 8%, 15% and 28% from the straight annulus design, respectively. This linear relationship for these performance parameters is shown in greater detail in Figure 5c,d, where the average percent increase for thrust and specific impulse is plotted as a function of contraction ratio for the investigated equivalence ratio flow conditions.
In these calculations, Chauvenet's rejection criterion [18] is implemented to remove outlying data, and the Student's t-distribution for small sample sizes [19] is used to quantify the statistical uncertainties associated with the F and I s percent increases. As can be seen in this figure, the uncertainty for both the F and I s percent increases is fairly small across the varying contraction ratio nozzles, with the ε c = 2.40 nozzle having the largest error of approximately ±3%. Nevertheless, the linear trend with increasing contraction ratio for both performance parameters is clear. RDRE performance sensitivity to increasing total propellant flow rate for constant equivalence ratios of 1.15 and 1.5 is shown in Figure 6. As with the runs over the range of equivalence ratios, successful detonation is achieved for the entire flow range from ṁ tot = 0.091 to 0.453 kg/s across all nozzle geometries. One notable observation is that tests from 0.453 to 0.680 kg/s are only shown for the straight annulus geometry and ε c = 1.23. This is due to nozzle erosion occurring during tests exceeding 0.454 kg/s for the ε c = 1.62 and 2.40 nozzles, so those results are omitted. Measured thrust increases linearly across the whole range for each configuration, reaching a maximum of 1334 N for ε c = 1.23 at ṁ tot ≈ 0.680 kg/s. It should be noted that while the load cell used is rated up to 1112 N, a small number of tests exceed this rating. As there is no alteration in the linear trend observed for the thrust measurements between 1110 and 1335 N, these points can be considered reasonable but most likely have greater measurement uncertainty. The specific impulse rapidly increases with increasing flow rate for all geometries until 0.340 kg/s, where it continues to increase but at a lower rate. At this point, a maximum specific impulse of 250 s is achieved for ε c = 2.40 at φ = 1.5 and ṁ tot = 0.454 kg/s. As with the equivalence ratio flow condition set (Figure 5), at a given flow rate, the performance increases linearly with increasing constriction at the throat. Finally, the performance trends observed for both equivalence ratios are consistent with one another across the investigated flow rate range. The fuel and oxidizer pressure drops (Figure 7) for the injector pair are calculated using the difference between the respective plenum pressures and the equivalent average chamber pressure from CTAP1 (located closest to the detonation region). In general, the fuel pressure drop ranges from 780 to 2850 kPa for the equivalence ratio conditions at ṁ tot = 0.272 kg/s (Figure 5) and 1470 to 4230 kPa for the propellant flow rate conditions at φ = 1.15 (Figure 6). Similarly, the oxidizer pressure drop ranges from 1125 to 2160 kPa and 780 to 3540 kPa for the two flow condition sets, respectively. As the chamber pressure increases with increasing constriction at the throat, the injector pressure drop decreases. Therefore, the pressure drops for the ε c = 1.23, 1.62 and 2.40 nozzles are on average 4%, 13%, and 29% less than those for the straight annulus at similar flow conditions, but the injectors still remain choked. Equivalent average chamber pressure measurements are taken at two axial locations within the annulus. CTAP1 is located at 8.89 mm from the injection plane, whereas CTAP2 is further downstream at 29.21 mm (see Figure 8d). The average pressures measured at the CTAP1 location for the three sets of flow conditions are shown in Figure 8a-c.
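As a concrete illustration of the statistical treatment described at the start of this section, the following sketch applies Chauvenet's rejection criterion and a Student's t confidence interval to a small sample of percent-increase measurements. The sample values and the 95% confidence level are illustrative assumptions, not data from this study.

```python
import numpy as np
from scipy import stats

def chauvenet_filter(x):
    """Drop points whose two-sided normal tail probability implies fewer than
    half an expected occurrence in a sample of size n (Chauvenet's criterion)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    z = np.abs(x - x.mean()) / x.std(ddof=1)
    expected = n * 2.0 * stats.norm.sf(z)   # expected count of points this extreme
    return x[expected >= 0.5]

def t_interval(x, confidence=0.95):
    """Mean and half-width of a Student's t confidence interval."""
    x = np.asarray(x, dtype=float)
    sem = stats.sem(x)
    half = sem * stats.t.ppf(0.5 * (1 + confidence), df=len(x) - 1)
    return x.mean(), half

# Illustrative thrust percent-increase sample (made up for this example)
sample = [26.1, 27.4, 28.0, 26.8, 27.2, 35.0]   # last point is an outlier
clean = chauvenet_filter(sample)                 # rejects the 35.0 point
mean, half = t_interval(clean)
print(f"mean increase = {mean:.1f}% +/- {half:.1f}%")
```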
For each of the nozzle geometries, the CTAP measurements closely correlate with the trends observed for the thrust measurements. Therefore, maximum pressure is reached from φ = 1.15 to 1.5, and pressure increases linearly with total propellant mass flow. For maximum performance at 0.272 kg/s, CTAP1 approaches 988 kPa for the most constricted ε c = 2.40 nozzle geometry, and increases to 1815 kPa at 0.453 kg/s. Overall, there is a greater than 3× average static chamber pressure increase for the most constricted nozzle geometry compared to the straight annular geometry, as well as an approximate 2× increase for the ε c = 1.62 nozzle. To further analyze the effect of the various nozzle geometries on overall engine performance, it is instructive to compare the measured RDRE thrust to the theoretical thrust of an equivalent constant-pressure rocket engine with the same throat area and flow rate. The iterative approach for these comparisons is outlined in the work of Stechmann [12], and uses NASA's Chemical Equilibrium with Applications (CEA) code [20]. In summary, this approach uses the measured flow rates, CTAP1 pressure and nozzle dimensions for a single set of input conditions to calculate the characteristic velocity c* of the ideal constant-pressure device. With this complete, the theoretical chamber pressure is then calculated using p c,th = c* ṁ tot /A t . Then c* is updated as necessary using the revised theoretical chamber pressure. Once this iterative loop converges, the measured RDRE thrust is compared to the ideal thrust F th calculated from the theoretical chamber pressure using F th = C F p c,th A t , where C F is the thrust coefficient and A t is the cross-sectional throat area. Results from this analysis for all flow conditions (see Figure 9) show that the generated thrust typically ranges from F/F th = 80-95%, which for the higher performing cases is comparable to state-of-the-art conventional thrusters that typically operate around F/F th = 90-95% [21]. Most notably, although the measured thrust is 27% higher on average for the most constricted nozzle, there is no appreciable increase when compared to the ideal thrust. In fact, for a given flow condition, there is no change among any of the nozzle geometries. This is likely because the benefit of increased chamber pressure with increasing throat constriction is accompanied by a complication/break-down of the local detonation structure due to longitudinal wave reflections emanating from the throat. Further analysis providing some physical insight into contributing factors that can lead to this break-down is given in the injector recovery analysis section of this manuscript (Section 5). Detonation Mode Characteristics The average number of waves m and wave speed U wv are determined using the image processing method reported previously [22]. This process entails automated analysis of the high-speed video images to track the integrated pixel intensity within 360 single-degree azimuthal bins around the annulus. This creates a detonation surface, which illustrates the propagation of all traveling detonations. A two-dimensional fast Fourier transform of the detonation surface data is then used to automatically extract both the number of waves m and the associated operational frequency f det . Combining these two parameters, the accompanying wave speed is determined using U wv = π d f det /|m|, where d is the midchannel diameter of the annulus.
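The iterative ideal-thrust comparison described above reduces to a simple fixed-point loop, sketched below. The CEA call is abstracted behind a placeholder function cea_cstar (hypothetical; in practice this would wrap NASA CEA or a library such as rocketcea at a fixed mixture ratio), and the convergence tolerance and all numeric inputs are assumptions for illustration.

```python
def ideal_thrust(mdot, p_ctap, A_t, C_F, cea_cstar, tol=1e-3, max_iter=50):
    """Iterate the theoretical chamber pressure of an equivalent
    constant-pressure engine, p_c_th = c* * mdot / A_t, re-evaluating c*
    at the updated pressure until the loop converges; return F_th."""
    p_c = p_ctap                      # start from the measured CTAP pressure
    for _ in range(max_iter):
        c_star = cea_cstar(p_c)       # characteristic velocity [m/s] at p_c
        p_new = c_star * mdot / A_t
        if abs(p_new - p_c) / p_c < tol:
            p_c = p_new
            break
        p_c = p_new
    return C_F * p_c * A_t            # F_th = C_F * p_c,th * A_t

# Illustrative use with a crude constant-c* stand-in (all values are assumptions;
# A_t roughly matches the most constricted 2.01 mm wide annular throat)
F_th = ideal_thrust(mdot=0.272, p_ctap=988e3, A_t=1.6e-4, C_F=1.4,
                    cea_cstar=lambda p_c: 1800.0)
print(f"F/F_th for a 556 N measurement: {556.0 / F_th:.2f}")  # ~0.81
```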
This process has been shown to adequately extract modal properties throughout periods of steady-state propagation [5,13], as well as during transition events and counter-propagating phenomena [14,22]. Example data for a case exhibiting counter-propagating behavior with the ε c = 2.40 nozzle (see Figure 10) show that there is complex wave motion in both the clockwise (CW) and counterclockwise (CCW) directions, as evidenced by the image sequence (Figure 10a) and detonation surface (Figure 10b). Nevertheless, the image processing technique is robust enough to separate both sets of opposing waves, which shows 8 CW waves as the dominant set traveling at 1280 m/s and 9 CCW opposing waves moving at 1200 m/s. As with the hot-fire measurements, the mode properties are averaged during the last 100 ms (bounded by the vertical dashed lines) for the reported results. Average modal properties for the tests that vary equivalence ratio at constant flow rate show that the average number of waves ranges from 4 to 10 and corresponds to wave speeds of U wv ≈ 1000-1700 m/s. For a given flow condition, the total number of waves consistently increases by m = 1-2 with each increase in throat constriction across the entire equivalence ratio range (see Figure 11b). This increase in the number of waves is associated with a decrease in wave speed (see Figure 11c), as has been observed throughout the literature [1,2,5,6,12,23]. In addition, the increase in throat constriction from ε c = 1.23 to 2.40 corresponds with a substantial increase in counter-propagating behavior (denoted by a dedicated marker in Figure 11), as all of the ε c = 2.40 tests exhibit this phenomenon. Comparing wave speeds from these tests to the theoretical ideal Chapman-Jouguet detonation velocities U CJ calculated using CEA [24], the relative wave speeds range from 50 to 70% (Figure 11a). Generally, the highest achieved wave speeds are for the straight annulus cases, whereas the most constricted nozzle has the lowest wave speeds at approximately 50% of ideal, which are generally insensitive to the flow condition, i.e., U wv remains constant with changing φ. Furthermore, when comparing the average sound speed c of the CH 4 /O 2 combustion products to the theoretical Chapman-Jouguet detonation velocity for these flow conditions, c/U CJ is between 50 and 54%. Therefore, these cases with observed counter-propagating behavior are weaker operating modes and may be influenced by thermoacoustics. Specifically, it is possible that the counter-propagating behavior exhibited for the ε c = 2.40 nozzle causes detonation decoupling between the leading shocks and reaction zones of the traveling waves, due to a combination of the fluctuation of the incoming propellant flows imposed by passing waves and longitudinal reflected shock waves local to the injection plane. This breakdown process could then cause the mode to be more thermoacoustic in nature than purely detonative, although non-ideal detonation propagation (i.e., lower strength detonation under the Chapman-Jouguet limit) for non-premixed injection has recently been shown in a complementary high-fidelity modeling effort by Lietz et al. [25]. Finally, the wave speed sensitivity to specific impulse (Figure 11d) for the various nozzle geometries demonstrates two separate trends.
The first trend involves the straight annulus and the ε c = 1.23 and 1.62 nozzles, which show a relationship similar to the equivalence ratio sensitivity; the wave speeds on the higher and lower ends are approximately the same, with the highest specific impulse at the center of that range. The most constricted nozzle, however, exhibits a linear trend wherein the highest wave speeds (albeit lower than in the other geometric cases) correspond to the highest specific impulse. Example temporal histories taken from three tests in Figure 11, measured using the high-frequency pressure transducer flush mounted in the annulus, show distinct responses depending on the type of detonation behavior present. A corotating mode only has waves traveling in a single direction, and generally exhibits well-defined steep-fronted pressure traces at high amplitude. Steep-fronted non-linear waves are shown for a 4 wave corotating case captured using the straight annulus (see Figure 12a), which cause the presence of higher harmonics within the frequency spectra. For this case, the operational frequency f det of the mode is measured to be 26.4 kHz from the high-speed images, which, as expected, matches the maximum peak located at 26.4 kHz in the pressure transducer frequency spectra. A counter-propagating case with the straight annulus again shows steep-fronted behavior (see Figure 12b), but at a lower amplitude than the corotating mode case. Aside from this, the full width at half-maximum (FWHM) of the fundamental mode (see Figure 12b) spans a broader range of frequencies than in the corotating case; this is indicative of the detonation process being less well defined than the corotating mode. Nevertheless, the operational frequency captured in the high-speed images and the fundamental mode taken from the high-frequency pressure sensor both measure 31.7 kHz. Finally, the temporal history for a counter-propagating mode exhibited in the most constricted ε c = 2.40 nozzle has a steep-fronted, but more complicated response, as shown in Figure 12c. The accompanying frequency spectra show multiple integer peaks that are related to the operational frequency at 45.9 kHz. It is also noted that there are bifurcated peaks at the fundamental and first harmonic frequencies, which indicate the secondary set of waves that exists due to the counter-propagating mode behavior. A summary of the modal properties for the range of mass flow rate conditions at φ = 1.15 generally shows trends (see Figure 13) similar to the previous conditions of Figure 11. As with the equivalence ratio tests, the number of waves increases by m = 1-2 waves for a given flow condition with increasing throat constriction (see Figure 13b), and this increase in the number of waves is again accompanied by a decrease in wave speed (Figure 13c). In addition, increasing the total mass flow rate causes an increase in the total number of waves, consistent with the work of Bykovskii and Zhdan [26]. However, the modal wave speeds captured throughout the ṁ tot range are fairly constant for a given annular geometry, with the straight annulus having the highest U wv /U CJ between 65 and 70% (see Figure 13a) and the most constricted ε c = 2.40 nozzle at a value of approximately 50%.
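The frequency-domain identification used for these spectra can be sketched as follows: an FFT of the chamber pressure trace locates the fundamental operational frequency f det, and the wave speed follows from U wv = π d f det /|m|. The sampling rate matches the 500 kHz quoted earlier; the synthetic steep-fronted pressure signal and the assumed wave count are illustrative.

```python
import numpy as np

FS = 500e3                      # sample rate [Hz], matching the PCB sensors
D_MID = 76.2e-3                 # mid-channel diameter [m]

def operational_frequency(p, fs=FS):
    """Return the frequency of the dominant spectral peak of a pressure trace."""
    p = np.asarray(p, dtype=float) - np.mean(p)
    spec = np.abs(np.fft.rfft(p))
    freqs = np.fft.rfftfreq(len(p), d=1.0 / fs)
    return freqs[np.argmax(spec)]

# Synthetic 4-wave corotating trace at 26.4 kHz (steep fronts via harmonics)
t = np.arange(0, 0.01, 1.0 / FS)
p = sum((1.0 / k) * np.sin(2 * np.pi * 26.4e3 * k * t) for k in range(1, 6))

f_det = operational_frequency(p)
m = 4                                         # wave count, e.g., from imaging
U_wv = np.pi * D_MID * f_det / abs(m)
print(f"f_det = {f_det/1e3:.1f} kHz, U_wv = {U_wv:.0f} m/s")  # ~1580 m/s
```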
Therefore, it appears that the overall modal wave speed is much more sensitive to equivalence ratio than to incoming propellant mass flux, providing another indication that efficient mixing through injection (e.g., local equivalence ratio) is a key parameter for increasing local detonation performance, i.e., wave speed and detonation front coupling [13]. Finally, the insensitivity of the wave speed to global performance for these annular geometries with increasing ṁ tot is evident in Figure 13d, as each annular design shows a vertical line with near constant U wv for increasing specific impulse. Engine Operability and Performance To investigate the effect of chamber length on the performance, operability and detonation characteristics of the engine, the same flow condition matrix as shown in Figure 3 was completed for the straight annulus and ε c = 2.40 nozzle configurations with the annular length reduced to half of the original, i.e., l c = 38.1 mm; these results are then compared against their complementary full-length geometries to distinguish any apparent differences. For the shortened geometries, detonation is successfully achieved across the entire flow condition matrix, for both the equivalence ratio range of φ = 0.5-2.5 and ṁ tot = 0.091-0.680 kg/s, indicating that there is no appreciable drop-off in engine operability for the shorter chamber length. Global engine performance for the range of equivalence ratio conditions shows that there is no significant reduction for either reduced length geometry across the whole range (see Figure 14). In fact, a few conditions at peak performance near φ = 1.15 show a 6% increase for the reduced length ε c = 2.40 configuration over the l c = 76.2 mm equivalent. This is one indication that the shortened annular length geometry may be closer to the optimal combustor length for effective RDRE operation using gaseous propellants. The thrust and specific impulse for the two total mass flow rate condition sets demonstrate similar trends, where no significant decreases are noted for either the l c = 38.1 mm straight annulus or the ε c = 2.40 nozzle design (see Figure 15). As seen previously for the full annular length results, the two l c = 38.1 mm geometries both show a linear increase in thrust from ṁ tot = 0.091-0.680 kg/s with the same respective slopes as their l c = 76.2 mm equivalents. Furthermore, specific impulse similarly increases with total mass flow rate for the shortened straight annulus and ε c = 2.40 nozzle, where I s begins to increase at a slower rate beginning at ṁ tot = 0.340 kg/s. Finally, for a majority of the l c = 38.1 mm, ε c = 2.40 nozzle tests, there appears to be a small but notable increase in specific impulse over the l c = 76.2 mm, ε c = 2.40 nozzle tests. For the shortened geometries, only one CTAP sensor is present within the chamber, at the same axial location as the sensor closest to the detonation zone for the l c = 76.2 mm configurations, i.e., 8.89 mm. In general, CTAP1 measurements for both straight annulus length designs consistently show the same pressure levels for all investigated flow conditions, which again correlate well with global performance (see Figure 16). However, the CTAP1 pressure for the l c = 38.1 mm, ε c = 2.40 nozzle is notably higher than for the l c = 76.2 mm design for all flow conditions.
On average, this increase is approximately 13%, which provides one indication that the high pressure zone associated with detonation is pushed slightly downstream from the injection plane for the l c = 38.1 mm configuration. To assess the effect of the reduction in chamber length on performance efficiency, a comparison similar to the aforementioned theoretical thrust comparison is carried out for both of the l c = 38.1 mm configurations. As shown in Figure 17, there is no significant decrease in F/F th for the shortened straight annulus across the investigated flow conditions compared to the l c = 76.2 mm straight annulus. The shortened ε c = 2.40 nozzle actually provides the maximum theoretical efficiency of all geometries at the equivalence ratios around peak performance (φ = 1.0-1.15), which is F/F th ≈ 90%. Therefore, this indicates that there are potential advantages to axially shortening these annular configurations to make RDEs more compact. Specifically, this result indicates that a reduction in annular length may lead to higher performance RDREs. Further studies to identify the limits of this trend are warranted. Detonation Mode Characteristics Average modal properties of the l c = 38.1 mm designs for the equivalence ratio conditions show that the shortened axial chamber length plays a large role (see Figure 18). Specifically, the number of waves observed using the shortened straight annulus is generally reduced by m = 1 wave compared to the longer straight annulus for φ = 0.5-2.0. This reduction is accompanied by an increase in wave speed, where a maximum wave speed is observed for the shortened straight annulus at φ = 1, which corresponds to U wv /U CJ = 70% and m = 3 waves. As with the straight annular geometries, counter-propagating behavior is only observed for significantly off-stoichiometric conditions, i.e., primarily lean conditions, but also occasionally very fuel-rich ones. Unlike the full-length annulus with the ε c = 2.40 nozzle, counter-propagating behavior is not regularly observed for the shortened annulus with a similar nozzle. This may be attributed to the change in detonation-injection coupling due to the physical throat location being shifted much closer to the injection plane for the l c = 38.1 mm design. If this longitudinal coupling is thermoacoustic in nature, as suggested by Paxson and Schwer [9], altering the axial wave reflection plane can spatially shift the location of maximum heat release, which will either drive or damp the instability depending on its phasing with the oscillatory pressure and its spatial location. This in turn will alter the amount of propellant feed modulation present, affecting the mixing uniformity of the reactant fill zone, which directly influences the number of waves observed and the prevalence of counter-propagating behavior [25]. It should be noted that although the shortened ε c = 2.40 nozzle geometry does not primarily exhibit counter-propagating waves, the number of waves is generally the highest for a given flow condition, mostly between m = 9-10 waves. This causes the wave speeds to be close to those of the l c = 76.2 mm, ε c = 2.40 nozzle, approximating U wv /U CJ = 50%. This again suggests the possibility that these propagating modes are influenced by longitudinal thermoacoustic fluctuations, as the sound speed of the CH 4 /O 2 combustion products falls within this range.
Finally, there appears to be only a weak correlation between performance and wave speed for the shortened ε c = 2.40 nozzle, with performance increasing somewhat linearly with wave speed (see Figure 18d). As with the equivalence ratio conditions of Figure 18, the two mass flow rate test sets exhibit similar modal property trends. As shown in Figure 19, the shortened straight annulus typically excites modes with wave speeds that are on the same order as those of the full-length configuration for a given flow condition. The shorter l c = 38.1 mm, ε c = 2.40 nozzle geometry again does not exhibit sustained counter-propagating behavior aside from the extremely low flow rate case at ṁ tot < 0.136 kg/s. As with the equivalence ratio test set, the shortened ε c = 2.40 nozzle typically has the highest number of waves, again typically m = 9 waves; this corresponds to observed wave speeds that approximate U wv /U CJ = 50%. Nevertheless, the combination of slightly higher performance for the shortened geometries along with the reduction in counter-propagating behavior suggests that these promising compact axial designs should be further studied and optimized. Injector Recovery Analysis To provide more insight into the physical phenomena that can cause the detonation structure to break down due to wave reflections in the vicinity of the injectors (e.g., transverse wave reflections emanating from injection orifices or longitudinal reflections from a physical throat), an idealized analysis is performed to illustrate how the mass flow rate and equivalence ratio fluctuate due to chamber pressure oscillations local to the injection plane. This analysis uses the inlet flow conditions of a straight annulus geometry firing at φ = 1.07 and ṁ tot = 0.263 kg/s; this test corresponds to m = 4 waves moving at approximately 60% of theoretical, i.e., a high performing case for the equivalence ratio conditions. A summary of these inlet conditions is presented in Table 2, which shows injector plenum feed pressures of approximately 2000-2200 kPa with an average chamber pressure of 334 kPa. As mentioned earlier, the injectors normally operate at choked conditions. Under the choked condition, the mass flow rate is only a function of the upstream pressure and does not change due to alterations of the pressure downstream of the orifice (as long as the choked condition persists). As such, the mass flow rate for gaseous choked flow through an injector orifice can be written as [27,28]

ṁ g = C d A inj √( γ p pln ρ pln (2/(γ+1))^((γ+1)/(γ−1)) ) (1)

where C d is the orifice discharge coefficient, A inj is the injector orifice cross-sectional area, γ is the specific heat ratio, and p pln and ρ pln are the upstream plenum pressure and density, respectively. From classical compressible flow [29], choked flow will persist as long as the orifice pressure ratio p c /p pln remains below a value defined as the critical pressure ratio, p crit , which is given by

p crit = (2/(γ+1))^(γ/(γ−1)) (2)

For the fuel and oxidizer plenum conditions of the baseline test, the critical pressure ratios are p crit = 0.53 and 0.52, respectively. However, when a wave passes over a given injection site, the flow can momentarily become unchoked due to the locally high pressure associated with the detonation. Under unchoked conditions, the mass flow rate is affected by downstream pressure oscillations and can even result in a momentary flow reversal condition if the downstream pressure becomes sufficiently large.
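A short numeric check of these relations is sketched below: it computes the critical pressure ratios and the choked mass flow for plenum conditions in the range quoted above. The discharge coefficient, orifice area, and plenum temperature are illustrative assumptions, not values from Table 2.

```python
import math

def p_crit_ratio(gamma):
    """Critical (choking) downstream/upstream pressure ratio, Equation (2)."""
    return (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))

def mdot_choked(Cd, A_inj, gamma, p_pln, rho_pln):
    """Choked orifice mass flow from plenum stagnation conditions, Equation (1)."""
    return Cd * A_inj * math.sqrt(
        gamma * p_pln * rho_pln
        * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (gamma - 1.0)))

# gamma ~ 1.31 (CH4) and 1.40 (O2) give p_crit ~ 0.54 and 0.53,
# close to the 0.53 and 0.52 quoted above
for name, gamma in (("CH4", 1.31), ("O2", 1.40)):
    print(f"{name}: p_crit = {p_crit_ratio(gamma):.2f}")

# Illustrative choked flow for one fuel orifice (assumed Cd, area, temperature)
R_CH4, T_pln, p_pln = 518.3, 280.0, 2.0e6        # J/(kg K), K, Pa
rho_pln = p_pln / (R_CH4 * T_pln)
print(f"mdot = {mdot_choked(0.8, 2.0e-7, 1.31, p_pln, rho_pln)*1e3:.2f} g/s")
```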
The mass flow rate for unchoked gaseous propellant flow can be estimated using [28]

ṁ g = C d A inj √( (2γ/(γ−1)) p pln ρ pln [ (p c /p pln )^(2/γ) − (p c /p pln )^((γ+1)/γ) ] ) (3)

where a flow reversal event is denoted by p c /p pln > 1, i.e., the chamber pressure momentarily exceeding the plenum pressure. To model the periodic pressure and temperature cycles associated with detonation wave passage, synthetic profiles of detonation waveforms ranging from low amplitude weak detonation to high amplitude detonation at the theoretical Chapman-Jouguet limit, shown in Figure 20, are used. All of the simulated waveforms are steep-fronted in nature, even for the low amplitude cases, as the objective of this analysis is primarily to isolate the effects of oscillatory pressure amplitude on the local flow rate. Therefore, four cases are considered in which the pressure amplitude rise across the detonation varies from p rise = 5-30. Using NASA's CEA code for a CH 4 /O 2 detonation at φ = 1.07 and the respective plenum conditions, the theoretical Chapman-Jouguet pressure ratio is also p rise ≈ 30 and the accompanying temperature ratio is T rise ≈ 13; the ratio between the pressure and temperature rise ratios is held constant for all four of the investigated test cases. These test cases are summarized in Table 3. Using the synthetic data generated for the four sets of test conditions, mass flow rates for individual injector orifices of the fuel and oxidizer are calculated using a combination of Equations (1) and (3), depending on the flow choke condition. The orifice pressure ratio is found at every instant throughout the temporal cycle and is used to determine whether the choked or unchoked condition applies. If a backflow event occurs, the mass flow rate is found using the same equations but with the respective upstream and downstream conditions switched as necessary. The lowest amplitude detonation case, p rise = 5, does not have a high enough pressure rise to unchoke the flow. This can be seen in Figure 21a, where the mass flow rates of the fuel and oxidizer injectors are constant throughout the time period. This also corresponds to the expected, constant equivalence ratio of φ = 1.07. While it may appear desirable to operate under choked conditions at all times, this counteracts one of the benefits of a detonation-based propulsion system, in which the injection pressures can theoretically be significantly reduced from traditional designs [5]. Furthermore, choked flow is difficult to maintain completely during operation at high detonation amplitudes. Therefore, the flow response observed in the cases above p rise = 10 better illustrates the effect of sharp pressure rises on fuel and oxidizer flow rates, and thus on the local equivalence ratio in an RDRE. In the p rise = 10 case (see Figure 21b), periodic unchoking (wave passage) events (highlighted in yellow) occur during the peak rise events. This results in a reduction of both the fuel and oxidizer flow rates from their nominal values. These flow rates eventually recover when periods of choked flow resume (highlighted in cyan), without any flow reversal present. It should be noted that the average pressure associated with the synthetic data for this case is p c,avg = 356 kPa, which is close to the baseline experiment pressure of p CTAP = 334 kPa. Therefore, this case is the most analogous to the experimental conditions reported above. Under this periodic choking and unchoking process, the local equivalence ratio for the considered injector pair oscillates up to 25% higher than the desired condition and ranges from φ = 1.07-1.34.
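A minimal sketch of the switching logic just described is given below: at each instant the orifice pressure ratio selects the choked or unchoked relation, with the upstream and downstream states swapped during backflow. The square-wave pressure profile, the isothermal chamber-density guess, and all numeric values are illustrative assumptions, not the paper's synthetic waveforms.

```python
import math

def mdot_choked(Cd, A, g, p_up, rho_up):
    # Equation (1): choked orifice mass flow from upstream conditions
    return Cd * A * math.sqrt(g * p_up * rho_up
                              * (2.0 / (g + 1.0)) ** ((g + 1.0) / (g - 1.0)))

def mdot_unchoked(Cd, A, g, p_up, rho_up, p_down):
    # Equation (3): subsonic orifice mass flow, sensitive to downstream pressure
    r = p_down / p_up
    return Cd * A * math.sqrt(2.0 * g / (g - 1.0) * p_up * rho_up
                              * (r ** (2.0 / g) - r ** ((g + 1.0) / g)))

def orifice_response(p_chamber, Cd, A, g, p_pln, rho_pln):
    """Signed orifice mass flow for one instant: choked, unchoked, or reversed."""
    p_crit = (2.0 / (g + 1.0)) ** (g / (g - 1.0))
    if p_chamber / p_pln <= p_crit:
        return mdot_choked(Cd, A, g, p_pln, rho_pln)
    if p_chamber <= p_pln:
        return mdot_unchoked(Cd, A, g, p_pln, rho_pln, p_chamber)
    # Flow reversal: swap upstream/downstream; crude isothermal density guess
    rho_ch = rho_pln * (p_chamber / p_pln)          # assumption for illustration
    if p_pln / p_chamber <= p_crit:
        return -mdot_choked(Cd, A, g, p_chamber, rho_ch)
    return -mdot_unchoked(Cd, A, g, p_chamber, rho_ch, p_pln)

# Illustrative chamber pressure: 334 kPa baseline and a p_rise = 10 spike
for p_c in (334e3, 10 * 334e3):
    m = orifice_response(p_c, Cd=0.8, A=2.0e-7, g=1.31, p_pln=2.0e6, rho_pln=13.8)
    print(f"p_c = {p_c/1e3:7.0f} kPa -> mdot = {m*1e3:+.2f} g/s")
```

Running this prints a positive choked flow at the baseline pressure and a negative (reversed) flow during the spike, mirroring the backflow events described for the higher p rise cases.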
The p rise = 10 case thus illustrates that there are inherent local flow rate fluctuations present during engine operation, with accompanying variations in equivalence ratio. The two highest pressure rise cases, p rise = 20 and 30, both produce flow reversal events to varying degrees (see Figure 22). The p rise = 20 case has a backflow event that is shorter in duration than that of the highest case. During the flow reversal events (shown in magenta and red for unchoked and choked flows, respectively), the flow rates of both the fuel and oxidizer sharply decay to a minimum, and then recover after the wave passes. Similar behavior is exhibited for the Chapman-Jouguet condition detonation (i.e., the p rise = 30 case), except that the flow rate oscillations are higher in amplitude and the recovery time is longer in duration. The corresponding equivalence ratio fluctuations span a large range, from φ = 0 (i.e., no reactants present due to flow reversal) to 1.35 for the p rise = 20 case and up to φ = 1.50 for the maximum. Again, this shows that local equivalence ratio fluctuations can become very significant during high-amplitude detonation propagation. As this idealized analysis demonstrates the potential for large amplitude φ oscillations in the reactant fill region due to the passing detonations, there must be sufficient recovery time for the injector flows both to regain their expected flow rates and to fully mix prior to the next wave passage event. This recovery event is crucial to effectively create a uniform reactant fill zone for the detonation to propagate through for high detonative performance, as injector geometries with intentionally poor mixing [13] have consistently demonstrated a breakdown in well-defined detonation mode structure in favor of more complicated, less periodic, counter-propagating behavior. While the passing detonations create the need for injection recovery, it is possible that the longer constricted annulus cases have traveling longitudinal wave reflections emanating from the area constriction back to the fill region that disrupt the incoming injector flow rates during this recovery time, further striating the reactant fill zone prior to the next wave arrival. This flow rate modulation, combined with the reflected longitudinal pressure waves (also having an azimuthal component) facing less resistance to reflect directly off an injection site due to the unchoked condition [30], provides a basis for opposing wave motion caused by a combination of continuous wave reflections and decoupling of the traveling detonations. This is consistent with the trends observed in detonation wave dynamics for increasing physical constriction for the l c = 76.2 mm geometry, where counter-propagating behavior is more frequent and higher in severity, and lower wave speeds are observed. For the l c = 38.1 mm case, it is possible that these wave reflections reach the fill zone at a point within the cycle that makes it easier for the injectors to recover, or are of significantly lower amplitude because a longitudinal resonance is not excited due to the shorter chamber length. Nevertheless, this analysis illustrates how high amplitude wave reflections arriving at undesirable times during the recovery period can drastically affect detonation propagation.
Injection Recovery: Modeling and Simulation In order to further illustrate the injection recovery process and its impact on the chamber wave dynamics, high-fidelity large-eddy simulations (LES) of the RDRE geometry have been performed using AHFM (ALREST High-Fidelity Modeling). The AHFM code is an extension of the Large Eddy Simulation with Linear Eddy (LESLIE) code [31], which has been previously validated for a number of turbulent combustion applications, including highly oscillatory flow fields with combustion instabilities and detonations [32][33][34][35][36]. These fully three-dimensional simulations incorporate second-order MacCormack schemes to advance the full reactive Navier-Stokes equations both temporally and spatially. Reaction chemistry is modeled using the FFCMy-12 mechanism, a 12 species/38 reaction reduced methane-oxygen mechanism tuned for high pressure combustion. The complexity of RDRE physics makes it challenging to comprehensively validate every scalar field tracked by the simulation. Nevertheless, this AHFM setup has previously been shown to adequately predict chamber pressures and detonation mode parameters, such as the number of detonation waves and the wave speeds, for a variety of flow conditions compared to experiments [37,38]. However, it should be noted that the code does exhibit the standard simulation overprediction of engine performance metrics, such as thrust and specific impulse [37,38]. The two simulations used for the current analysis are also the subject of another recent study [39], which focuses on experimental comparison and further simulation validation for these specific cases. The simulated domain follows the experiment reported above, including distinct reactant manifolds, 72 discrete injector pairs, the combustion chamber, and an outflow plenum extending several chamber lengths downstream of the engine exhaust. Of these, the critical region of interest is the mixing zone, fully encompassing the injector plumes and the traveling detonations. Grid spacing in this annular region ranges from 50 to 60 µm, relaxing to approximately 300 µm further downstream. This yields a total cell count of 140 million hexahedral cells within AHFM's block-structured system. While this spacing under-resolves boundary-layer effects, it does ensure that the idealized one-dimensional length of the induction zone behind the leading shock wave is sufficiently resolved with 4-5 points. Additionally, due to the non-premixedness of the reactants in the RDRE, this critical detonation length-scale is further broadened [40]. These specific meshes have been adequately assessed in a prior work [39], and are consistent with other similar numerical studies [41,42]. Simulations for two full-length (l c = 76.2 mm) chamber geometries are performed, one matching the most constricted nozzle configuration (ε c = 2.40), and the other matching the straight annulus. These two geometries are selected to show the differences in the injection recovery process between corotating detonation mode propagation (i.e., corresponding to the unconstricted straight annulus) and counter-propagating behavior (i.e., the ε c = 2.40 nozzle geometry). Flow conditions for both geometries are set to match cases at the intersection of the test matrix cross (Figure 3), corresponding to a total mass flow rate of 0.27 kg/s and an equivalence ratio of φ = 1.1. One benefit of these large-eddy simulations is their ability to develop the steady-state wave dynamics of a detonation mode naturally, without any imposition of the number of waves.
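The grid-resolution requirement quoted above can be expressed as a quick check: for a target of 4-5 cells across the one-dimensional induction length, the allowable grid spacing follows directly. The induction length value used below is an illustrative assumption for CH4/O2 at elevated pressure, not a number taken from this study.

```python
def cells_across(induction_length, dx):
    """Number of grid cells resolving the 1-D induction zone."""
    return induction_length / dx

L_IND = 250e-6                       # [m], illustrative CH4/O2 induction length
for dx in (50e-6, 60e-6, 300e-6):    # grid spacings quoted in the text
    n = cells_across(L_IND, dx)
    status = "ok" if n >= 4 else "under-resolved"
    print(f"dx = {dx*1e6:5.0f} um -> {n:4.1f} cells ({status})")
```

Under this assumption, the 50-60 µm near-injector spacing meets the 4-5 point target, while the relaxed 300 µm far-field spacing would not, consistent with the statement that only the mixing/detonation zone is resolved at that level.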
Both cases are initialized with the same type of high pressure and temperature detonation kernel, which temporarily causes a large number of detonation waves to propagate around the chamber in both directions. These waves then undergo an unsteady cascade process, characterized by a continuous change in the number of waves and their wave speeds, before reaching a steady mode. Numerical convergence of the simulation is achieved when all the waves in the domain undergo a complete revolution of the chamber without a significant change in velocity, once the target reactant flow rates are reached globally throughout the test article. Although this convergence criterion does not guarantee that there are absolutely no long-scale transients left in the domain, it does ensure the engine reaches an operating mode that is stable over a complete operating period. The end of the cascade process appears in Figure 23 for both the constricted (ε c = 2.40) and unconstricted geometries, showing azimuthal pressure integrated up to 1.5 cm above the injection plane to generate a temporal evolution of the chamber wave dynamics analogous to the integrated pixel intensities in Figure 10 (see Lietz et al. [37] for more information detailing the method used to generate these pressures from the simulation data). Note that in Figure 23a there are six discernible detonation structures at the beginning of the window, starting at 1.0 ms from the simulation initialization. Over the following 0.5 ms, the detonations exhibit a wide range of velocities (denoted by the nonlinear slopes of the pressure fronts), measuring from 1020 to 2000 m/s, which eventually stabilize by 1.63 ms from the simulation start (denoted by the linear slopes of the pressure fronts). After this time, the three remaining waves continue traveling between 1620 m/s and 1650 m/s. Similarly, the ε c = 2.40 simulation shown in Figure 23b stabilizes by 1.55 ms from simulation start. Results from these two simulations produce three corotating detonation waves for the straight annular geometry and, similar to the experiments, counter-propagating behavior for the ε c = 2.40 nozzle consisting of eight waves in both the clockwise and counterclockwise directions. Corresponding temporal histories of the oscillatory pressure within the detonation zone local to the injection plane for these simulations (see Figure 24a) show two distinct operating modes. In the case of the three-wave corotating mode (Figure 24a (left)), the pressure trace has a steep-fronted shape similar to the synthetic data generated for the injection recovery analysis (Figure 20). For the counter-propagating mode (Figure 24a (right)), the resultant oscillatory pressure, while periodic with steep-fronted waves, has an increased rate of pressure spikes with much higher variance compared to the corotating mode. Accompanying injection properties are obtained by spatially integrating the flow fields over the injector orifices. Together, the injection mass flow rates, the resultant local equivalence ratio, and the injection Mach numbers detail the injection recovery processes for the corotating and counter-propagating cases, as shown in Figure 24b-d, respectively. For the straight annulus case (Figure 24b-d (left)), the injection response is very similar to the synthetic data analysis (Figure 21b,d), which shows periodic unchoking in both injectors, without any flow reversal, prior to the injectors returning to their designed choked operation.
This provides supporting evidence that the periodic unchoking process detailed in the injector recovery analysis can indeed cause non-uniformity in the reactant fill zone, depending on the recovery symmetry between the fuel and oxidizer streams, as well as the possibility of higher amplitude modulation due to wave reflections local to the injection plane. For the constricted annular geometry, there is a different recovery process due to the counter-propagating behavior (Figure 24b-d (right)). Interestingly, as there is a large decrease in time between wave arrival events, the injectors are not able to return to choked operating conditions. This effect is further heightened by the reduced injection pressure drop for the constricted geometry, a consequence of the increased chamber pressure caused by the physical throat addition. This is illustrated in the fuel and oxidizer injection Mach numbers for the constricted geometry (Figure 24d (right)), which show lower overall averages compared to the straight annulus case. This lack of a choked regime during counter-propagating operation is likely what allows the counter-rotating detonations to pass through one another. Specifically, the injector response prevents any injector pair from injecting at the intended equivalence ratio (i.e., striating the reactant fill zone), and as a consequence, it becomes possible for a detonation to pass through a region without fully combusting the reactants. It is the existence of unburnt reactants that allows another detonation, traveling in the opposite direction, to continue propagating instead of encountering a region consisting entirely of spent propellant. This suggests that a core component of the mechanism sustaining counter-propagating behavior in constricted RDREs is an interaction between the constriction and the injectors. Conclusions Hot-fire test results for a 76.2 mm diameter modular rotating detonation rocket engine with various convergent nozzle designs are summarized for flow conditions ranging from equivalence ratio φ = 0.5-2.5 and ṁ tot = 0.091-0.680 kg/s. Three full-length annular convergent nozzle geometries with l c = 76.2 mm at contraction ratios ε c = 1.23, 1.62 and 2.40 are investigated in this study. In general, engine performance increases linearly with increasing throat constriction for a given flow condition, with overall increases of 8%, 16% and 27% for the ε c = 1.23, 1.62 and 2.40 nozzles, respectively, compared to the straight ε c = 1.00 geometry. However, the measured thrust compared to the ideal thrust for an equivalent constant-pressure engine ranges from F/F th = 80-95% and does not increase appreciably with an increasing ε c nozzle. Measured detonation wave speeds compared to the ideal Chapman-Jouguet values range from U wv /U CJ = 50-70% for the investigated flow conditions. From φ = 0.5-2.5, U wv follows a similar trend to performance, with the highest wave speeds observed at φ ≈ 1.5. For ṁ tot = 0.091-0.680 kg/s, wave speeds are generally insensitive to increasing flow rate and are mostly constant throughout. However, there is a greater presence of counter-propagating phenomena with increasing ε c at a given flow condition. This is accompanied by an increase in the number of waves m, as well as a decrease in the average wave speed. This may be a reason why there is not a notable performance increase compared to theoretical values for the more constricted convergent nozzle geometries.
In addition to the tests with the full-length annular nozzle, two reduced length geometries with l c = 38.1 mm are also investigated (straight annulus and ε c = 2.40 nozzle). In general, there is no reduction in either operability or performance for the two shortened geometries across the entirety of the flow condition matrix. In fact, the l c = 38.1 mm, ε c = 2.40 configuration actually exhibits a 6% increase in thrust and specific impulse, compared to its l c = 76.2 mm equivalent, near the peak performance range of equivalence ratio at φ = 1.15. This is also evident in the ideal thrust comparison for the shortened geometries, which again shows a maximum across all equivalence ratio conditions for the shortened ε c = 2.40 nozzle. Regarding the modal properties, the wave speeds associated with the active detonation modes for the l c = 38.1 mm straight annulus are typically the same as or higher than those for the full-length straight annulus geometry, where a maximum U wv /U CJ ≈ 70-75% is observed. For the short constricted nozzle design, counter-propagating behavior is significantly less prevalent throughout the various flow conditions than is observed for the full-length nozzle configuration. This is likely due to the location of the physical throat being shifted towards the injection plane for the shortened geometry, which may alter the detonation-injection coupling. In summary, this work serves to elucidate the influence of annular length and exit constriction on RDRE operation and performance. The trends identified should serve as a foundation for future studies to optimally expand the oscillatory exit flows through these devices, and thus optimize their performance. Funding: This work has been supported by the Air Force Office of Scientific Research (AFOSR) under AFRL Lab Task 20RQCOR63 funded by the AFOSR Energy, Combustion, Non-Equilibrium Thermodynamics portfolio with Chiping Li as program manager.
12,552.4
2019-08-16T00:00:00.000
[ "Engineering" ]
Should we allocate more COVID-19 vaccine doses to non-vaccinated individuals? Following the approval by the FDA of two COVID-19 vaccines, which are administered in two doses three to four weeks apart, we simulate the effects of various vaccine distribution policies on the cumulative number of infections and deaths in the United States in the presence of shocks to the supply of vaccines. Our forecasts suggest that allocating more than 50% of available doses to individuals who have not received their first dose can significantly increase the number of lives saved and significantly reduce the number of COVID-19 infections. We find that a 50% allocation saves on average 33% more lives, and prevents on average 32% more infections, relative to a policy that guarantees a second dose within the recommended time frame to all individuals who have already received their first dose. In fact, in the presence of supply shocks, we find that the former policy would save on average 8,793 lives and prevent on average 607,100 infections, while the latter policy would save on average 6,609 lives and prevent on average 460,743 infections. Introduction With more than 44.7 million infections in the U.S. and 219 million worldwide, and a death toll over 721,000 in the United States and 4.55 million worldwide, the COVID-19 pandemic has profoundly altered the research agenda of the scientific community as a whole, launching an unprecedented race against the clock to develop a cure or a vaccine for the disease. To better contain the disease, and to design more efficient policies to combat it, the United States Centers for Disease Control and Prevention (CDC) has collected and combined an ensemble of models to forecast the spread of the epidemic [1]. These models range from traditional SIR and SEIR-type models to agent-based models, mixture models, and machine-learning models. Some models, such as the DELPHI model [2], explicitly account for the effects of government intervention, such as the implementation of social distancing policies. These models have quickly been applied in practice: for example, at the clinical level, the DELPHI model helped to reallocate ventilators and alleviate shortages [3]; similarly, at the policy level, the DELPHI model was used to propose a more efficient allocation of vaccines [4,5]. Epidemiological models have also been used to optimize the design of vaccine clinical trials, and to quantify the potential advantages of using adaptive Randomized Clinical Trials (RCTs) and Human Challenge Trials (HCTs) over traditional RCTs [6,7]. Epidemiological models and simulations have helped researchers and policymakers answer pressing questions, such as how to prioritize the delivery of vaccines across demographics and medical conditions [8], and where vaccination clinics should be located to maximize the effectiveness of the vaccination campaign [4,5]. With a rising number of infections and deaths, and the emergence of COVID-19 variants despite extended periods of lockdown, mass vaccination has become the critical pathway to alleviate the impact of the disease, as is apparent with the success of Israel's mass vaccination campaign [9]. However, producing and distributing the vaccines has become a new challenge for manufacturers.
Despite promising results regarding the ability to store the Pfizer-BioNTech vaccine in standard freezers over periods of two weeks [10], rather than under the initial storage constraint at −80˚C [11], vaccine shortages and appointment cancellations [12] have followed factory shutdowns [13], production mix-ups [14], delays in shipment [15], and power outages [16,17]. Optimizing the allocation of vaccines has become crucial not only due to the limited supply of vaccines, but also because the Pfizer-BioNTech and Moderna vaccines need to be administered twice for each individual, over a recommended time interval of 3 or 4 weeks, respectively [18]. Although supply constraints are important in the United States, they are even more binding in other regions such as Canada [12,15], Europe [13,15,19], Africa [20], Latin America [21], and India [22]. An important debate has also arisen regarding the advantages of delaying the second dose to provide more first doses to susceptible individuals [23][24][25][26][27]. While doses were held back under the Trump administration in order to guarantee a second dose to individuals who had received their first dose, the Biden administration pledged to reverse this policy and release all available doses [28]. Other countries, such as the United Kingdom and Canada, have already adopted the policy of delaying the second dose up to three months [29,30], and Singapore is currently considering delaying the second dose up to 12 weeks [31]. However, as Texas, Washington State, and Michigan experienced in mid-February 2021, releasing too many doses for first-time users can lead to delays for individuals eligible to receive their second dose (a "second-shot crunch") [32]. Researchers, medical doctors, and clinicians have provided arguments both for and against delaying the second dose [33]. On the one hand, while allocating more first doses may initially slow the spread of infections, and ultimately reduce the number of deaths by allowing a bigger proportion of the population to have some immunity, it is possible that protection will degrade over time, and delaying the second dose may leave at-risk individuals inadequately protected. From a disease evolutionary perspective, partial immunization could also contribute to the selection of vaccine-resistant variants of SARS-CoV-2 [34]. This point is now even more relevant with the spread of the Delta variant, currently the predominant variant in the U.S., which is twice as contagious as the original strain of the virus, yet only modestly decreases the effectiveness of the two mRNA vaccines considered [35]. On the other hand, clinical trial results and data from the Israeli mass vaccination campaign on the efficacy of the first dose tend to support the policy of delaying the second dose up to three months, especially when the supply of vaccines is constrained [36][37][38][39]. In this work, we forecast the effect of various vaccine allocation strategies on the cumulative number of infections and deaths in the United States to quantify the impact of prioritizing first doses versus second doses. In particular, we extend the DELPHI model to account for vaccines, and use a simple model of shocks to the number of vaccines supplied to account for distributional constraints. Similar questions have recently been studied by other researchers. For example, [40,41] recommend a second dose deferral strategy in order to vaccinate more people faster, even if the single-dose efficacy decays over time.
Likewise, using the agent-based epidemic model developed in [43], [42] suggests a 9-week delay for the second dose, although the results are mixed for the Pfizer-BioNTech vaccine when the efficacy of the first dose decays over time. While our analysis focuses on the United States, our recommendations can be generalized to other countries, especially those where the supply of vaccines is heavily limited. Furthermore, the framework provided here can be reused in the event of a future pandemic to improve the allocation of vaccines and reduce the number of infections and deaths. The remainder of the paper is structured as follows: we present the epidemiological model used to forecast the COVID-19 outbreak from October 1st, 2020 to August 1st, 2021 in Section 2, as well as the model used to account for supply shocks; our forecasts are presented in Section 3, and the policies under investigation are compared and discussed in Section 4; we conclude in Section 5. Finally, a more detailed description of our analysis is available in S1 Text. Methodology We begin by presenting the epidemiological model used to simulate the COVID-19 pandemic, the assumptions made in our forecasts, as well as the model used to simulate the supply of vaccines under random shocks. Epidemiological model Many epidemiological models have been proposed to forecast the spread of COVID-19 [1]. In particular, [2] proposes a novel SEIR-based model, called the DELPHI model, that explicitly accounts for the effects of government intervention. As shown in Fig 1, the DELPHI model categorizes individuals into eight classes: Susceptible individuals who have not been infected (S); Exposed individuals who have been infected and are currently within the incubation period (E); Infected and contagious individuals (I), who are then categorized into the Detected Hospitalized (DH), the Detected and home-Quarantined (DQ), and the Undetected and self-quarantined (U) classes; Recovered individuals (R); and individuals Deceased from COVID-19 (D). As we consider two hypothetical vaccines in this study (loosely modelled after the Moderna vaccine and the Pfizer-BioNTech vaccine), we augment the DELPHI model by including five vaccination categories for each vaccine brand X used: individuals receiving their first dose who respond to the first dose (V^r_{X,1} for immediate "response"); individuals receiving their first dose who do not respond to the first dose but will respond to the second dose (V^dr_{X,1} for "delayed response"); individuals receiving their first dose who will respond neither to the first dose nor to the second dose (V^nr_{X,1} for "no response"); individuals who receive their second dose and respond to the vaccine (V^r_{X,2}); and individuals who receive their second dose and do not respond to the vaccine (V^nr_{X,2}). We assume that exposed individuals (E) are not yet contagious, and that recovered individuals (R) and vaccinated individuals from the V^r_{X,2} group have permanent immunity to COVID-19. We further assume that the infection rate of individuals depends on a government response function (see Appendix A.1 in S1 Text), which models the effects of government intervention. The dynamics of the augmented DELPHI model are available in Appendix A.1 in S1 Text. Data and assumptions The first step of the analysis consists of fitting the original DELPHI model to historical data using the dataset developed by [2].
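For intuition about the compartment bookkeeping described above, the following is a deliberately simplified SEIR-with-vaccination discretization, not the actual augmented DELPHI equations (which are given only in the paper's appendix): a single vaccine, a single responder compartment, and constant rates, all chosen for illustration.

```python
import numpy as np

def simulate(days=300, N=330e6, beta=0.25, sigma=1/5, gamma=1/10,
             doses_per_day=1.5e6, efficacy=0.9):
    """Toy daily-step SEIR model with first-dose vaccination (NOT DELPHI).

    Vaccinated responders move from S to an immune class V at a rate of
    doses_per_day * efficacy; non-responders remain susceptible.
    """
    S, E, I, R, V = N - 1e5, 0.0, 1e5, 0.0, 0.0
    history = []
    for _ in range(days):
        new_inf = beta * S * I / N                 # new exposures today
        new_vac = min(doses_per_day, S) * efficacy # responders only
        S += -new_inf - new_vac
        E += new_inf - sigma * E
        I += sigma * E - gamma * I
        R += gamma * I
        V += new_vac
        history.append((S, E, I, R, V))
    return np.array(history)

traj = simulate()
print(f"peak infectious: {traj[:, 2].max():,.0f}")
```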
After estimating the parameters of the original DELPHI model for each state of the U.S., we recalibrate these parameters to allow us to simulate a discretized version of the DELPHI model using a time step of 1 day. We then ensure that the discretized model yields the same output as the original continuous-time model (see Appendix A.2 in S1 Text for more details). This step is crucial, as it considerably improves the speed of the simulation. The parameters used in the augmented DELPHI model are presented in Table 1. We assume a uniform daily infection rate among individuals in each vaccination state. Individuals who respond to the first dose (the "immediate response" group V^r_{X,1}) remain completely susceptible to infection in the first 14 days after their first vaccination, but become permanently immune to the disease 14 days following their first dose. Similarly, individuals in the "delayed response" group (i.e., the V^dr_{X,1} group) remain completely susceptible to infection for the first 35 days after receiving their first dose (i.e., 21 days to receive their second dose after their first dose, followed by 14 days to develop permanent immunity), but develop permanent immunity immediately afterwards. Individuals who respond to neither the first nor the second dose (i.e., the V^nr_{X,1} group) remain permanently susceptible to infection. Although the United States Food and Drug Administration (FDA) recommends a time interval between vaccine doses of 21 days for the Pfizer-BioNTech vaccine and 28 days for the Moderna vaccine, this difference has no impact on the analysis, as shown in Appendix B.1 in S1 Text. Finally, we assume that the immune response to a vaccine does not decay over time. Modeling the supply of vaccines To explore the effect of shocks to the supply of vaccines on the vaccination policy adopted, we decompose the vaccine rollout into two phases: during the ramp-up phase, the number of new vaccine doses supplied increases at a linear rate until it reaches a terminal value of 1.5 million new doses per day (President Biden's target [47]); this terminal value is reached on the 90th day, when we enter the steady-state phase, in which the supply rate of new doses becomes constant. The assumed terminal value is on the conservative side, as the 7-day moving average of the number of doses administered daily (as reported to the CDC) increased from 1.5 million doses per day in February 2021 to 3 million in April 2021 [46,48,49] (a terminal rate of 3 million doses per day is explored in Appendix B.3 in S1 Text). The black curve in Fig 2 represents the daily number of new vaccine doses supplied by one vaccine company. As shown in the plot, the number of doses supplied by this company increases linearly until it reaches a value of 0.75 million doses (one half of 1.5 million, as we consider two vaccines in this study). To model supply shocks, we assume that shock occurrences follow a Poisson process with a rate of 1 shock per 30 days. Using a Poisson process is appropriate here as we assume that shocks that occur over disjoint time intervals are independent, and that the process is memoryless. Once a shock occurs, the supply of this particular vaccine drops to zero for a length of time drawn from a uniform distribution between 0 and 14 days. The supply then picks up at the previous positive level and continues to increase linearly. Furthermore, we assume that shocks lasting 7 days or more have a 50% probability of boosting the terminal supply rate by 5%. 
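A minimal sketch of this two-phase supply process with Poisson shocks might look as follows. All names are ours, and the implementation details (for instance, pausing the ramp during an outage so that supply resumes at its previous level) are one reasonable reading of the description above rather than the authors' exact code:

```python
import numpy as np

def simulate_supply(days=300, terminal=0.75e6, ramp_days=90,
                    shock_rate=1/30, max_outage=14,
                    boost_prob=0.5, boost_factor=1.05, seed=0):
    """Daily supply of one vaccine brand: linear ramp-up to a terminal rate,
    interrupted by Poisson-arriving shocks that zero out supply for a
    Uniform(0, 14)-day outage."""
    rng = np.random.default_rng(seed)
    supply = np.zeros(days)
    outage_left = 0.0
    ramp_progress = 0
    for t in range(days):
        if outage_left > 0:
            supply[t] = 0.0          # shock in progress: no doses delivered
            outage_left -= 1
            continue
        ramp_progress += 1           # ramp paused during outages, so supply
        supply[t] = terminal * min(1.0, ramp_progress / ramp_days)  # resumes at its previous level
        # New shocks arrive according to a Poisson process (about 1 per 30 days).
        if rng.poisson(shock_rate) > 0:
            outage_left = rng.uniform(0, max_outage)
            # Long shutdowns may improve production capacity afterwards.
            if outage_left >= 7 and rng.random() < boost_prob:
                terminal *= boost_factor
    return supply

daily_doses = simulate_supply()
print(daily_doses[:10], daily_doses.sum())
```

Each state would then receive a population-proportional share of this supply, as described below, and repeating such draws is what the Monte Carlo simulations do.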
The blue curve in Fig 2 provides an example of a supply curve with such shocks. Supply shocks can simply represent delays in production or delivery of vaccines (which would tend to last a few days), but they can also model a factory shutdown aimed at improving the production of vaccines (which would tend to last longer and may increase the terminal supply rate). Finally, each state receives a fraction of the number of available doses in proportion to its population size. We then run Monte Carlo simulations to investigate the robustness of the vaccination policy to supply shocks. The number of cumulative deaths and cumulative infections are aggregated at the country level, and are used to compare vaccination policies. Results In this section, we explore the performance of various vaccination policies, and evaluate them based on the number of cumulative deaths and cumulative infections aggregated at the country level. This helps us understand whether we should store vaccine doses in order to guarantee a second dose to individuals who received a first dose, or if it is more efficient to allocate as many first doses as possible. Storing doses ensures that individuals who received a first dose will be able to obtain their second dose according to the recommended vaccination schedule (here, 21 days) even if supply shocks occur. However, this strategy reduces the number of individuals that can be vaccinated each day, and may lead to a higher cumulative number of deaths and infections. We further assume that 1% of unused doses are lost each day in order to model spoilage or wastage due to unforeseen circumstances. Vaccination policies The policies we investigate are described below. Baseline policy. As a baseline policy, we consider the case of not vaccinating the population. This case is expected to present the highest number of cumulative infections and deaths. Policy of interest. The vaccination policy of interest consists in allocating a fixed fraction of available doses to first-time users, and allocating the remaining doses to individuals who have already received their first dose and are eligible to receive their second dose. Furthermore, unused doses are reallocated to individuals eligible to get a vaccine. For example, under a policy of interest allocating 75% of doses to first-time users, 75% of the doses available today would be administered to individuals who have not received their first dose and 25% of doses will be administered to individuals who have received their first dose at least 21 days ago; if doses are unused because we have more second doses available today than eligible individuals for a second dose, we reallocate these unused doses to first-time users; if doses are unused because we have more first doses available today than individuals eligible for their first dose, these unused doses are reallocated to individuals eligible for a second dose today. In comparison, we also consider a scenario under which we do not allow for doses reallocation. The policy of interest is then compared to the following alternatives. Alternative policy I: Strong priority scenario. Doses are allocated by prioritizing all individuals who have received a first dose and will eventually need to receive the second dose in the future. This means that all individuals who receive their first dose are guaranteed to receive their second dose within the recommended time frame. 
Under the strong priority scenario, a second dose will immediately be placed in storage each time an individual receives their first dose, and this dose will be administered to this individual 21 days later. Alternative policy II: Weak priority scenario. In contrast to the strong priority scenario, this policy consists of allocating doses preferentially to individuals scheduled to receive their second dose on that specific day. In the weak priority scenario, doses available today are first administered to individuals eligible to receive their second dose; after clearing the second-dose queue, the remaining doses are allocated to first-time users. In all the vaccination policies described above, second doses are always allocated in a First-In-First-Out (FIFO) fashion: this gives individuals eligible for a second dose who have not yet been able to receive it higher priority than individuals who only became eligible for a second dose today. A final point: it can be useful to view the weak priority scenario as a special case of the policy of interest in which we reallocate unused doses. In fact, under a policy of interest that allocates 0% of doses towards first-time users, individuals eligible to receive their second dose today will be given priority; unused doses would then be reallocated towards first-time users. Policy evaluation To compare the four policies described in Section 3, we simulate the evolution of the epidemic in the absence and in the presence of random supply shocks. In particular, we run 1,000 simulations to obtain a distribution for the cumulative number of infections and the cumulative number of deaths between October 1st, 2020 and August 1st, 2021 under random supply shocks. After comparing the outputs obtained with various numbers of Monte Carlo simulations, we selected a number large enough to reflect the uncertainty in the output while being parsimonious enough to retain a practical simulation runtime. The DELPHI parameters used in all our forecasts were estimated on February 7th, 2021. As we increase the fraction of doses allocated to first-time users, our forecasts predict a decrease in the cumulative number of infections and deaths; the detailed forecasts are reported in Tables 2 and 3. In contrast, the alternative policies considered do not allocate a fixed fraction of available doses to first-time users; their outcomes are instead compared with those of the policy of interest in Fig 3. It is important to highlight here that under the policy of interest, the cumulative number of infections and deaths remains constant as we vary the fraction of doses allocated to first-time users from 0% to about 50%. This effect is due to the reallocation of unused doses modelled by our simulation. More concretely, if we allocate no doses to first-time users (i.e., we only give doses to individuals who have already received their first dose) and do not reallocate unused doses, then nobody would ever receive their first dose and hence nobody would ever be eligible to receive a second dose. Reallocating unused doses overcomes this issue. Furthermore, reallocating unused doses under a 0% first-dose allocation policy exactly matches the outcome of the weak priority scenario (the orange and yellow lines in Fig 3), in which we always give priority to individuals who already received their first dose and are now eligible to receive their second dose. 
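To make the allocation mechanics concrete, a single day of the policy of interest, with a FIFO second-dose queue and reallocation of unused doses, could be sketched as follows. This is an illustrative sketch under our own simplifications (continuous dose counts, a single vaccine brand), not the simulation code used in the paper:

```python
from collections import deque

def allocate_day(doses_today, first_dose_frac, waiting_first, second_queue,
                 reallocate=True):
    """One day of the 'policy of interest'.

    waiting_first : number of people still waiting for a first dose
    second_queue  : FIFO deque of counts of people already eligible for their
                    second dose, earliest-eligible first
    Returns (first_doses_given, second_doses_given).
    """
    first_budget = doses_today * first_dose_frac
    second_budget = doses_today - first_budget

    first_given = min(first_budget, waiting_first)
    # Serve second doses First-In-First-Out.
    second_given = 0.0
    while second_queue and second_budget > 0:
        take = min(second_queue[0], second_budget)
        second_queue[0] -= take
        second_budget -= take
        second_given += take
        if second_queue[0] == 0:
            second_queue.popleft()

    if reallocate:
        leftover = (first_budget - first_given) + second_budget
        # Unused second doses go to first-time users, and vice versa.
        extra_first = min(leftover, waiting_first - first_given)
        first_given += extra_first
        leftover -= extra_first
        while second_queue and leftover > 0:
            take = min(second_queue[0], leftover)
            second_queue[0] -= take
            leftover -= take
            second_given += take
            if second_queue[0] == 0:
                second_queue.popleft()
    return first_given, second_given

# Example: 1M doses, 75% earmarked for first-time users, 200k people already due for dose 2.
queue = deque([120_000.0, 80_000.0])
print(allocate_day(1_000_000, 0.75, waiting_first=5_000_000, second_queue=queue))
```

Setting first_dose_frac to 0 with reallocation enabled reproduces the weak priority behaviour discussed above, while the strong priority scenario would instead reserve a dose in storage at the time of each first vaccination.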
When unused doses are not reallocated, we obtain the forecasts displayed in Fig 4. Allocating no doses to first-time users is identical to the no-vaccination policy, while increasing the dose allocation to first-time users beyond 30% produces results similar to those in Fig 3. Policy comparison As expected, vaccinating the population significantly reduces the number of infections and deaths under all the policies considered. However, the forecasts presented in Section 3 allow us to immediately rule out the first alternative policy (i.e., the strong priority scenario) as less efficient than the other vaccination policies presented. In fact, both the policy of interest and the weak priority scenario are significantly better than the strong priority scenario in both the presence and absence of random supply shocks. This is also expected, as more individuals are able to receive their first dose under the policy of interest and the weak priority scenario and start to develop an immune response early. The magnitude of the improvement is even more striking: under supply shocks, the policy of interest allocating 50% of available doses to first-time users is expected to save on average 33% more lives and to prevent on average 32% more infections than the strong priority scenario. Nevertheless, the strong priority scenario is still important to analyze, as individuals getting a vaccine in the U.S. usually obtain an appointment for their second dose as soon as they receive their first dose, and patients requiring a second dose are given priority [50]. In the absence of supply shocks, the weak priority scenario is dominated by the policy of interest when more than 60% of available doses are allocated to first-time users. In particular, if we compare the weak priority scenario to a policy of interest allocating 85% of available doses to first-time users, our forecasts predict an increase of 11.7% in the number of lives saved and of 11.1% in the number of infections averted. This result also holds in the presence of supply shocks: we forecast an increase of 12.1% in the number of lives saved and an increase of 11.5% in the number of infections averted. These differences are statistically significant, as a Welch t-test yields t-statistics of 16.4 and 16, respectively, for the number of lives saved and infections averted. This is also expected, as the weak priority scenario can be viewed as a special case of the policy of interest, where less than 50% of available doses are allocated to first-time users, with unused doses being reallocated. As a consequence, the number of lives saved and infections averted will always be higher under the policy of interest. In summary, our forecasts suggest that allocating more than 50% of available doses towards first-time users, even at the cost of delaying the distribution of second doses, would be a better policy than guaranteeing a second dose within the recommended time frame to every individual receiving their first dose. The simulations also show that these results are robust to supply shocks. Although our analysis focuses on the United States, the forecasts and their interpretation can be generalized to any country. Prioritizing first doses would be even more relevant to countries where the vaccine supply is severely limited (as shown in Appendix B.3 in S1 Text). Limitations and sensitivity analysis Our forecasts are all based on an augmented version of the DELPHI epidemic model [2] that accounts for vaccinations. 
We should note that the model fails to account for demographics to assign different contact rates, hospitalization rates, and mortality rates across different age groups. Furthermore, some simplifying assumptions are used: for example, recovered individuals are assumed to have permanent immunity. However, among the top 10 models used by the CDC, DELPHI often displays the best performance with a low mean absolute percentage error (see https://www.covidanalytics.io/projections). A critical limitation of our model is that we assume no decay in the efficacy of the vaccine over time if an individual has received their first dose, but are still waiting for their second dose. At this point, this decay in efficacy remains an open question [51]. Although our simulations begin on October 1st, 2020 and end on August 1st, 2021, vaccinations start on December 15th, 2020. If we consider a policy of interest that allocates 100% of available doses to first-time users, it would mean that individuals receiving their first dose at the end of December 2020 would not receive their second dose by August 1st, 2021. If the efficacy of the first dose decays over time, our forecast would be overly optimistic. However, knowing this decay rate would help determine the optimal fraction of doses that need to be allocated to first-time users under the policy of interest to balance the advantages of delaying the second dose against the efficiency loss due to the delay. Finally, we find that our results remain significant as we perturb some key assumed parameters. We show in Appendix B in S1 Text the forecasts obtained as we increase the time interval between the first and second dose (from 4 weeks to 9 weeks), as we increase or decrease the efficacy of each vaccine dose, as we increase the time needed to develop permanent immunity, and as we increase the supply of vaccines. In particular, we observe that the curves obtained in Appendix B in S1 Text tend to shift upwards as we increase the time interval between the doses, increase the time needed to develop permanent immunity, decrease the supply of vaccines, or decrease the efficacy of each vaccine dose, implying an overall reduction in the number of lives saved and infections averted. In addition, the curves become flatter, implying a lower sensitivity to the chosen fraction of available doses allocated to first-time users, especially as we increase the time needed to develop permanent immunity, decrease the supply of vaccines, or decrease the efficacy of the first dose. Conclusion We have developed a systematic framework to compare the efficiency of various vaccination policies. In particular, we extend the DELPHI model [2] to account for vaccination states, and explore the impact of prioritizing vaccines to first-time users instead of guaranteeing a second dose within the recommended time frame to individuals who have already received their first dose. Our forecasts suggest that allocating more than 50% of available doses to first-time users significantly increases the number of lives saved and significantly reduces the number of COVID-19 infections. It is important to highlight here that our forecasts are not recommending individuals to skip the second dose, a trend that has already raised some concerns as the efficacy of a single dose of mRNA vaccine over a long period of time remains unclear [52][53][54]. Instead, we suggest delaying the second dose to allow more individuals to receive the first dose in order to reduce the spread of the disease faster. 
Supporting information S1 Text. Appendix A: Dynamics of the augmented DELPHI model. We describe here the additions made to the DELPHI model to include vaccination states as well as our discretization technique used to enhance the performance of the simulation. Appendix B: Sensitivity Analysis. We explore the sensitivity of our results to key parameters of the model and provide additional simulation results. (PDF)
6,311.2
2022-07-01T00:00:00.000
[ "Economics", "Medicine" ]
The pro-angiogenesis effect of miR33a-5p/Ets-1/DKK1 signaling in ox-LDL induced HUVECs Objective: Angiogenesis is involved in multiple biological processes, including atherosclerosis (AS) and cancer. Dickkopf-1 (DKK1) plays many roles in both tumors and AS and has emerged as a potential biomarker of cancer progression and prognosis. Targeting DKK1 is an attractive option for oncological treatment. Many anticancer therapies are associated with specific cardiovascular toxicity. However, the effects of DKK1-neutralizing therapy on AS are unclear. We focused on how DKK1 affects angiogenesis in AS and in ox-LDL-induced human umbilical vein endothelial cells (HUVECs). Methods: ApoE-/- mice were fed a high-fat diet and then injected with DKK1i or DKK1 lentivirus to study the effects of DKK1. In vitro, promoter assays, protein analysis, database mining, dual-luciferase reporter assay (DLR), electrophoretic mobility shift assay (EMSA), chromatin immunoprecipitation (ChIP), and coimmunoprecipitation (co-IP) were used to study the mechanism of DKK1 biogenesis. Cell migration and angiogenesis assays were performed to investigate the function and regulatory mechanisms of DKK1. Results: DKK1 participated in angiogenesis both in the plaques of ApoE-/- mice (shown by knockdown or overexpression of DKK1) and in ox-LDL-induced HUVECs. DKK1 induced angiogenesis (increasing migration and capillary formation, inducing expression of VEGFR-2/VEGF-A/MMP) via the CKAP4/PI3K pathway, independent of Wnt/β-catenin. ox-LDL increased the expression and nuclear transfer of Ets-1 and c-jun, and induced the transcriptional activity of DKK1 in HUVECs. Ets-1, along with c-jun and CBP, could bind to the promoter of DKK1 and enhance DKK1 transcription. MiR33a-5p was downregulated in ox-LDL-induced HUVECs and in the aortic artery of high-fat-diet ApoE-/- mice. Ets-1 was a direct target of miR33a-5p. The miR33a-5p/Ets-1/DKK1 axis contributed to angiogenesis. Conclusions: MiR33a-5p/Ets-1/DKK1 signaling participated in ox-LDL-induced angiogenesis of HUVECs via the CKAP4/PI3K pathway. These new findings provide a rationale and notable method for tumor therapy and cardiovascular protection. Introduction Dickkopf-1 (DKK1), a secreted inhibitor of the canonical Wnt/β-catenin pathway, plays complex cellular and biological roles in different diseases. DKK1 is overexpressed in bone pathologies and many cancers, has now emerged as a potential biomarker of cancer progression and prognosis for several types of malignancies [1], and has been shown to have immunosuppressive effects [2]. DKK1 has been widely investigated in oncology and is now considered a promising target for anticancer therapy [1]. For example, DKN-01 is an IgG4 clinical-stage antibody that potently and specifically neutralizes human and murine DKK1 and was used in a recently completed promising study in combination with pembrolizumab in patients with gastric/gastroesophageal junction cancer [3]. The treatment outcomes for a wide range of malignancies have improved remarkably due to the development of many novel anticancer therapies, including vascular endothelial growth factor inhibitors (VEGFIs). However, as a side effect, oncological treatment may increase the morbidity and mortality of cardiovascular diseases (CVDs), including via acceleration of atherosclerosis (AS) [4]. We confirmed that DKK1 induced endothelial cell (EC) dysfunction and AS [5]. 
Other data also suggest that DKK1 is an important driver of the initiation and progression of AS and a promising target for atheroprotection [6]. DKN-01 was evaluated in a phase I multicenter study for advanced tumor therapy, and better outcomes were associated with biomarkers of angiogenesis inhibition, which indicated the potential antiangiogenic and immunomodulatory activity of DKN-01 [7]. Because there exist cross-susceptibility factors and common targets between tumors and CVDs, elucidating the regulatory effect and molecular mechanism of DKK1 in EC angiogenesis will provide a theoretical basis and clinical reference for the identification of new effective intervention targets for antitumor drugs. ECs, located on the surface of the vascular wall, are always vulnerable to various risk factors, such as hypertension and hyperlipidemia. In our previous study, we found that DKK1 induces endothelial dysfunction in plaques and human umbilical vein endothelial cells (HUVECs) [8]. Angiogenesis, which provides essential oxygen and nutrients for proliferation and metastasis, is an indispensable process for tumor growth and metastatic dissemination. Tumor angiogenesis has become a new and promising target for antitumor therapy. While angiogenesis also plays an important role in AS, we focused on the effect of DKK1 on angiogenesis. DKK1 has two cysteine rich domains (CRDs): CRD-N and CRD-C. DKK1 binds to LRP6 and antagonizes the downstream canonical Wnt pathway. In addition, the CRD-N of DKK1 binds to cytoskeleton-associated protein 4 (CKAP4), and then, the intracellular segment recruits PI3K and activates AKT. DKK1 induces angiogenesis through Wnt/βcatenin-dependent or Wnt/β-catenin-independent mechanisms in tumor cells. However, the downstream mechanisms by which DKK1 induces angiogenesis in ox-LDL-induced HUVECs are unknown. MicroRNAs are a class of noncoding singlestranded small RNA molecules that can specifically pair with the 3' untranslated region (3'-UTR) of target gene mRNAs to inhibit the expression of target genes through translational repression or mRNA degradation [17]. One miRNA can target one or more genes, and the regulatory mechanism of a miRNA may be different in different cells. MiRNAs are regulators of vascular endothelial functions and AS. With the aid of well-known programs (such as miRBase, STARBASE and TargetScan), we found that miR33a-5p was markedly downregulated under ox-LDL stimulation in HUVECs. Both DKK1 and Ets-1 are target genes of miR33a-5p. However, whether miR33a-5p can bind to the 3'-UTR of DKK1 and Ets-1 mRNA to regulate their translation is unknown. Studies have found that miR33a-5p may be related to macrophage lipid metabolism [18] and inhibition of tumor cell proliferation [19]. A previous study also found that DKK1 is regulated by miR33a in diabetic cardiomyopathy [20]. However, the mechanisms of action of miR33a-5p in ECs and the function of miR33a-5p in regulating Ets-1 and DKK1 are not yet clear. Based on these findings, we hypothesized that miR33a-5p/Ets-1 participates in the regulation of ox-LDL-induced DKK1 expression in HUVECs. To test this idea, we investigated the underlying upstream mechanisms of DKK1 expression in HUVECs angiogenesis. Identifying these pathways will improve our understanding of the regulation of tumor angiogenesis and provide new methods to address cardiovascular toxicity in antitumor therapy. 
An in-depth study of the effect of DKK1 on angiogenesis will provide a solid theoretical basis for improving the development of drugs that treat tumors and reduce, or even protect against, AS. It is hoped that such new drugs will act synergistically in the clinical treatment of tumors and AS. Materials and Methods Please see the Major Resources Table. Atherosclerosis animal model protocol and lentiviral gene transfer A total of 120 ApoE-/- mice (eight-week-old males) were purchased from Beijing HFK Bioscience Co., Ltd. All mice were fed atherogenic chow (i.e., a high-fat diet with 0.25% cholesterol and 15% cocoa butter) at 14 weeks. The atherosclerotic model was created as previously described. We applied constrictive silica collars to the right carotid artery (RCA) to accelerate atherosclerotic lesion formation in the third week. Pentobarbital sodium was used for anesthesia via intraperitoneal injection (40 mg/kg) when placing the constrictive collars. Eight weeks after the surgery, the mice were randomly divided into four groups (n=15 each): a normal saline group (NS), an empty lentivirus group (GFP), a DKK1i lentivirus group (shDKK1), and a DKK1 lentivirus group (DKK1). A 200 µl suspension (4×10⁸ TU of DKK1i or DKK1 lentivirus per ml) was injected into each mouse through the tail vein. The mice were sacrificed 4 weeks posttransfection using pentobarbital sodium (50 mg/kg, i.p.) before exsanguination by perfusion via the abdominal aorta with PBS. Histopathology and immunohistochemistry The RCAs were dissected, removed, fixed in 4% formaldehyde overnight at 4 °C, and embedded in OCT compound, and 5-μm-thick sections were prepared. After blocking in 5% bovine serum albumin (BSA) in PBS, the cryosections were incubated with primary antibodies overnight at 4 °C and then with an HRP Detection System (ZSGB-BIO). Detection was subsequently performed using 3,3′-diaminobenzidine (DAB) (ZSGB-BIO). Plaques stained with picrosirius red were viewed under polarized light. Staining in the plaque was quantified using Image-Pro Plus 6.0 software (Media Cybernetics, USA) and a color CCD video microscope (Olympus, Japan). Cell culture HUVECs were obtained from ScienCell Research Laboratories (Carlsbad, CA, USA) and cultivated in endothelial cell medium (ECM) (ScienCell, Carlsbad, CA). HEK293T cells were obtained from the American Type Culture Collection (ATCC) and cultured in high-glucose Dulbecco's modified Eagle's medium (DMEM). The medium contained 10% fetal bovine serum (FBS) and 1% penicillin/streptomycin. Both cell lines were incubated in a humidified 5% CO2 incubator at 37 °C. siRNA and RNA interference Upon reaching 40%-60% confluence, cells were transfected with specific siRNA or plasmids (GenePharma, Shanghai) using Lipofectamine 3000 (Thermo Fisher Scientific, USA) in Opti-MEM (Gibco, Thermo Fisher Scientific, USA). At 6 h after transfection, the medium was replaced. Cells were collected for detection of the luciferase reporter or of protein expression 24-48 h after transfection. Immunofluorescence staining and microscopy The cells were washed with PBS, fixed with 4% paraformaldehyde, permeabilized with 0.5% Triton X-100 in PBS, and incubated with primary antibodies at 4 °C overnight. The sections were washed with PBS and incubated with FITC-conjugated secondary antibodies. Nuclei were stained with 4′,6-diamidino-2-phenylindole (DAPI; 1:2000, Roche, Germany) for 5 min. 
The samples were rinsed three times in PBS and examined under an epifluorescence microscope, and the data were analyzed using Image-Pro Plus 6.0 software (Media Cybernetics, USA). Western blot analysis HUVECs were lysed using RIPA buffer containing 1 mM PMSF and collected by centrifugation at 14,000 × rpm for 10 min. Proteins were separated on 10% SDS-PAGE gels, transferred to PVDF membranes with a 0.45 µm pore size (Millipore, USA), and incubated with primary antibodies overnight at 4 °C. The membranes were incubated with secondary antibodies the next day for 80 min. Bands were visualized using Immobilon ECL substrate (Millipore, USA), and imaged with an LAS-4000 luminescent image analyzer (Fujifilm, USA). Protein expression was quantified using Adobe Photoshop CS6 (Adobe Systems, USA) and normalized to β-actin expression in each sample; the expression level is shown as a percentage of the control. RNA extraction and quantitative real-time PCR Total RNA was extracted from HUVECs using TRIzol reagent (Ambion, Life Technologies, USA), and reverse-transcribed into cDNA using a PrimeScript™ RT Reagent Kit (TakaRa Biotechnology, Dalian, China). cDNA (1 ng) was subjected to q-PCR using SYBR Green (TakaRa Biotechnology, Dalian, China) for the relative quantification of mRNA expression. Quantification was accomplished using the 2-ΔΔCt method. β-Actin was used to normalize mRNA levels. U6 was used to normalize microRNA levels. Dual-luciferase reporter assay Cells were seeded in 24-well plates and cotransfected with 100 ng of reporter plasmid and 20 ng of pRL-TK by using Lipofectamine 3000 (Thermo Fisher Scientific, USA) in Opti-MEM (Gibco, Thermo Fisher Scientific, USA). After 48 h, cells (HUVECs or 293T cells) were harvested for the dual-luciferase assay. A dual-luciferase reporter (DLR) assay system (Genecopeia, USA) was used to measure the luciferase activity. The firefly luciferase values were normalized to Renilla luciferase activity before statistical analyses. Electrophoretic mobility shift assay HUVECs were treated with ox-LDL for 6 h. Nuclear extracts were obtained using a NE-PER nuclear protein extraction kit (Thermo Scientific, Rockford IL, USA) according to the manufacturer's instructions. Double-stranded oligonucleotides were obtained by annealing equal amounts (0.1 mg) of the complementary single-stranded oligonucleotides by heating to 95°C for 5 min and then gradually cooling to room temperature. Then, 0.01 μmol of digoxigenin-labeled oligonucleotide probes was incubated with nuclear extracts in DNA binding buffer [10 mM Tris-HCl (pH 7.5), 1 mM MgCl2, 50 mM NaCl, 0.5 mM EDTA, 4% glycerol, and 0.5 mM 2,3-dihydroxy-l,4-dithiobutane (DTT)] and 1 µg of poly(dI-dC). A competition assay was performed using a 200-fold excess of cold probes or cold mutated probes (2 µmol), which were preincubated with the reaction mixture before the addition of biotin-labeled probes. To ascertain the specificity of the nuclear proteins bound to Ets-1 sites, a supershift assay was performed with 2 mg of Ets-1 antibody. After incubation for 30 min, DNA-protein complexes were separated by 6.0% nondenaturing PAGE (Invitrogen) and transferred to a nylon membrane. DNA was crosslinked by UV irradiation for 10 min. The nitrocellulose membrane was evaluated by the addition of a streptavidin-horseradish peroxidase conjugate and a chemiluminescent substrate. Then, the nitrocellulose membrane was imaged with an LAS-4000 luminescent image analyzer (Fujifilm, USA). 
A prominent single supershifted band was observed when nuclear extracts were incubated with an anti-Ets-1 antibody. Chromatin immunoprecipitation Chromatin immunoprecipitation (ChIP) analysis was performed with a ChIP kit (CST, Boston, USA), according to the manufacturer's protocols. Cells (4×10⁷) were crosslinked with 4% formaldehyde, lysed, and enzymatically digested into 200-bp DNA fragments. The sheared chromatin was incubated with different antibodies and magnetic beads at 4 °C overnight. Purified immunoprecipitated chromatin fragments from the IP samples were tested by PCR. The primers for the Ets-1 binding site in the human DKK1 promoter (-2080 to -1894) were as follows: 5′-ACACAGCTTGCAGATTTCCTAGT-3′ and 5′-TATGGTCTGTGTTCTAGTTCCTTCA-3′. qPCR was used for quantitative analysis of the ChIP enrichment efficiency and for expression analysis according to the 2^(-ΔΔCt) method. Coimmunoprecipitation assay Cells (4×10⁷) were lysed with 1 ml of RIPA buffer for 30 min and centrifuged at 12,000 × g and 4 °C for 30 min. Ten microliters of the supernatant was mixed with 2× SDS loading buffer, denatured at 99 °C for 10 min, and set aside. Protein A/G magnetic beads (Bimake, Shanghai, China) pretreated with 300 µl of RIPA buffer were added to 6 µg of antibody and incubated at 4 °C for 15 minutes. The mixture was placed on a magnet for 1 minute, the supernatant was removed, and the beads were washed 3 times. The beads were then resuspended in the sample (300 µl) and incubated at 4 °C overnight with gentle rotation. The mixture was again placed on a magnet for 1 minute, the supernatant was removed, and the beads were washed 3 times. The beads were mixed with 1× SDS loading buffer, denatured at 99 °C for 10 min, and centrifuged at 12,000 × g for 10 min; the supernatant was saved and evaluated by Western blot. EdU cell proliferation assay 5-Ethynyl-2'-deoxyuridine (EdU) cell proliferation assays were carried out according to the manufacturer's instructions (RiboBio, Guangzhou, China) using the Cell-Light™ EdU imaging detection kit. Transwell assay Transwell inserts (arrays of 24 individual Boyden chambers with 8-µm-pore-size Transwell membranes; Corning, NY, USA) were used. Cells were digested with trypsin and suspended in serum-free culture medium at 5×10⁵ cells/well. Samples (200 µl) were placed in the upper chamber, and the lower chamber was filled with serum-free culture medium (500 µl). The cells were transfected and treated with ox-LDL at the designated concentrations and for the indicated times. Noninvading cells remaining on the upper surface of the membrane were removed with cotton swabs, whereas the cells that passed through the membrane were fixed with 4% formaldehyde, stained with 0.2% crystal violet, and then counted under an optical microscope after 24 h. Scratch wound assay A thin mark was drawn vertically with a 20-µl pipette tip in each well of a six-well plate. The cells were then washed three times with PBS to remove floating and detached cells. Fresh serum-free medium was added. The migratory distance was measured at 0 h, 6 h, 12 h, and 24 h after wounding using IPP software. Cell migration is expressed as the percentage of the open wound area at 24 h relative to that at 0 h. In vitro angiogenesis assay Cells were transfected or treated with ox-LDL. A 12-well plate was precoated with Matrigel (BD Bioscience, Billerica, MA, USA). Cells were digested with trypsin, suspended in serum-free culture medium at 5×10⁴ cells/well and seeded in the 12-well plates. The cells were stained with calcein-AM, and the angiogenic properties were assessed after 12 h. 
The tube length was measured using IPP software. Statistical analysis The data were analyzed using SPSS v23.0 (SPSS Inc., Chicago, IL) and are presented as the mean ± SEM. of at least three independent experiments. Comparisons were analyzed using Student's t test or one-way ANOVA followed by the Bonferroni post hoc test. p<0.05 was considered statistically significant. DKK1 aggravated plaque-associated angiogenesis in vivo The results of immunohistochemical analysis of DKK1 demonstrated that DKK1 protein expression was significantly lower in the shDKK1 group and higher in the DKK1 group than in the NS and GFP groups ( Figure 1A), which established that the overexpression and silencing vectors were effective. In the immunohistochemical analysis, the intraplaque expression of VEGF-A, VEGFR-2, MMP-2 and MMP-9 was intense in the control group but was significantly downregulated with DKK1 knockdown, and upregulated with DKK1 overexpression ( Figure 1B). Immunohistochemical analysis of the intraplaque expression of CD31 further confirmed that DKK1 increased plaque-associated angiogenesis ( Figure 1C). DKK1 participated in angiogenesis in ox-LDLinduced HUVECs via the CKAP4/PI3K pathway HUVECs were treated with ox-LDL for different durations (0 h, 1 h, 3 h, 6 h). Western blotting and PCR showed that HUVECs treated with ox-LDL had increased DKK1 expression (Figure 2A and 2B). Compared with the NC+ox-LDL group, the si-DKK1+ox-LDL group showed less migration and tube formation (Figure 2C-2E). siRNA transfection downregulated the expression of DKK1 and inhibited the ox-LDL-induced upregulation of VEGF-A, VEGFR-2, MMP-2 and MMP-9 ( Figure 2F). There was no difference between the NC+ox-LDL group and the si-DKK1+ox-LDL group in the EdU examination ( Figure S1A). The results indicated that DKK1 is involved in migration and angiogenesis but not proliferation in ox-LDL-treated HUVECs. IM-12 is an agonist of the canonical Wnt pathway. There was no difference in angiogenesis markers (VEGF-A, VEGFR-2, MMP-2 and MMP-9) between the DKK1 group and the DKK1+IM-12 group ( Figure S1B). In the DKK1 group, the expression of CKAP4 was upregulated ( Figure 3A). siRNA transfection downregulated the expression of CKAP4 and inhibited the ox-LDL-induced upregulation of angiogenesis markers (VEGF-A, VEGFR-2, MMP-2 and MMP-9), while 740 Y-P, a PI3K agonist, restored the upregulation ( Figure 3A). Compared with the DKK1 group, the DKK1+si-CKAP4 group showed less migration and tube formation, while 740 Y-P restored the migration and tube formation ( Figure 3B-3D). The results indicate that DKK1/CKAP4/PI3K is involved in angiogenesis in ox-LDL-treated HUVECs. Ets-1 participated in the ox-LDL-induced upregulation of DKK1 in HUVECs at the transcriptional level HUVECs were treated with ox-LDL for different durations (0 h, 1 h, 3 h, 6 h). Western blot and PCR showed that HUVECs treated with ox-LDL had higher Ets-1 expression ( Figure S2A and S2B) than the 0 h group. Immunofluorescence showed that the nuclear translocation of Ets-1 increased after ox-LDL treatment ( Figure S2C). siRNA transfection downregulated the expression of Ets-1 and inhibited the ox-LDL-induced upregulation of DKK1 ( Figure 4A and 4B). The results indicate that Ets-1 is involved in the regulation of DKK1 expression in ox-LDL-treated HUVECs. 
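As a worked illustration of the relative quantification and group comparisons described in the Methods above (the 2^(-ΔΔCt) method and Student's t test, with one-way ANOVA plus Bonferroni correction for more than two groups), a small sketch with invented Ct values might look like this; it is not the authors' analysis pipeline, which used SPSS:

```python
# Illustrative only: 2^(-ΔΔCt) fold changes and a two-group comparison.
# All Ct values below are made up for the example.
import numpy as np
from scipy import stats

def ddct_fold_change(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Fold change of a target gene relative to a reference gene (e.g., β-actin),
    normalized to the mean ΔCt of the control group."""
    dct = np.asarray(ct_target) - np.asarray(ct_reference)
    dct_ctrl = np.mean(np.asarray(ct_target_ctrl) - np.asarray(ct_reference_ctrl))
    return 2.0 ** (-(dct - dct_ctrl))

control = ddct_fold_change([24.1, 24.3, 23.9], [17.0, 17.2, 16.9],
                           [24.1, 24.3, 23.9], [17.0, 17.2, 16.9])
ox_ldl  = ddct_fold_change([22.6, 22.9, 22.4], [17.1, 17.0, 17.2],
                           [24.1, 24.3, 23.9], [17.0, 17.2, 16.9])

# Two-group comparison with Student's t test; with more groups one would run
# one-way ANOVA and Bonferroni-corrected pairwise tests instead.
t_stat, p_value = stats.ttest_ind(control, ox_ldl)
print(control.round(2), ox_ldl.round(2), round(p_value, 4))
```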
Full-length, serially truncated and deletion fragment versions of the DKK1 promoter were cloned into the luciferase reporter vector pGL3-basic to generate pGL3-DKK1-promoter vectors, which were named P0, P1-P9 and P0-del, respectively. HUVECs were transiently cotransfected with the P0 and pRL-TK vectors and then exposed to ox-LDL for 6 h. A DLR assay showed that ox-LDL significantly increased the DKK1 promoter activity in HUVECs compared with the control. This finding indicated that ox-LDL could regulate DKK1 expression at the transcriptional level ( Figure 4C). Cultured 293T cells were transiently cotransfected with the P0 and pRL-TK vectors and then transfected with PCDNA3.1-Ets-1, PCDNA3.1, negative control (NC) siRNA or Ets-1 siRNA. A DLR assay was performed and showed that Ets-1 siRNA decreased the DKK1 promoter activity in 293T cells ( Figure 4D) and that Ets-1 significantly enhanced the promoter activity in 293T cells ( Figure 4D). To identify putative cis-acting elements and transcription factors contributing to DKK1 promoter activities in the region spanning -2034 to -1834 bp, the online prediction tools JASPAR and PROMO were used. Three possible loci, namely, EBS1, EBS2, and EBS3, existed at -2034 to -2018bp, -2006 to -2000bp, and -1993 to -1987bp, respectively ( Figure 4F). To explore these possible binding sites, we performed gel-shift assays using nuclear extracts from HUVECs. As shown in Figure 4G, a strong DNA complex was observed. The gel-shift experiment revealed that ox-LDL significantly increased DNA-protein complex formation. A 200-fold excess of wild-type cold competitive Ets-1 oligonucleotides (wt) eliminated the formation of the DNA/protein complex, while mutant cold competitive Ets-1 oligonucleotides (mut1, mut2, and mut3 for mutants of EBS1, EBS2, and EBS3, respectively) partially eliminated the formation of the DNA/protein complex ( Figure 4F). As shown in Figure 4G, the EBS3 mutant nearly abolished the formation of the DNA-protein complex, whereas the EBS1 and EBS2 mutants did not. Furthermore, EBS2 was least able to eliminate the formation of the DNA/protein complex ( Figure 4G, lane 7). Therefore, the Ets-1 protein binds mainly to the EBS2 site of the DKK1 promoter. In the supershift lane, an obvious shift was found compared to the original position ( Figure 4G, lane 9). This finding indicates that Ets-1 is a potential transcription factor for the DKK1 gene. To further confirm the above result, cultured 293T cells were transiently cotransfected with the P0 vector (or P0-del), the pRL-TK vector and PCDNA3.1-Ets-1 (P0 group, P0-del group). Further deletion of 6 bp (from -2006 to -2000bp) resulted in a 50% decrease in promoter activity compared to that in the P0 group ( Figure 4H), suggesting an important role of EBS2. As expected, the crosslinked DNA-Ets-1 complexes immunoprecipitated with an Ets-1 antibody were detected by PCR amplification with primers spanning the region of the DKK1 promoter from -2080 to -1894 bp. ChIP also revealed that, compared with the control, ox-LDL significantly increased the binding activity between the Ets-1 protein and the DKK1 promoter ( Figure 4I-4L). The results indicated that Ets-1 increased the transcriptional activity of the DKK1 promoter. ox-LDL induced Ets-1, CBP, and c-jun binding to DKK1 promoter in HUVECs HUVECs were treated with ox-LDL for different durations. Western blot analysis showed that HUVECs treated with ox-LDL had higher c-jun and c-fos expression than the 0 h group (Figure S2A and S2B). 
Immunofluorescence analysis showed that the nuclear translocation of c-jun, not c-fos, increased after ox-LDL treatment ( Figure S2C). siRNA transfection downregulated the expression of c-jun and inhibited the ox-LDL-induced upregulation of DKK1 ( Figure S2D and S2E). Cultured 293T cells were transiently cotransfected with the P0 and pRL-TK vectors and then transfected with NC siRNA or c-jun siRNA. A DLR assay showed that c-jun siRNA decreased DKK1 promoter activity in 293T cells ( Figure S2F). 293T cells were cotransfected with P0, pRL-TK and PCDNA3.1 (control group) or cotransfected with P0-P9, pRL-TK and PCDNA3.1c-jun (group P0 -P9). A DLR assay showed that DKK1 promoter activity was increased in the P0 group and reduced from the P1 group to the P9 group ( Figure S2G). The online prediction tools JASPAR and PROMO were used to identify possible loci and binding sites (-2029 to -2023bp) ( Figure S2H). The results indicated that c-jun but not c-fos was involved in the regulation of DKK1 expression in ox-LDL-treated HUVECs. Previous studies have shown that Ets-1 can recruit the transcriptional coactivator CBP/P300 to target gene promoters and regulate gene expression. We showed that Ets-1 recruited the coactivator CBP to the DKK1 promoter. Pretreatment with a CBP/P300 inhibitor downregulated the expression of CBP/P300 and inhibited the ox-LDL-induced upregulation of DKK1 ( Figure 5A). Cultured HUVECs were transiently cotransfected with the P0 and pRL-TK vectors and then treated with the CBP/P300 inhibitor for 12 h. A DLR assay was performed to determine the DKK1 promoter activity. The CBP/P300 inhibitor significantly decreased DKK1 promoter activity in 293T cells compared with that in the control group ( Figure 5B). The results indicated that CBP but not P300 was involved in the regulation of DKK1 expression in ox-LDL-treated HUVECs. Coimmunoprecipitation (co-IP) was performed after treatment with ox-LDL. The results showed that Ets-1 interacted with the endogenous CBP and with c-jun but not with P300 ( Figure 5C). With a reverse co-IP assay, we showed that CBP also interacted with endogenous c-jun and Ets-1 (Figure 5D). After cotransfection with PCDNA3.1-c-jun and PCDNA3.1-Ets-1, DKK1 promoter activity was increased compared with that for co-transfection with PCDNA3.1-Ets-1 alone (Figure 5E). After cotransfection with PCDNA3.1c-jun and P0-del, the DKK1 promoter activity was decreased compared with that in the P0+ PCDNA3.1-c-jun group (Figure 5F). Taken together, these observations indicated that Ets-1, CBP and c-jun might form a complex to regulate DKK1 activity coordinately. Ets-1 was a functional target of miR-33a-5p and miR33a-5p eliminated angiogenesis in ox-LDL-induced HUVECs To examine potential miRNAs that negatively regulate DKK1 or Ets-1, a bioinformatics approach using multiple prediction algorithms (miRBase, PicTar, and TargetScan v6.1) was used to identify binding sites for miRNAs in the 3'-UTR of DKK1 and Ets-1. This analysis identified miR33a-5p as a potential regulator of DKK1 and Ets-1. ApoE -/mice were given atherogenic chow for 0, 4, 8 and 12 weeks, and miR33a-5p expression in the aortic artery was found to decrease with time ( Figure 6A). HUVECs were treated with ox-LDL for different durations. qRT-PCR showed that ox-LDL downregulated the miR33a-5p expression in HUVECs compared with that in the 0 h group (Figure 6B). Dicer siRNA transfection upregulated the expression of DKK1 and Ets-1 ( Figure S3A). 
Mimic transfection downregulated the expression of DKK1 and Ets-1, while inhibitor transfection upregulated the expression of DKK1 and Ets-1 ( Figure 6C). Transient cotransfection of miR33a-5p mimics with Ets-1 3'-UTR luciferase reporter plasmids resulted in significant repression of luciferase reporter gene expression in 293T cells, whereas cotransfection of 293T cells with NC miRNA or the mutants did not have any effect on luciferase expression. Transient cotransfection of the miR33a-5p inhibitor with Ets-1 3'-UTR luciferase reporter plasmids gave the opposite result ( Figure 6D). The results indicated that miR33a-5p was involved in the regulation of Ets-1 and DKK1 and directly bound to the 3'-UTR of Ets-1. Transient cotransfection of miR33a-5p mimics with DKK1 3'-UTR luciferase reporter plasmids did not have any effect on the expression of luciferase in 293 cells, and cotransfection of 293T cells with NC miRNA or the mutants also did not have any effect on the expression of luciferase. Transient cotransfection of the miR33a-5p inhibitor with DKK1 3'-UTR luciferase reporter plasmids gave the same result ( Figure S3B). The results indicated that miR33a-5p was involved in the regulation of Ets-1 and DKK1 and bound directly to the 3'-UTR of Ets-1. Compared with the NC+ox-LDL group, the miR33a-5p mimics+ox-LDL group showed less migration and tube formation (Figure 6E-6G). Compared with the NCI group, the miR33a-5p inhibitor group showed more migration and tube formation (Figure 6E-6G). The results indicated that miR33a-5p eliminated migration and angiogenesis of ox-LDL-induced HUVECs. Compared with the NC group, the NC+lenti-Ets-1 group showed more migration and tube formation. Compared with the NC+lenti-Ets-1 group, the DKK1 siRNA+lenti-Ets-1 group showed less migration (Figure 7D-AF). These results suggest that Ets-1 causes the migration of ECs and tube formation by inducing DKK1. Discussion In this study, we found that DKK1, a tumorigenesis associated molecule, promoted angiogenesis of carotid atherosclerotic plaques in high fat fed ApoE-/-mice. ox-LDL stimulates the secretion of TNF-α from macrophages and cytokines from ECs, indicates the starting point of atherosclerosis. To further explore the findings in vivo, ox-LDL was used to stimulate HUVECs in vitro in our study [21]. We demonstrated that the high expression of DKK1 under ox-LDL stimulus increased the migration and angiogenesis in HUVECs via CKAP4/PI3K pathway. Data from Western blotting, real-time RT-PCR, electrophoretic mobility shift assay (EMSA) and ChIP revealed that upstream nuclear transcription factor Ets-1 could bind to the DKK1 promoter region, form a complex with CBP and c-jun, increase the transcriptional activity of the DKK1 promoter and promote DKK1 expression in the ox-LDL-treated HUVECs. Meanwhile, miR33a-5p was found to directly target the 3'-UTR of Ets-1 to regulate the expression of Ets-1 and DKK1. To the best of our knowledge, this is the first study to reveal the role and the underlying mechanisms of DKK1 in angiogenesis under the treatment of ox-LDL in HUVECs. Angiogenesis, also named neovascularization, refers to the growth of new blood vessels that sprout from existing blood vessels. It's a complex process involved in differentiation, proliferation, migration and maturation of endothelial cells. 
Studies have shown that angiogenesis within atherosclerotic lesions plays vital roles in plaque growth and instability [22] and is of vital importance in plaque progression, plaque destabilization and thromboembolic events. Several angiogenesis-related genes in endothelial cells, such as VEGF, platelet-derived growth factor (PDGF) and transforming growth factor-β (TGF-β), have been demonstrated to be induced by atherosclerotic risk factors, such as oxidative stress, inflammatory factors and mechanical forces [23,24]. The VEGF family, which consists of five closely related members, namely VEGF-A, B, C and D and placental growth factor, plays important roles in angiogenesis. The release of VEGF-A and the activation of VEGFR-2, a receptor for VEGF-A, contribute significantly to intra-atherosclerotic angiogenesis [25,26]. Studies have also shown that non-coding RNAs regulate endothelial proliferation, migration and tube formation, and ultimately affect angiogenesis [27]. Researchers have explored targeting intra-plaque angiogenesis through inhibition of vascular endothelial growth factor signaling, glycolytic flux and fatty acid oxidation [28][29][30]. However, although anti-angiogenesis therapy has been widely used in cancer treatment, studies on the pharmacological inhibition of this phenomenon in AS are still scarce [31]. In our study, we found that DKK1 was highly expressed in atherosclerotic plaques and promoted angiogenesis by increasing the expression of VEGF-A, VEGFR-2, MMP-2 and MMP-9. CD31 is a marker of angiogenesis [32]; in vivo, we detected the expression of CD31 to assess angiogenesis. The in vitro study in ox-LDL-treated HUVECs also verified these findings, which may provide a new potential intervention target for anti-angiogenesis therapy in AS. DKK1, a secretory glycoprotein of the DKK family, has been found to play vital roles in both cancers and AS. At present, DKK1 has been developed as a serological marker for the diagnosis and prognosis evaluation of several cancers, and also as a new target for cancer treatment. Among registered clinical trials, the DKK1 antibody DKN-01 has entered phase I or phase II trials in advanced biliary tract cancer [7], advanced liver cancer, cholangiocarcinoma, gastric cancer and other tumors, and a promising study in gastric/gastroesophageal junction cancer has been completed [3,33]. Another DKK1 antibody, BHQ880, completed a phase II clinical trial for multiple myeloma in 2020. Oncological treatment may increase the morbidity and mortality of cardiovascular diseases (CVDs) as a side effect [4]. Our previous studies have revealed that DKK1 can promote endothelial apoptosis [5], destroy the tight junctions of the endothelium [34], disturb lipid metabolism [35] and lead to the development of AS. Anti-DKK1 neutralizing antibodies have shown promise and might be beneficial for the treatment of CVDs [7]. Recent studies have reported that angiogenesis is a link between atherosclerosis and tumorigenesis [36]. Investigating the role of DKK1 in neovascularization will benefit both cancer and CVD treatment at the same time. Therefore, we further revealed the role and underlying mechanisms of DKK1 in AS from the perspective of angiogenesis in this study. Our findings showed that DKK1 promoted migration and angiogenesis by upregulating angiogenesis-related molecules through activation of the CKAP4/PI3K pathway, which indicates that anti-DKK1 therapy may prevent tumors and at the same time reduce unstable plaques [37]. 
Furthermore, we explored the upstream regulatory mechanism of DKK1. Previous studies have revealed several mechanisms upstream of DKK1, including histone modification, transcription, posttranscriptional modification and posttranslational modification (phosphorylation, glycosylation). It has been reported that transcription factors (YAP, β-catenin, etc.) bind directly to the DKK1 promoter region to activate its transcription [10,38,39]. In this study, we found that the nuclear transcription factors c-jun and Ets-1 could bind to the DKK1 promoter region and activate the DKK1 promoter. Using DLR, EMSA and ChIP experiments, we also clarified that Ets-1 bound to the -2006 to -2000 bp region of the DKK1 promoter and that c-jun bound to the -2029 to -2023 bp region of the DKK1 promoter. Ets-1, a member of the Ets family of transcription factors, plays important roles in cellular proliferation, migration, vascular remodeling and apoptosis. Hypoxia, inflammatory factors and VEGF upregulate the expression of Ets-1 in ECs [14]. Previous studies showed that Ets-1 performs multiple functions in ECs: (1) Ets-1 directly regulates several vascular genes, such as Flt1, Tek, Kdr, Angpt2, Nrp1, Vwf, Pecam1 and Cdh5, to promote angiogenesis [13], and (2) Ets-1 directly upregulates MMPs and β3 integrin to promote migration [15,16]. Feng et al. found that Ets-1 participated in inflammation-induced endoluminal vascular injury of the carotid artery and promoted AS [40]. We found that the expression of Ets-1 was significantly upregulated by ox-LDL in HUVECs, and that Ets-1 siRNA decreased DKK1 expression and relieved the ox-LDL-induced migration of ECs and angiogenesis, indicating that ox-LDL may alter EC function through Ets-1/DKK1 regulation. These findings further deepen our understanding of the regulatory mechanism of DKK1. In addition, we also found that c-jun, one of the components of the AP-1 complex, upregulated the expression of DKK1 by increasing its promoter activity under ox-LDL stimulation. Previous studies have shown that c-jun promotes the expression of VEGF and induces angiogenesis [41]. The transcriptional coactivators CBP and P300, which contain four transcription-factor-binding domains (TADs) that recruit transcription factors, change chromatin superstructure and activate acetylation [42], can promote the activation of a variety of transcription factors [43], such as c-jun (which binds to the CREB-binding domain) [44], c-fos (which binds to the third zinc finger domain) [45], and Ets-1 (which binds to the first zinc finger domain) [46]. The binding of Ets-1 and AP-1 [14,47] to the coactivator CBP/P300 promotes the connection between these two transcription factors [13]. CBP/P300 is also involved in the acetylation of the DKK1 promoter in breast cancer [48]. In our study, we found protein binding among CBP, Ets-1 and c-jun. Ets-1 increased the transcriptional activity of DKK1 with the aid of c-jun and CBP. The effect of Ets-1 on migration and angiogenesis was reversed by a CBP/P300 inhibitor. These results indicate that CBP assists the effects of Ets-1/DKK1 on EC function. MiRNAs are small (19-23 nucleotides) non-coding RNAs that regulate target genes by binding to the 3' untranslated region (UTR) of mRNA post-transcriptionally. Studies have shown that miRNAs play vital roles in the development of AS and in the pathological processes of ECs. Previous studies reported that the expression of Ets-1 in ECs can be inhibited by miR155 [49], miR-199a-5p [13], the miR-200 family [50] and miR-221/222 [49]. 
In addition, miR-217 [51], miR152 [11], miR-376a [52] and other miRNAs decrease the expression of DKK1 by inhibiting translation or decreasing mRNA stability in different diseases. A previous study found that DKK1 is regulated by miR217, miR33a, miR33b, miR103a, miR93 and miR106a in diabetic cardiomyopathy [20]. In this study, we further clarified the possible miRNAs involved in the regulation of Ets-1 and DKK1. microRNA.org, STARBASE, TargetScan and other microRNA programs predicted that both Ets-1 and DKK1 have probable miR33a-5p binding sites in their 3'-UTRs. In a DLR assay, miR33a-5p bound to the Ets-1 3'-UTR but not the DKK1 3'-UTR, which suggested that miR33a-5p may inhibit the expression of Ets-1 and thus play a role in the regulation of DKK1. At present, there are few studies on miR33a. There are two subtypes of miR33 in humans, miR33a and miR33b [53]. Human miR33a and the only mouse homologue (miR-33) are located in intron 16 of SREBP-2. MiR33a has been reported to regulate chemoresistance, proliferation, invasion and angiogenesis in tumors [54]. MiR33a is also closely related to lipid metabolism (inhibition of ABCA1 and cholesterol efflux, inhibition of HDL production) [18]. Horie T, et al. found that miR-33 deficiency reduced atherosclerotic plaque size and lipid content, suggesting that miR-33 inhibition may prevent atherosclerosis progression [53]. Talepoor, et al. found that ECs exhibit a protective effect and inhibit miR-33a expression in monocytes when cocultured with monocytes; the result indicated that miR33a might play a protective role in the formation of atherosclerosis [55]. Likewise, the expression of miR33a-5p decreased over time in the high-fat-fed ApoE-/- mice in our study, indicating downregulation of miR33a-5p during the development of atherosclerosis. However, whether miR33a affects plaque development by affecting endothelial cell function needs further proof; the effect of EC-specific miR33a on atherosclerosis is a limitation of this study. In our study, we further explored whether miR-33a exerts marked effects on angiogenesis in ox-LDL-stimulated HUVECs and found that miR33a-5p directly bound to the Ets-1 3'-UTR, decreased the expression of Ets-1 and DKK1, and attenuated the migration of endothelial cells and angiogenesis. The miR-33a/Ets-1/DKK1 axis exerted important effects on migration and angiogenesis in HUVECs under ox-LDL stimulation. The protein complex formed by CBP, c-jun and Ets-1 facilitated this process. Deep exploration of the underlying mechanisms of angiogenesis in atherosclerosis is essential for designing novel therapeutic targets and is also important for the recognition of cardiovascular side effects during related anti-tumor treatment in the future.
Improving Zero-Shot Cross-lingual Transfer for Multilingual Question Answering over Knowledge Graph Multilingual question answering over knowledge graph (KGQA) aims to derive answers from a knowledge graph (KG) for questions in multiple languages. To be widely applicable, we focus on its zero-shot transfer setting: we can only access training data in a high-resource language, yet need to answer multilingual questions without any labeled data in the target languages. A straightforward approach is to resort to pre-trained multilingual models (e.g., mBERT) for cross-lingual transfer, but there is still a significant gap in KGQA performance between source and target languages. In this paper, we exploit unsupervised bilingual lexicon induction (BLI) to map training questions in the source language into questions in the target language as augmented training data, which circumvents the language inconsistency between training and inference. Furthermore, we propose an adversarial learning strategy to alleviate the syntax disorder of the augmented data, pushing the model toward both language- and syntax-independence. Consequently, our model narrows the gap in zero-shot cross-lingual transfer. Experiments on two multilingual KGQA datasets with 11 zero-resource languages verify its effectiveness. Introduction With the advance of large-scale human-curated knowledge graphs (KGs), e.g., DBpedia (Auer et al., 2007) and Freebase (Bollacker et al., 2008), question answering over knowledge graph (KGQA) has become a crucial natural language processing (NLP) task for answering factoid questions. It has been integrated into real-world applications such as search engines and personal assistants, so it has attracted increasing attention from both academia and industry (Liang et al., 2017; Hu et al., 2018; Shen et al., 2019). Recently, a rising demand on KGQA systems is to answer multilingual questions, motivating us to focus on multilingual KGQA. However, building a large-scale KG, as well as annotating QA data, is costly for each new language, not to mention the many minority languages with few native annotators. Therefore, we adopt a zero-shot cross-lingual transfer setting: a KGQA model is developed to perform inference on multilingual questions with access only to training data and the associated KG in a high-resource language (e.g., English). Building on the success of pre-trained monolingual encoders (Peters et al., 2018), some works (e.g., mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020)) pre-train a Transformer encoder (Vaswani et al., 2017) on large-scale non-parallel multilingual corpora in a self-supervised manner. Given an NLP task, a general paradigm for zero-shot cross-lingual transfer is then to fine-tune a pre-trained multilingual encoder on data in a data-rich (source) language; the fine-tuned model is generalizable enough to perform inference in other low-resource (target) languages with surprisingly good prediction quality. This paradigm can be adapted to KGQA to build symbolic logical forms (e.g., query graphs (Yih et al., 2015)) for KG queries. However, a considerable KGQA performance gap between source and target languages has been observed, consistent with the empirical results of prior works on a wide range of other tasks (Conneau et al., 2020). To bridge the gap, translation approaches have proven effective on multilingual benchmarks (Hu et al., 2020; Liang et al., 2020).
As a way of data augmentation, they perform source-to-target translation to obtain multilingual training data. Combined with further advanced techniques (Cui et al., 2019; Fang et al., 2020), they achieve state-of-the-art effectiveness. But these approaches rely heavily on a well-performing translator, which is not always available, especially for a minority language, since its training requires a large volume of parallel bilingual corpora. Therefore, to be applicable to more languages, we assume in this work that neither translators nor parallel corpora are available. To adapt the translation approaches to our zero-resource scenario, we naturally propose to replace the fully supervised machine translator with unsupervised bilingual lexicon induction (BLI) for word-level translation. Specifically, as in prior works (Lample et al., 2018b; Artetxe et al., 2018), a BLI model is first trained on non-parallel bilingual corpora. Then, via the bilingual word alignments in BLI, we map the training questions in the source language into questions in the target languages to obtain augmented multilingual training data. Consequently, even simply learning a KGQA model on the augmented data can circumvent the language inconsistency between training and inference and thus bridge the performance gap in zero-shot cross-lingual transfer. To explain why BLI is competent, we observe that KGQA mainly involves phrase-level semantics (Berant et al., 2013); compared to other tasks that depend on sentence-level contextualization, KGQA is insensitive to long-term dependency but benefits from language consistency. Moreover, we propose an adversarial strategy to mitigate the syntax disorder caused by BLI. Specifically, we place a discriminator on top of the encoder, trained to distinguish whether the input is a grammatical question in the source language or a BLI-translated one in a target language. Meanwhile, jointly with the KGQA objective, the encoder is fine-tuned to fool the discriminator so that the questions' representations are both language- and syntax-agnostic. The trained KGQA model is thus robust to syntax disorder and insensitive to the question language, leading to superior performance on multilingual KGQA. Experiments conducted on two multilingual KGQA datasets with 11 zero-resource languages verify the effectiveness of our approach. KGQA Task Definition We give a background on monolingual KGQA, followed by multilingual KGQA and its data format. Monolingual KGQA. A knowledge graph G is comprised of a set of directed triples (h, p, t), where h ∈ E denotes a head entity, t ∈ E ∪ L denotes a tail entity or literal value, and p ∈ P denotes a predicate between h and t. KGQA aims at generating answers for a natural language question q based on G. Usually a model M first parses the question q into an intermediate logical form, which is then transformed into a SPARQL query, and the answer is derived by executing the SPARQL query on G. An example is shown in Figure 1: the question at the bottom, the intermediate logical form in the upper right, and the corresponding SPARQL query at the top. Following Maheshwari et al. (2019), we take a restricted subset of λ-calculus, the query graph, as the intermediate logical form. Typically, a query graph consists of four types of nodes: grounded entity(s) (in rounded rectangles), existential variable(s) "?y" (in circles), a lambda variable "?x" (in a shaded circle), and an aggregation function (in a diamond).
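To make the query-graph-to-SPARQL step concrete, the following is a minimal sketch (not the authors' code) of how such a graph could be represented and serialized. The predicate and type IRIs mirror the Figure 1 example described later in the text, but are illustrative assumptions, as is the whole data structure.

```python
# A hypothetical query-graph container: grounded entity -> (?y ->) ?x,
# with an optional type constraint on ?x and an aggregator.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class QueryGraph:
    entity: str                      # grounded topic entity, e.g. "dbr:Ven.-Ram"
    chain: List[str]                 # inferential chain of predicates toward ?x
    type_constraint: Optional[str]   # type of the lambda variable ?x, if any
    aggregator: str = "Entity"       # one of Bool / Count / Entity

    def to_sparql(self) -> str:
        # 1-hop: ?x relates to e directly; 2-hop: via existential variable ?y.
        if len(self.chain) == 1:
            body = f"?x <{self.chain[0]}> <{self.entity}> ."
        else:
            body = (f"?y <{self.chain[0]}> <{self.entity}> .\n"
                    f"  ?x <{self.chain[1]}> ?y .")
        if self.type_constraint:
            body += f"\n  ?x rdf:type <{self.type_constraint}> ."
        head = "SELECT (COUNT(?x) AS ?n)" if self.aggregator == "Count" else "SELECT ?x"
        return f"{head} WHERE {{\n  {body}\n}}"

g = QueryGraph(entity="dbr:Ven.-Ram",
               chain=["dbo:leaderName", "dbp:prizes"],
               type_constraint="dbo:Scientist",
               aggregator="Count")
print(g.to_sparql())
```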
Considering that entity linking is a standalone system with many available tools, we assume the grounded entities in a question are given. This avoids uncertainty caused by entity linking and lets us focus on the query graph construction process. Multilingual KGQA. We focus on a zero-shot cross-lingual transfer setting of KGQA. That is, we only have a labeled dataset $D^{src} = \{(q^{src}_l, s^{src}_l)\}_{l=1}^{N}$, as well as the associated knowledge graph G, in a high-resource language src, where $q^{src}_l$ and $s^{src}_l$ denote a natural language question and a formal query, respectively. We omit the example index subscript l of $D^{src}$ below. Multilingual KGQA is to learn a model M that can answer questions $q^{tgt}$ in multiple target languages tgt. A recent baseline is to fine-tune pre-trained multilingual models (e.g., mBERT) in src and directly perform inference in tgt. Methodology This section starts with a base framework for monolingual KGQA, followed by our proposed multilingual solutions. Lastly, details about training and inference are elaborated. Base Monolingual Framework Following Maheshwari et al. (2019), we present a base pipeline framework as in Figure 1 to construct query graphs. It consists of three modules: 1) inferential chain ranking, 2) type constraint ranking, and 3) aggregator classification. Inferential Chain Ranking. An inferential chain (IC) refers to a sequence of directed predicates from a grounded entity to the lambda variable ?x. Given an entity e grounded from the question q, we first search its chain candidates $C^e = (c^e_1, \dots, c^e_n)$ by exploring legitimate predicate sequences starting from e in G. Following previous works (Yih et al., 2015), we score each candidate as

$$a^e_i = \mathrm{SemMatch}(q, c^e_i; \theta^{(IC)}), \quad (1)$$

where $a^e_i$ is a score for their relatedness, and the $\theta^{(IC)}$-parameterized SemMatch(·) can be any model for pairwise relatedness, such as a co-attention network or BERT-based matching (Devlin et al., 2019). Finally, the result of this module is the top-1 ranked inferential chain, i.e., $\bar{c}^e = \arg\max_{c^e_i} a^e_i,\ \forall i = 1, \dots, n$. Note that if there are multiple grounded entities in q, we predict an inferential chain for each entity. Type Constraint Ranking. Type constraints (TC) refer to the entity types specified in the question for each variable on an inferential chain. They can be used to disambiguate the entities and thus boost KGQA performance. For example, the answer entity(s) to the example question in Figure 1 are constrained by the type Scientist. Hence, type constraint ranking is proposed to capture such information, which is also achieved by a semantic matching model. Specifically, given the resulting inferential chain $\bar{c}^e$, we first enumerate type candidates $T^e_y = \{t^e_{y1}, \dots\}$ for the existential variable and $T^e_x = \{t^e_{x1}, \dots\}$ for the lambda variable. Then, because there is scarcely any overlap of gold type constraints between the two variables, a single semantic matching model is adequate for both. Thus, we define the model to derive relatedness scores as

$$a^e_{*j} = \mathrm{SemMatch}(q, t^e_{*j}; \theta^{(TC)}), \quad (3)$$

where $* \in \{y, x\}$ and $j = 1, \dots$. Finally, we get the type constraints for the existential and lambda variables with a threshold $\gamma^{(thresh)}$, i.e., a candidate type is applied only if its score exceeds $\gamma^{(thresh)}$. Aggregator Classification. Given several answer formats in the dataset, aggregator classification (AC) is presented to distinguish among the formats Bool, Count, and Entity(s). The principle of each is detailed in the middle right of Figure 1. Formally, a simple text classifier suffices, i.e.,

$$p^{(AC)} = \mathrm{Classifier}(q; \theta^{(AC)}), \quad (5)$$

where Classifier(·) is composed of a contextualized encoder, a pooler, and an MLP with softmax.
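The SemMatch(·) scorer shared by the two ranking modules can be sketched as a cross-encoder, as below. The pooling choice ([CLS]) and the 1-way MLP head follow the description later in the paper; the tokenizer settings and max length are assumptions.

```python
# A hedged sketch of SemMatch: encode "[CLS] question [SEP] candidate [SEP]"
# with a pre-trained multilingual encoder and map the [CLS] vector to a scalar.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SemMatch(nn.Module):
    def __init__(self, model_name: str = "bert-base-multilingual-cased"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.encoder = AutoModel.from_pretrained(model_name)
        self.score_head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, question: str, candidate: str) -> torch.Tensor:
        enc = self.tokenizer(question, candidate, return_tensors="pt",
                             truncation=True, max_length=128)
        out = self.encoder(**enc)
        cls = out.last_hidden_state[:, 0]         # [CLS] pooling
        return self.score_head(cls).squeeze(-1)   # scalar relatedness score

# Ranking a candidate set then reduces to scoring and taking the argmax:
# scores = torch.stack([model(q, z) for z in candidates])
# best = candidates[int(scores.argmax())]
```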
Once the above is completed, the results compose a query graph, which is transformed into SPARQL and then executed on G for the answer. Proposed Multilingual KGQA Approach Built upon the base framework detailed before, we extend it with multilingual inference capability, i.e., multilingual KGQA. We are in line with a recently popular zero-shot transfer paradigm (Conneau et al., 2020; Fang et al., 2020): a pre-trained multilingual encoder is fine-tuned only in src, and a translation-based data augmentation technique is integrated to narrow the performance gap between src and tgt. To emphasize the gap in KGQA, a 65% F1 score in English (src) vs. 54% in Italian (tgt) is observed for mBERT zero-shot transfer in our pipeline without any multilingual augmentation. Distinct from prior works in this paradigm that require well-trained translators, we propose a fully unsupervised way to achieve wide applicability with neither tgt KGQA data nor src-tgt parallel corpora. It is natural to resort to bilingual lexicon induction (BLI), which offers unsupervised training and acceptable word-level translation quality. In the following, we first present a BLI-based augmentation for multilingual training data, followed by our adaptation of the monolingual base framework (§3.1) to the augmented data. Finally, we propose an adversarial learning strategy coupled with BLI-based augmentation for robust cross-lingual transfer. An illustration of our proposed semantic matching model with symbolic candidates is in Figure 2. BLI-based Multilingual Augmentation We leverage the BLI model by Lample et al. (2018b). First, it pre-trains monolingual word embeddings $U^{src} \in \mathbb{R}^{d \times |V^{src}|}$ and $U^{tgt} \in \mathbb{R}^{d \times |V^{tgt}|}$ in src and tgt, respectively. Then, it learns a linear transformation to unsupervisedly align the word embeddings of the two languages into one space, i.e.,

$$W = \arg\min_{W} \sum_{(k,l)} \mathrm{Distance}(W u^{src}_k, u^{tgt}_l).$$

The unsupervised alignment between the k-th src word and the l-th tgt word is captured by adversarial learning, and Distance(·) is implemented by cross-domain similarity local scaling (CSLS). Please refer to (Lample et al., 2018b) for details. Based on the BLI model, we can build a word-by-word translator, $\mathrm{BLI}^{(trans)}_{src \to tgt}$, from src to an arbitrary tgt, as long as a monolingual corpus for tgt is available. Note that when performing word-level translation, we also employ CSLS to mitigate the hubness problem and find the most likely alignment. Then, we translate each question $q^{src}$ in $D^{src}$ to other languages:

$$q^{tgt} = \mathrm{BLI}^{(trans)}_{src \to tgt}(q^{src}),$$

where src denotes English (en) in our experiments while tgt can be one of 11 other languages, such as Farsi (fa), Italian (it), etc. Consequently, $q^{tgt}$ constitutes the augmented multilingual data for model training. Remark: Although BLI provides multilingual data, open questions remain. 1) Why is BLI competent here: KGQA mainly involves the word-/phrase-level semantics of symbolic candidates, rather than the sentence-level semantics of most other NLP tasks. As in Modules 1 and 2 of Figure 1, the matching only involves morphological similarity (e.g., scientist vs. <dbo:Scientist>), synonymy (e.g., won an award vs. <dbp:prizes>), etc. Thus, KGQA is less sensitive to long-term context than other tasks; Berant et al. (2013) leveraged this to propose a phrase matching model for monolingual KGQA. 2) Will BLI lead to error propagation: Since the BLI model achieves a high Precision@10 but a relatively low Precision@1, a wrong translation and the corresponding ground truth are semantically similar.
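A minimal sketch of the word-by-word translator follows, assuming the aligned embeddings (W already applied to the source side) are available as L2-normalized numpy arrays; the vocabulary containers and the OOV policy are our assumptions. It uses the CSLS criterion of Lample et al. (2018b) to pick each target word.

```python
# Word-level BLI translation with CSLS: penalize "hub" target words by their
# average similarity to their k nearest neighbors.
import numpy as np

def csls_translate(word, src_vocab, tgt_words, U_src, U_tgt, k=10):
    """src_vocab: dict word -> row index into U_src; tgt_words: list of words
    aligned with the rows of U_tgt. Rows of both matrices are unit-norm."""
    if word not in src_vocab:
        return word                       # keep OOV tokens (entities, numbers)
    x = U_src[src_vocab[word]]            # aligned source embedding
    cos = U_tgt @ x                       # cosine similarity to all tgt words
    # r_T(y): mean similarity of each tgt word to its k nearest src words
    # (in practice this is precomputed once for the whole vocabulary).
    r_t = np.sort(U_tgt @ U_src.T, axis=1)[:, -k:].mean(axis=1)
    # r_S(x): mean similarity of the src word to its k nearest tgt words.
    r_s = np.sort(cos)[-k:].mean()
    csls = 2 * cos - r_t - r_s            # CSLS score per target word
    return tgt_words[int(csls.argmax())]

# q_tgt = " ".join(csls_translate(w, src_vocab, tgt_words, U_src, U_tgt)
#                  for w in q_src.split())
```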
Intuitively, their word embeddings are spatially close to each other, so a wrong word-level translation is equivalent to applying tiny noise to the word embeddings, which hardly leads to error propagation when a robust pre-trained Transformer-based encoder is used. Multilingual Models Symbolic Candidate Processing. For an inferential chain, we enrich each predicate on the chain by 1) transforming each camel-case phrase into sequence-formatted words, 2) prefixing +/- for directional information, and 3) concatenating the top-frequency types under the local closed-world assumption (Krompaß et al., 2015). For a type constraint, we simply transform each camel-case phrase into sequence-formatted words. In the following, we denote the text of a processed symbolic candidate as z, regardless of whether it is a chain or a type. Multilingual Semantic Matching Model. As detailed in §3.1, both the inferential chain ranking and type constraint ranking modules are built upon a semantic matching model between the question q and a symbolic candidate z. Note that z is always in src while q can be in either src or BLI-translated tgt. Following common practice, we first concatenate q and z with special tokens (Devlin et al., 2019), and the result is passed into a pre-trained multilingual Transformer encoder, i.e., v = Pool(Transformer(text)), where Pool(·) uses the contextualized embedding of [CLS] to represent the entire input. In this paper, the encoder alternates between mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020). Lastly, a 1-way multi-layer perceptron (MLP) built upon v calculates the matching score in Eq.(1) or Eq.(3). Multilingual Classification Model. As detailed in §3.1, a text classification model is required to identify the aggregator. To fit our zero-resource multilingual scenario, the model, consisting of a pre-trained multilingual encoder and an MLP-based prediction layer, is directly fine-tuned on the augmented questions, i.e., $q^{src}$ and $q^{tgt}$. Syntax-agnostic Adversarial Strategy Although training the KGQA model on BLI-augmented multilingual data circumvents language inconsistency, it inevitably introduces syntax disorder and grammatical problems, which could hurt performance. We thus present an adversarial strategy, paired with the BLI-augmented data, to push the Transformer encoder toward language- and syntax-independent representations. Formally, a discriminator is built upon the single vector representation v produced by the Transformer encoder:

$$p^{(src)} = \mathrm{Discriminator}(v; \theta^{(dis)}),$$

where $p^{(src)}$ is the probability that the question is in the source language. The discriminator is trained to minimize

$$\mathcal{L}^{(adv)}_{\theta^{(dis)}} = -\mathbb{I}^{(src)} \log p^{(src)} - \mathbb{I}^{(tgt)} \log(1 - p^{(src)}). \quad (10)$$

On the contrary, the Transformer encoder is trained to fool the discriminator by minimizing the adversarial loss with flipped labels, i.e.,

$$\mathcal{L}^{(adv)}_{\theta^{(enc)}} = -\mathbb{I}^{(tgt)} \log p^{(src)} - \mathbb{I}^{(src)} \log(1 - p^{(src)}),$$

where $\mathbb{I}^{(tgt)}$ denotes whether the question is in BLI-translated tgt, and $\theta^{(enc)}$ denotes the encoder's parameters in each module. Training Before constructing the objectives, we conduct uniform negative sampling for the two ranking models, with the number of negatives capped at 100. The gold labels of a question q for the three modules stem from the formal query $s^{src}$. A margin-based hinge loss is defined for inferential chain ranking:

$$\hat{\mathcal{L}}^{(IC)} = \sum_{D} \sum_{i \in N} \max\big(0,\ \lambda - \tilde{a}^e + \hat{a}^e_i\big),$$

where D is the augmented dataset, N is a set of negative chains, $\tilde{a}^e$ is derived from the gold chain, $\hat{a}^e_i$ is derived from a negative chain, and λ is the margin. Similarly, a hinge loss $\hat{\mathcal{L}}^{(TC)}$ is defined for type constraint ranking. Lastly, the loss of aggregator classification is the cross-entropy

$$\hat{\mathcal{L}}^{(AC)} = -\log p^{(AC)}_{[i=g]},$$

where $p^{(AC)}_{[i=g]}$ denotes the probability corresponding to the gold aggregator class.
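A hedged sketch of one adversarial update follows. It alternates the two objectives above: the discriminator learns to tell src questions from BLI-translated ones, then the encoder (through v) is updated against flipped labels. The two-optimizer arrangement and the discriminator architecture are assumptions about the training mechanics, not the authors' exact recipe.

```python
import torch
import torch.nn as nn

# Discriminator on the pooled [CLS] vector v (768-d for mBERT/XLM-R base).
discriminator = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 1))
bce = nn.BCEWithLogitsLoss()

def adversarial_step(v, is_src, enc_opt, dis_opt, alpha):
    """v: (batch, 768) encoder output; is_src: (batch,) bool labels."""
    # 1) Train the discriminator: src -> label 1, BLI-tgt -> label 0.
    dis_opt.zero_grad()
    logit = discriminator(v.detach()).squeeze(-1)
    bce(logit, is_src.float()).backward()
    dis_opt.step()
    # 2) Train the encoder (gradients flow through v) to fool it, implemented
    #    here as BCE against flipped labels, weighted by alpha.
    enc_opt.zero_grad()
    logit = discriminator(v).squeeze(-1)
    (alpha * bce(logit, 1.0 - is_src.float())).backward()
    enc_opt.step()
```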
During training, the adversarial loss is added to the loss function of each module to compose the final training objective, i.e., $\mathcal{L}^{(*)} = \hat{\mathcal{L}}^{(*)} + \alpha \mathcal{L}^{(adv)}_{\theta^{(enc)}}$. Inference Algorithm Algorithm 1 gives a detailed procedure for model inference in a target language. We also provide an explanation of the query graph in Figure 1. In the example query graph shown on the right of the figure, a topic entity is first grounded as e = "<dbr:Ven.-Ram>" (rounded rectangle), an existential variable (circle) denotes the intermediate entity set ?y = {h | (h, leaderName, e)}, a lambda variable (shaded circle) denotes the answer entity set ?x = {h | (h, prizes, e), e ∈ ?y}, and an aggregator COUNT is finally applied to ?x, which is constrained by the entity type "<dbo:Scientist>". Note that the existential variable does not exist if only a 1-hop relation is expressed in a question, and that if multiple topic entities are grounded, the multiple "?x" sets are merged by intersection.

Algorithm 1 Inference in Target Language.
Require: A question q in tgt and its grounded topic entities E_q; KG G; models θ^(IC), θ^(TC), θ^(AC)
1: Search the chain candidates C^e on G, ∀e ∈ E_q
2: Rank each C^e by Eq.(1), and keep the top-3 in C^e
3: C^e ← {c^e | c^e ∈ C^e ∧ Size(?x ∈ c^e) > 0}
4: c̄^e ← Null
5: if Size(C^e) > 0 then c̄^e ← the top-1 inferential chain in C^e
6: end if
7: Merge chains {c̄^e | ∀e ∈ E_q ∧ c̄^e is not Null}
8: Rank type constraint candidates by Eq.(3) and apply the top-1 constraint with score > γ^(thresh)
9: Generate SPARQL and execute on G for the answer entity set A
10: Identify the aggregator for q by Eq.(5)
11: A ← Aggregate(A) by following Figure 1
12: return A

Datasets and Evaluation Metrics We evaluate the proposed approach on two datasets, LC-QuAD (Trivedi et al., 2017) and QALD-multilingual (Usbeck et al., 2018), both of which contain questions with corresponding SPARQL queries over DBpedia. DBpedia is a large-scale knowledge graph extracted from Wikipedia pages, with 6 million entities, 60 thousand predicates, and 13 billion triples in the English edition. LC-QuAD. LC-QuAD is a large-scale complex question answering dataset containing 5,000 English question-SPARQL pairs. We follow the official split with 1,000 questions in the test set, and further split the original training set into training/valid sets with 3,500/500 questions. To evaluate the effectiveness of multilingual KGQA, the questions in the test set are translated into 10 languages (fa, de, ro, it, ru, fr, nl, es, hi, pt) using Google Translator. QALD-multilingual. QALD is a series of evaluation campaigns on question answering over linked data. We collect all multilingual questions along with their SPARQL queries from QALD-4 to QALD-9 and filter out some out-of-scope ones. There are 429 distinct question-SPARQL pairs in total, and most are expressed in 12 languages (en, fa, de, ro, it, ru, fr, nl, es, hi_IN, pt, pt_BR). Considering the small size of this dataset, we take all QALD-multilingual questions as the test set and use the training data of LC-QuAD for model training. Evaluation Metrics. Following Maheshwari et al. (2019), we adopt two widely used metrics: inferential chain accuracy (ICA) and macro F1 score. The former measures the accuracy (i.e., Precision@1) of the inferential chain model and is defined as the percentage of correctly predicted inferential chains. The macro F1 score measures the performance of the final answers. Please refer to (Maheshwari et al., 2019) for details.
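For readability, here is a direct Python rendering of Algorithm 1. It is a sketch: the helpers (search_chains, rank_chains, answer_set_size, merge_chains, rank_types, to_sparql, run_sparql, classify_aggregator, aggregate) are hypothetical stand-ins for the trained modules and the KG endpoint described above, not a published API.

```python
def infer(question, topic_entities, kg, gamma_thresh=0.7):
    chains = {}
    for e in topic_entities:
        cand = search_chains(kg, e)                              # step 1
        top3 = rank_chains(question, cand)[:3]                   # step 2, Eq.(1)
        nonempty = [c for c in top3 if answer_set_size(kg, c) > 0]  # step 3
        chains[e] = nonempty[0] if nonempty else None            # steps 4-6
    merged = merge_chains([c for c in chains.values() if c])     # step 7
    t, score = rank_types(question, merged)                      # step 8, Eq.(3)
    constraint = t if score > gamma_thresh else None
    answers = run_sparql(kg, to_sparql(merged, constraint))      # step 9
    agg = classify_aggregator(question)                          # step 10, Eq.(5)
    return aggregate(answers, agg)                               # steps 11-12
```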
Experimental Setting We evaluate our approach with two multilingual encoders, mBERT-base and XLM-R-base. The embedding and hidden sizes in both models are 768. We use the Adam optimizer (Kingma and Ba, 2015) to optimize the KGQA loss, with a learning rate of 5 × 10^-5 and linear warmup (Vaswani et al., 2017). The maximum number of training epochs, the number of warm-up epochs, and the batch size are set to 35, 3, and 32, respectively. The discriminator is trained along with each module's objective, with α set to 5 × 10^-4 for learning to fool; the discriminator itself is optimized via Adam with a learning rate of 5 × 10^-5. γ^(thresh) for the type constraint model is set to 0.7. We follow (Maheshwari et al., 2019) and use the same values for the other parameters in model training. Main Results We compare our approach with a natural, widely used baseline, which fine-tunes a pre-trained multilingual model (e.g., mBERT, XLM-R) on the source language and then directly applies it to target languages. Comparisons on QALD-multilingual and LC-QuAD with mBERT are reported in Tables 1 and 2, respectively. Our approach outperforms the baseline significantly on both datasets for all languages. ICA is improved by 1%-4%, and by 2.9% on average, on the QALD dataset. The improvement on LC-QuAD is even larger: the averaged ICA and F1 scores over all languages increase by around 7% and 4%, respectively. Notably, with the BLI-augmented data and syntax-agnostic adversarial learning, the performance on source-language (i.e., English) questions also increases by a large margin: the F1 score rises from 65% to 66.7% on QALD, and from 80% to 85% on LC-QuAD. We also evaluate the proposed approach using XLM-R as the multilingual encoder. The comparison on QALD-multilingual is shown in Table 3. We observe improvements similar to those with mBERT, where both the averaged ICA and F1 scores increase by around 1%, verifying the effectiveness of our proposed approach. Ablation Study Our approach consists of two important components, BLI-based data augmentation and a syntax-agnostic adversarial learning strategy. We conduct an ablation study to investigate the effect of each component. Table 4 reports the averaged results over all target languages on QALD-multilingual and LC-QuAD-multilingual. The table shows that, with BLI-based data augmentation, our approach increases the ICA score on QALD by 1.7%, and syntax-agnostic adversarial learning further improves it by 1.2%. Similar improvements are observed on LC-QuAD, verifying the effectiveness of both components of our approach. Analysis Impact of BLI Accuracy. We assess the impact of BLI accuracy on five Romance languages (it, fr, es, pt, and ro) by injecting noise into the BLI results. Specifically, when mapping source-language words into a target language via BLI, we randomly replace translated words with wrong ones with probability p (10%, 20%, 30%, 40%, or 50%). The averaged performance of our approach on the five languages is reported in Figure 3. As more noise is added, the performance of our approach drops, in accordance with intuition. But even when 50% of the translated words are noisy, our method still outperforms the baseline model; for example, it is superior to the baseline by 1% in terms of ICA at 50% noise, showing the robustness of our approach. Deep Dive into Adversarial Learning. We take the inferential chain ranking model as an example and take a deep dive into the impact of syntax-agnostic adversarial learning.
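The noise-injection protocol is simple enough to sketch directly; the random-replacement scheme below is our reading of "randomly replace translated words with wrong ones with probability p", and the vocabulary argument is an assumption.

```python
import random

def inject_noise(translated_words, tgt_vocabulary, p=0.3, seed=0):
    """With probability p, replace each BLI-translated word with a random
    (deliberately wrong) word drawn from the target-language vocabulary."""
    rng = random.Random(seed)
    return [rng.choice(tgt_vocabulary) if rng.random() < p else w
            for w in translated_words]
```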
The adversarial learning involves a discriminator, which distinguishes whether a question is grammatical or syntax-disordered, and an inferential chain ranking model, which identifies the gold chain. Their loss values, i.e., $\mathcal{L}^{(adv)}_{\theta^{(dis)}}$ and $\hat{\mathcal{L}}^{(IC)}$, are plotted in Figure 4. The classification loss of the discriminator quickly drops and then slowly rises, indicating that the discriminator first performs well and is later fooled by the language-/syntax-agnostic embeddings generated by mBERT. Meanwhile, the inferential chain ranking loss drops quickly and stays very small in the following epochs, showing that while mBERT is generating syntax-agnostic embeddings, it also supports inferential chain ranking very well. Case Study We take several examples of inferential chain ranking to show how our approach works. We use t-SNE (Maaten and Hinton, 2008) to map the embedding of each question-chain pair into a two-dimensional data point. A question in a specific language is paired with its gold inferential chain and its top-1 ranked negative candidate. Figure 5 compares the baseline with our approach for two questions; positive and negative examples of the same question in different languages are plotted in the same figure. We can see that the baseline model cannot distinguish positive inferential chains from negative ones well, while our approach learns a language-agnostic representation that focuses more on ranking the inferential chain candidates. Related Work There are mainly two categories of approaches to monolingual question answering over knowledge graph (KGQA). (1) Information retrieval-based approaches align a question with its answer candidates in the same semantic space, where the candidates usually stem from the KG neighbors of the topic entity detected in the question (Bordes et al., 2014a,b; Dong et al., 2015; Jain, 2016; Xu et al., 2016; Hao et al., 2017). (2) Semantic parsing-based approaches first translate a question into a corresponding logical form, e.g., a program (Guo et al., 2018; Shen et al., 2019) or a query graph (Yih et al., 2015; Jia and Liang, 2016; Xiao et al., 2016; Dong and Lapata, 2016; Liang et al., 2017; Dong and Lapata, 2018; Maheshwari et al., 2019), and then execute the logical form over the KG to derive the final answer. Note that a logical form is usually composed of a series of grammars or operators pre-defined by experts. This paper follows the second category, generating query graphs for KG execution. To the best of our knowledge, only a few works target multilingual KGQA (Hakimov et al., 2017; Veyseh, 2016); they rely on extensive multilingual training data with hand-crafted features and are inapplicable to the zero-shot transfer scenario. We therefore adopt the pipeline by Maheshwari et al. (2019) for the monolingual scenario as our base model, but update the encoders with the Transformer (Vaswani et al., 2017) to strengthen their expressive power and to accommodate recent pre-trained multilingual initializations. Given task-specific data in a source language, cross-lingual models are trained to perform inference in target languages in a low- or zero-resource scenario. Typically, cross-lingual models follow two paradigms. 1) The universal encoding-based paradigm represents multilingual natural language text as language-agnostic embeddings in the same semantic space.
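The case-study visualization can be reproduced along the following lines. The sketch assumes the pooled pair embeddings have already been extracted (e.g., via the SemMatch pooling above); marker and color choices are ours, not the paper's.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_pairs(embeddings, labels):
    """embeddings: (n, 768) pooled question-chain pair vectors;
    labels: 1 for the gold chain, 0 for the top-1 negative candidate."""
    pts = TSNE(n_components=2, random_state=0).fit_transform(np.asarray(embeddings))
    labels = np.asarray(labels)
    for lab, marker, name in [(1, "o", "gold chain"), (0, "x", "negative")]:
        m = labels == lab
        plt.scatter(pts[m, 0], pts[m, 1], marker=marker, label=name)
    plt.legend()
    plt.show()
```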
Early works focus on aligning multilingual word embeddings (Mikolov et al., 2013; Faruqui and Dyer, 2014; Xu et al., 2018), while recent efforts mainly target large-scale pre-trained multilingual encoders, such as mBERT (Devlin et al., 2019), XLM (Conneau and Lample, 2019), Unicoder (Huang et al., 2019a), XLM-R (Conneau et al., 2020), InfoXLM (Chi et al., 2020), and ALM. These can perform zero-shot cross-lingual transfer by training in the source language and directly performing inference in target languages. 2) The translation-based paradigm employs well-trained machine translators to map training or test examples in the source language into the target language. Recent common practice leverages the second paradigm to generate multilingual data that narrows the zero-shot cross-lingual performance gap of the first paradigm, which leads to state-of-the-art results on several cross-lingual benchmarks. In contrast, we consider a zero-resource scenario where translators are unavailable, and we thus resort to unsupervised BLI in light of KGQA's characteristics. As a branch of universal encoding at the word level, bilingual lexicon induction (BLI) (a.k.a. cross-lingual word embedding, CLWE) aligns bilingual word embeddings in the same space, where the embeddings are pre-trained on monolingual corpora and the alignment is trained in either a (semi-)supervised or an unsupervised manner (Smith et al., 2017; Lample et al., 2018b; Artetxe et al., 2018, 2019; Huang et al., 2019b; Patra et al., 2019; Karan et al., 2020; Zhao et al., 2020; Ren et al., 2020). To alleviate the "hubness" problem (Dinu and Baroni, 2015) in BLI, alternative distance measurements have been proposed to substitute for the nearest neighbor (NN) criterion during alignment, such as inverted softmax (Smith et al., 2017) and CSLS (Lample et al., 2018b). In addition to building bilingual dictionaries via word-level translation, a well-trained BLI model can serve as a weak baseline for sentence-level translation (Lample et al., 2018a), a seed model for unsupervised translation (Lample et al., 2018a), or a bilingual variant of the copy mechanism in summarization. Moreover, adversarial training is often integrated into cross-lingual models for language-agnostic representation learning, such as unsupervised BLI (Lample et al., 2018b), unsupervised translation (Lample et al., 2018a), cross-lingual sequence labeling (Kim et al., 2017; Huang et al., 2019c), and cross-lingual classification. In contrast, our adversarial strategy not only targets language-agnostic representations but also aims to make the model insensitive to syntax disorder and thus competent in the zero-resource scenario. Conclusion We propose a novel approach for zero-shot cross-lingual transfer in multilingual KGQA, which augments training data via bilingual lexicon induction and leverages a syntax-agnostic adversarial learning strategy to alleviate the syntax-disorder problem caused by BLI. Experimental results on two multilingual KGQA datasets in 11 zero-resource languages verify its effectiveness.
Isolation and Characterization of the Small Subunit of the Uptake Hydrogenase from the Cyanobacterium Nostoc punctiforme* Background: Cyanobacterial uptake hydrogenases perform hydrogen oxidation in nitrogen-fixing cyanobacteria, but their biophysical properties are unknown. Results: The small subunit, HupS, from the Nostoc punctiforme uptake hydrogenase was heterologously expressed and spectroscopically characterized under different redox conditions. Conclusion: Recombinant HupS incorporates three iron-sulfur clusters with unusual iron coordination. Significance: We provide the foundation for engineering of cyanobacterial uptake hydrogenases. In nitrogen-fixing cyanobacteria, hydrogen evolution is associated with hydrogenases and nitrogenase, making these enzymes interesting targets for genetic engineering aimed at increased hydrogen production. Nostoc punctiforme ATCC 29133 is a filamentous cyanobacterium that expresses the uptake hydrogenase HupSL in heterocysts under nitrogen-fixing conditions. Little is known about the structural and biophysical properties of HupSL. The small subunit, HupS, has been postulated to contain three iron-sulfur clusters, but the details regarding their nature have been unclear due to unusual cluster binding motifs in the amino acid sequence. We now report the cloning and heterologous expression of Nostoc punctiforme HupS as a fusion protein, f-HupS. We have characterized the anaerobically purified protein by UV-visible and EPR spectroscopies. Our results show that f-HupS contains three iron-sulfur clusters. The UV-visible absorption of f-HupS has bands at ∼340 and 420 nm, typical of iron-sulfur clusters. The EPR spectrum of oxidized f-HupS shows a narrow g = 2.023 resonance, characteristic of a low-spin (S = 1/2) [3Fe-4S] cluster. Reduced f-HupS presents complex EPR spectra with overlapping resonances centered at g = 1.94, g = 1.91, and g = 1.88, typical of low-spin (S = 1/2) [4Fe-4S] clusters. Analysis of the spectroscopic data allowed us to distinguish between two species attributable to two distinct [4Fe-4S] clusters, in addition to the [3Fe-4S] cluster. This indicates that f-HupS binds [4Fe-4S] clusters despite the presence of unusual coordinating amino acids. Furthermore, our expression and purification of what appears to be an intact HupS protein allows future studies of the significance of ligand nature for the redox properties of the iron-sulfur clusters of HupS. The promise of hydrogen (H2) as a fuel and general energy carrier has boosted interest in biological H2 production as a renewable energy source (1,2). Hydrogenases are metalloenzymes, occurring in a wide variety of microorganisms, that catalyze the reversible oxidation of H2: H2 ⇌ 2H+ + 2e−. Engineering hydrogenases for applications in biotechnological H2 production is one strategy for increasing H2 output that has attracted increasing research in later years (2-4). Most hydrogenases fall into two main classes: the nickel-iron (NiFe) hydrogenases, containing a NiFe complex in the catalytic site, and the FeFe hydrogenases, containing a binuclear iron complex. A third class contains a mononuclear iron center. To date, several crystal structures have been determined for different prokaryotic NiFe hydrogenases, e.g., from the Desulfovibrio genus, Ralstonia eutropha, and Allochromatium vinosum, revealing several shared features (5-11). Common to all NiFe hydrogenases are two protein subunits, referred to as the large and small subunit, respectively.
The large subunit contains the active site, where H2 oxidation or production is catalyzed by an inorganic NiFe complex. The small subunit harbors, in most cases, three iron-sulfur (FeS) clusters: a proximal (closest to the active site on the large subunit) [4Fe-4S] cluster, a medial [3Fe-4S] cluster, and a distal [4Fe-4S] cluster. The three FeS clusters are aligned so as to form an electron conduit between the protein surface and the active NiFe site across a total distance of ∼30 Å (Fig. 1A). Depending on the nature of the hydrogenase, the electrons can be transported to or away from the active site. In enzymes where H2 is oxidized, so-called uptake hydrogenases, the electrons are directed via the FeS clusters to the surface. In so-called bidirectional hydrogenases, the FeS clusters can drive electron flow either toward or away from the active site, depending on the environmental and metabolic conditions of the host. The medial location of the [3Fe-4S] cluster in the small subunit is puzzling, as this type of cluster usually presents a higher reduction potential than [4Fe-4S] clusters; it may thus act as an electron trap in the electron transfer chain (12,13). Cyanobacteria are phototrophic microorganisms that can produce H2 from solar energy and water. They are therefore attractive targets for efforts to improve their productivity via genetic engineering. Most cyanobacteria possess at least one copy each of the uptake and bidirectional hydrogenases (14). To date, all known cyanobacterial hydrogenases are predicted to be NiFe hydrogenases based on sequence homology (14,15). Only one cyanobacterial hydrogenase, the bidirectional hydrogenase from Synechocystis strain PCC 6803, has so far been isolated and characterized (16). Only the uptake hydrogenase, HupSL, is found in the heterocyst-forming, nitrogen-fixing cyanobacterium Nostoc punctiforme ATCC 29133 (identical with strain PCC 73102, and henceforth referred to as N. punctiforme). The uptake hydrogenase in filamentous strains has been found by immunogold labeling and immunolocalization in both heterocysts, which provide a microaerobic environment, and vegetative cells under N2-fixing conditions (17,18), but was suggested to be in an inactive form in the vegetative cells (17). Most uptake hydrogenases investigated so far are rapidly inhibited by molecular oxygen (13), and a cyanobacterial uptake hydrogenase localized in the vegetative cells would undoubtedly be inactivated by photosynthetic oxygen evolution (14). Recently, it was demonstrated that the active enzyme is produced solely in heterocysts under N2-fixing conditions (19). The maturation system and genomic context of HupSL in N. punctiforme have been investigated extensively. The hupSL promoter region and binding sites for the transcriptional regulator NtcA have been identified (20); the extended hyp operon region, comprising the assembly and maturation system of HupSL, has been shown to be regulated by the transcriptional regulator CalA (19); and HupW, a protease needed for the cleavage of a C-terminal peptide from the large subunit HupL, has been shown to be transcribed in N2-fixing cultures (20). In a related organism, Nostoc (Anabaena) sp. PCC 7120, HupW has been shown to specifically cleave HupL (21). Because the uptake hydrogenase of N. punctiforme is an H2-oxidizing enzyme, the electron transfer in the small subunit, HupS, is expected to be directed away from the active site.
Following oxidation of H2, electrons are presumed to move first from the active site to the proximal cluster, then to the medial and distal clusters, before they reach the native redox partner protein. The relative reduction potentials of the three FeS clusters have been suggested to play an important role in steering this directionality (9,22,23). In contrast with the structurally better-studied hydrogenases from, e.g., sulfate-reducing bacteria, the cyanobacterial uptake hydrogenases share unusual FeS cluster binding motifs involving non-cysteine residues: an asparagine instead of a cysteine in the proximal cluster and a glutamine instead of a histidine in the distal one (Fig. 1B). Most studies on NiFe hydrogenase metal centers have focused either on characterizing the diverse states of the catalytic NiFe site in the large subunit or on cluster conversion between [3Fe-4S] and [4Fe-4S]. However, little is known about how non-cysteinyl coordination of the FeS clusters affects the activity of the enzyme. These differences, together with the fact that HupSL is the only hydrogenase present in N. punctiforme, make this organism an attractive model system for studies of how modulation of the FeS cluster environment affects the rate of hydrogen uptake. In this work, we report the cloning and heterologous expression of HupS from N. punctiforme ATCC 29133, presenting UV-visible absorption and EPR spectroscopy data. HupS was expressed as a soluble fusion protein, f-HupS, in Escherichia coli with the purpose of investigating the nature of the FeS clusters in cyanobacterial uptake hydrogenases. To our knowledge, this constitutes the first report of spectroscopic data from an FeS cluster-containing subunit of a hydrogenase without the presence of the nickel-iron-containing large subunit, and therefore without overlapping interfering features from the nickel-iron active site. EXPERIMENTAL PROCEDURES Cloning—A 1.3-kb fragment containing hupS and the upstream region including promoter fragment E (20) was amplified from N. punctiforme ATCC 29133 genomic DNA using PCR. After gel purification, the fragment was used in overlap extension PCR to add the sequence for a (Gly3Ser)2Gly linker and a Strep(II)-tag at the 3′ end of hupS (using primers 5′-CGC CTG CAG TTC ACC TTT AAA ATC-3′ and 5′-GTA CCT ATT TTT TCT AAA TTG CGG GGA CTC CAG CCA GAA CCT CCT CCA GAA C-3′). The 1.5-kb fused product, including an upstream PstI and a downstream SacI site, was further amplified by PCR, gel-purified, and then blunt-end ligated into pJET1.2 (Fermentas). The ligation product was transformed into TOP10 cells (Invitrogen), and these were plated on LB-agar plates containing 50 μg/ml ampicillin. Selected positive clones were confirmed by sequencing (Macrogen) and digested with PstI and SacI (Fermentas) to yield a 1.4-kb fragment that was gel-purified and ligated into pSUN119 (24) using T4 ligase (Fermentas). The resulting vector, pSUN119HupSStrepII, was used as template for PCR using DreamTaq polymerase (Fermentas) to amplify a fragment containing only the sequences for HupS, linker, and Strep(II)-tag (but not promoter fragment E), flanked by a 5′ SacI restriction site just upstream of the hupS start codon and a 3′ HindIII site after the stop codon (using primers 5′-AAC AGA GCT CCC ATG ACT AAC G-3′ and 5′-CTA GCG AAG CTT TTA TTT TTC AAA TTG-3′).
After gel purification, both the resulting 1-kb fragment and pET43.1a(+) (Novagen) were digested with SacI and HindIII (FastDigest, Fermentas), gel-purified, and used for ligation using the Quick Ligation kit (New England Biolabs). The construct was made so as to express the polypeptide HupS-linker-Strep(II)-tag fused to the C terminus of the solubilization protein NusA (Nus·Tag™) present in the commercial vector; the resulting fusion protein, f-HupS, contains protease recognition sites for both thrombin and enterokinase between NusA and HupS. The ligation product was transformed into DH5α cells, which were then plated on LB-agar plates containing 50 μg/ml ampicillin. Colonies were screened for positive clones by colony PCR and then sequenced. A positive clone, pET431HupS, was subsequently used for protein expression of f-HupS. An overview of the cloning process is found in Fig. 2. Sequence alignment was performed using ClustalW (25). Protein Expression and Purification—All solutions used in anaerobic work were purged for at least 30 min with N2 prior to use. Protein purification was carried out in a glove box (MBraun) under an argon atmosphere. All manipulations were done at 4°C except where otherwise stated. 50-ml pre-cultures of transformed E. coli BL21(DE3) (Novagen) were grown overnight at 37°C with 200 rpm shaking in LB medium containing 50 μg/ml ampicillin, and used to inoculate 9 liters of autoinduction medium ZYP-5052 (24), 1.5 liters per 3-liter Erlenmeyer flask, also supplemented with 50 μg/ml ampicillin. Cultures were grown aerobically at 37°C for 2 h and then at 20°C for another 18-22 h. Cells were then collected by centrifugation at 6,000 × g, and the pellets were resuspended in cold buffer W (100 mM Tris-HCl, pH 7.5, 150 mM NaCl), centrifuged again at 4,500 × g, and frozen at −20°C. The day before EPR experiments were performed, cells were quickly thawed and resuspended in buffer W containing 1 mM MgCl2 and protease inhibitor according to the manufacturer's instructions (cOmplete EDTA-free mixture tablets, Roche Applied Science). The buffer also contained 10 mM glucose, 0.5 units/ml glucose oxidase (Sigma-Aldrich), and 0.5 units/ml catalase (Sigma-Aldrich) to maintain anaerobic conditions (26). Cells were broken by sonication in a Sonics Vibra-Cell VCX750 at 750 watts for 10 min, using 10-s pulses at 70% amplitude. DNase I (20 μg/ml; Sigma-Aldrich), RNase A (40 μg/ml; Sigma-Aldrich), and avidin (0.5 mg/liter culture; ProSpec) were then added, and the resulting crude extract was centrifuged at 184,000 × g for 2 h. The supernatant (soluble fraction) was immediately transferred to a vial on ice that was capped and kept anaerobic under a constant flow of N2 for 30 min. The soluble fraction was then introduced into the glove box and applied onto a pre-equilibrated Strep-Tactin column (IBA) (in buffer W containing 0.5 mg/ml avidin) at room temperature. After application of the soluble fraction, the column was washed with five bed volumes of buffer W plus avidin, then two bed volumes of buffer W alone. The protein was eluted with three bed volumes of buffer W containing 5 mM desthiobiotin (IBA). The fraction containing the highest amount of protein was collected and transferred to EPR tubes (150 μl/tube). Remaining fractions were stored anaerobically at −80°C.
SDS-PAGE and Western Blotting—The protein contents of the different purification fractions were analyzed on 10% SDS-PAGE minigels using Laemmli's buffer system (27) in either an SE250 Mighty Small II unit (Hoefer) or a Mini-Protean Tetra Electrophoresis system (Bio-Rad). Total protein was detected directly on gels using PageBlue protein staining (Thermo Scientific). For identification of f-HupS in the same fractions, minigels were blotted onto nitrocellulose membranes using either a TE22 Mini Tank Transfer Unit (GE Healthcare) or a Trans-Blot Turbo Transfer System (Bio-Rad). Membranes were treated with Strep-Tactin-HRP conjugate (IBA) for chemiluminescence detection according to the manufacturer's instructions. Chemiluminescence was detected after treatment with the Immun-Star HRP Chemiluminescent kit (Bio-Rad) in a ChemiDoc XRS system (Bio-Rad), using exposure times between 10 s and 3 min. Protein Quantification—Protein quantification was performed using the Coomassie (Bradford) protein assay kit (Thermo Scientific), with bovine serum albumin as a standard, according to the manufacturer's instructions. UV-visible Spectrophotometry—Samples were monitored in anaerobic 1-ml quartz cuvettes on a Varian Cary 50 Bio UV-visible spectrophotometer. Empty sealed cuvettes were purged with nitrogen gas for at least 10 min. f-HupS (1.9 mg protein/ml) was then added through a rubber septum, and spectra were taken immediately. EPR Spectroscopy—Samples were investigated by continuous-wave X-band EPR either directly as purified or after reduction or oxidation by addition of sodium dithionite (Sigma-Aldrich) or potassium hexacyanoferrate(III) (ferricyanide; Merck), respectively, directly into the EPR tube. EPR tubes were capped, brought out of the glove box, and immediately frozen in liquid nitrogen; dithionite-reduced samples were left to incubate on ice for up to 30 min before freezing. The protein concentration was 1 mg/ml. Final concentrations of added reagents varied from equimolar to 20-fold the protein concentration and were added so as not to change the final sample volume by more than 5%. Measurements were performed on a Bruker ELEXYS E500 spectrometer using an ER049X SuperX microwave bridge in a Bruker SHQ0601 cavity equipped with an Oxford Instruments continuous-flow cryostat and an ITC 503 temperature controller (Oxford Instruments). Measurement temperatures ranged from 4.5 to 25 K, using liquid helium as coolant. The spectrometer was controlled by the Xepr software package (Bruker). EPR spectral simulations were run in WINEPR SimFonia (version 1.26, Bruker). RESULTS Heterologous Expression of HupS—Previous attempts at expression of HupS under the T7lac promoter were not successful because the purified protein was insoluble (D. Camsund and P. Lindblad, personal communication). We therefore made a construct, f-HupS, in which HupS was fused to the C terminus of NusA (Nus·Tag), a protein tag specifically developed for its ability to solubilize difficult target proteins that are otherwise prone to aggregate (28). The construct was made by cloning the N. punctiforme ATCC 29133 hupS open reading frame into pET43.1a(+) downstream of the vector-encoded Nus·Tag and including a C-terminal Strep(II)-tag to allow purification via affinity chromatography (Fig. 2). Expression of f-HupS was confirmed by SDS-PAGE analysis of whole-cell contents, as judged by the appearance of a ∼97-kDa polypeptide after 20 h of growth (data not shown). The Nus·Tag approach was quite successful, and f-HupS was partially soluble (∼40-50%, as judged by SDS-PAGE, data not shown).
It could therefore be produced in amounts allowing spectroscopic analysis. When the protein was purified anaerobically under an argon atmosphere, typically 0.6-1.2 mg of pure, soluble f-HupS were obtained per liter of culture (Fig. 3A, lanes e and h). The purification of f-HupS could be achieved to near homogeneity, as judged by 10% SDS-PAGE (Fig. 3A, lanes g and h). The degree of purification was determined to be ∼30-fold compared with the crude extract. Attempts to purify f-HupS aerobically resulted in an unstable protein that suffered proteolysis, as judged by the presence of multiple bands in Western blots when using a detection system for the Strep(II)-tag (Fig. 3A, lane d). UV-visible Spectrophotometry—NusA lacks any cofactors that could interfere with visible absorption or EPR spectroscopy of the FeS clusters in HupS. The f-HupS fusion protein had a light brown color and was analyzed by UV-visible spectrophotometry as anaerobically purified (Fig. 3B). The protein presents two discernible absorption bands in the 300-500-nm region, with maxima at ∼340 and 420 nm, respectively, as well as a broad shoulder in the 500-600-nm region. The shape and location of these bands are typical of broad ligand-to-metal charge transfer in iron-sulfur proteins such as ferredoxins (29,30) and indicated that iron-sulfur clusters had been successfully incorporated into f-HupS. This is further substantiated by the 420/315 nm absorbance ratio of 0.43, which is comparable with the corresponding ratio of 0.68 found in ferredoxin II from Desulfovibrio gigas (29). In contrast, the often-used 420/280 nm ratio is not a useful measure of purity due to the presence of NusA, which has a large contribution at 280 nm and necessarily leads to a lower 420/280 nm absorbance ratio than in ferredoxins.

[Fig. 3B legend: UV-visible absorption spectrum of the anaerobically purified f-HupS (1.9 mg protein/ml), taken in a sealed cuvette previously purged with nitrogen gas; the arrows mark absorption bands with maxima at ∼340 and 420 nm, typically seen in FeS proteins. a.u., arbitrary units.]

EPR Spectroscopy—Fig. 4 shows EPR spectra of anaerobically purified f-HupS. f-HupS presented no distinguishable EPR features at low (8 μW) microwave power when measured immediately after purification under anaerobic conditions (Fig. 4A, top). Upon oxidation with equimolar amounts of ferricyanide, a narrow g = 2.023 resonance appeared in the low-microwave-power spectrum (Fig. 4A, bottom). This resonance was easily saturated at low microwave power between 4 and 25 K, and the signal intensity quickly decreased above 15 K (not shown). We attribute the signal at g = 2.023 to a [3Fe-4S]+ cluster in a low-spin (S = 1/2) state, in analogy to very similar spectra observed for other [3Fe-4S] proteins such as ferredoxin II from D. gigas (31) and the oxidized form of Desulfovibrio africanus ferredoxin III (32). At higher microwave power (2 milliwatts), some small features became visible in the anaerobically purified sample: a signal around g = 2.02, and smaller signals at g ≥ 2.04 (Fig. 4B, top). The minor signals with g ≥ 2.04 are attributable to a small contamination of adventitiously bound manganese(II). Upon reduction with dithionite, the g = 2.023 resonance disappeared, and instead a wider and more complex spectrum was observed (Fig. 4B, second trace).
In particular, we observed a large new resonance centered at g = 1.94, with two additional signals at g = 1.91 and g = 1.88. By investigating the temperature variation and microwave power saturation of this spectrum, we could distinguish two separate spectroscopic species (Fig. 4, C and D). When the temperature was raised from 7 to 15 K, the g = 1.91 resonance increased in intensity relative to the baseline, whereas the g = 1.88 resonance was almost unchanged (Fig. 4C). By comparing the g = 1.91 and g = 1.88 intensities at different microwave powers at a fixed temperature (15 K), we observed that the two resonances behave differently (Fig. 4D). These different behaviors in temperature and microwave power dependence demonstrate the presence of two magnetically distinct species. This is important, since it shows that our preparation of f-HupS contains two different low-potential [4Fe-4S] clusters.

[Fig. 4 legend: A, top spectrum, f-HupS as purified; bottom spectrum, after oxidation of the sample with ferricyanide; EPR conditions: 8 μW applied microwave power, temperature = 7 K. B, top spectrum, f-HupS as purified; second spectrum, after reduction with dithionite; EPR conditions: 2 mW applied microwave power, temperature = 7 K; third and fourth spectra (Sim 1, Sim 2), simulated spectra of the two components of the reduced-sample spectrum (see main text for details); bottom spectrum (Sim 1 + Sim 2), mathematical addition of the Sim 1 and Sim 2 simulations, weighted 50% each. C, dithionite-reduced f-HupS measured at different temperatures at the same microwave power (2 mW); top spectrum, 15 K; bottom spectrum, 7 K; the arrows point to spectral features belonging to the two components, which change differently with temperature. D, variation in EPR signal amplitude with applied microwave power (P) in dithionite-reduced f-HupS, measured at 15 K; amplitudes were measured from the resonances indicated with arrows in C; gray circles, the g = 1.91 resonance; black squares, the g = 1.88 resonance. The modulation amplitude in all measurements was 10 G; protein concentration was 9 mg/ml.]

Electron transfer from dithionite to FeS clusters is known to be sluggish in the absence of redox mediators (27). Therefore, the sample was treated with dithionite for varying times, from below 1 min up to 30 min. We could observe the signal of the reduced protein already after a 10-min incubation (data not shown), and its intensity increased further at the longer treatment time. At least 90% of the sample was reduced after 10 min, and a treatment of less than one minute was insufficient to reduce all FeS clusters (data not shown). We therefore chose to treat reduced samples with dithionite for 30 min to ensure the maximum possible intensity of the observed resonances. The spectrum of the reduced sample could be simulated as a superposition of two S = 1/2 species. This was achieved by mathematical addition of two separate simulations, both for S = 1/2 species, with a 50% contribution of each simulation to the final result (Fig. 4B, last three traces). The first simulation includes g_x = 1.905 (40 G width) and g_y = 1.946 (25 G width) components, whereas the second includes g_x = 1.877 (30 G width) and g_y = 1.940 (30 G width) components (the g_z components in the acquired spectra were hidden under the manganese contamination and were therefore broadened to 200 G at g = 2.14).
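As a toy numerical illustration of the 50/50 superposition, the sketch below places a Gaussian line at the resonance field B = hν/(gμ_B) for each g-component quoted above and sums the two components with equal weights. It is not a substitute for SimFonia's powder-pattern simulation; the X-band frequency of 9.4 GHz and the pure-Gaussian line shapes are assumptions.

```python
import numpy as np

H = 6.62607015e-34       # Planck constant, J s
MU_B = 9.2740100783e-24  # Bohr magneton, J/T
NU = 9.4e9               # assumed X-band microwave frequency, Hz

def gauss_line(B, g, width_G):
    """Gaussian absorption line at the resonance field for g-value g."""
    B0 = H * NU / (g * MU_B) * 1e4   # resonance field in Gauss (1 T = 1e4 G)
    return np.exp(-0.5 * ((B - B0) / width_G) ** 2)

B = np.linspace(3300, 3700, 2000)    # magnetic field axis in Gauss
sim1 = gauss_line(B, 1.946, 25) + gauss_line(B, 1.905, 40)   # species 1
sim2 = gauss_line(B, 1.940, 30) + gauss_line(B, 1.877, 30)   # species 2
total = 0.5 * sim1 + 0.5 * sim2      # equal-weight superposition, as in Fig. 4B
```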
This result indicates that the experimental spectrum arises from the overlap of two different S = 1/2 species. Thus, the simulations corroborate our conclusion that two distinct [4Fe-4S] clusters are observed in the reduced-sample spectrum. When we analyzed aerobically purified f-HupS, we observed a spectral feature at g = 2.023 similar to the one from the [3Fe-4S] cluster observed in the oxidized anaerobic preparations, as well as a sharp feature at g = 4.3 typical of unspecifically bound mononuclear iron (data not shown). In contrast to the anaerobically purified protein, no resonances attributable to [4Fe-4S] clusters were detected in the aerobically purified sample after reduction by dithionite (data not shown). Part of the observed [3Fe-4S] signal in this case may therefore arise from [4Fe-4S] clusters that have undergone oxidation and degradation, a phenomenon common in [4Fe-4S]-containing proteins exposed to dioxygen (e.g., Refs. 33-36). We also observed proteolysis, as evaluated by SDS-PAGE and Western blotting of those samples, where several bands reacted with the antibody (Fig. 3A, lane d). These results show that HupS is an oxygen-sensitive protein, at least in the absence of HupL. However, aerobic conditions during the expression and initial purification phases were feasible due to the reducing intracellular environment of E. coli (37). This was reinforced during cell lysis by the use of a glucose oxidase/catalase system (26) as an extra precaution against the deleterious effects of dioxygen. DISCUSSION Previous studies of NiFe hydrogenases have mainly focused on the structure and reactivity of the catalytic NiFe site in the large subunit. More recently, some interest has also been directed to how the FeS clusters in the small subunit modulate overall enzyme activity, mainly through mutagenesis studies targeting cluster-coordinating amino acids (9,38,39). However, spectroscopic characterization of the FeS clusters in the holoenzyme is difficult due to interference from the NiFe site, both because of overlapping signals in EPR spectroscopy and because of magnetic coupling between the NiFe site and the FeS clusters. In addition, the enzyme we targeted for our studies, Nostoc punctiforme HupSL, is expressed only in heterocysts under nitrogen-fixing conditions (20,40), which precludes its purification in adequate yields for biophysical characterization. We therefore chose to express the small subunit HupS in E. coli and characterize it separately from the large subunit. By using the Nus·Tag, we found that heterologous expression of soluble HupS is possible when using a solubilization fusion protein. This is different from heterologous expression of HupS alone, which rendered an insoluble product. The fusion protein, f-HupS, yielded a partially soluble product, but part of the protein was found in the insoluble fractions of the cells. Interestingly, it has been shown that heterologous expression of HupL from Lyngbya majuscula CCAP 1446/4 in E. coli also results in an insoluble product (41). Thus, it seems that both the large (HupL) and the small (HupS) subunits of HupSL are unstable on their own and difficult to obtain in soluble form when purified alone, probably reflecting the rather large contact interface between the small and large subunits of known NiFe hydrogenases (12). In part, this was possible to overcome by use of the Nus·Tag to create the fusion protein f-HupS.
The large extra protein obviously assists in maintaining HupS in a soluble form, accessible for spectroscopy. This is a useful result, but due to the still rather poor solubility and stability of the fusion protein, we chose not to cleave HupS from its solubilization fusion partner NusA. The preparation yielded soluble protein with a brown color reminiscent of other iron-sulfur proteins, notably ferredoxins, and was studied by UV-visible and EPR spectroscopies. Both confirmed that f-HupS incorporated FeS centers. In addition, the EPR spectra of anaerobically purified f-HupS allowed us to distinguish two different types of clusters. We assigned the two species to [3Fe-4S] and [4Fe-4S] clusters, which is in accordance with structurally known hydrogenases. The EPR signature from a [3Fe-4S]+ cluster in f-HupS appeared upon oxidation with ferricyanide. After incubation with dithionite, two different EPR signals from two distinct [4Fe-4S]+ clusters were observed. In hydrogenases with known structure, the medial [3Fe-4S]+ cluster has been reported to have a relatively high redox potential and is EPR active in its oxidized form (9,42). The distal and proximal clusters, on the other hand, have more negative potentials and would be observed by EPR only after reduction. Therefore, our results show that our purified f-HupS contains three FeS centers with properties similar to those of the small subunits in intact NiFe hydrogenases. No cyanobacterial uptake hydrogenase has so far been spectroscopically or structurally characterized. Two characteristic deviations in the small subunit amino acid sequence distinguish the cyanobacterial uptake hydrogenases from their better studied counterparts in, e.g., the Desulfovibrio family. The first is the absence of a cysteine in the proximal cluster motif. The corresponding position of this residue would be number 15 in the N. punctiforme sequence. This residue is found to be an asparagine in all the cyanobacterial HupS proteins. The second difference is a glutamine instead of a histidine at position 100 (Fig. 1B, also cf. Ref. 44) in the distal cluster binding motif in N. punctiforme. From the amino acid sequence, it could be argued that the lack of conventional ligands for both the proximal and distal clusters would lead to a different cluster composition than in previously known hydrogenases. However, we observe two distinct [4Fe-4S] cluster EPR signals from the cyanobacterial HupS, which strongly supports the existence of both a proximal and a distal cluster despite the unusual ligand configuration. In the absence of the normal cysteine ligand in position 15, the proximal cluster could be coordinated by a cysteine residue located 13 amino acids downstream of the usual motif (Cys-27). However, we find this unlikely because a comparison with known crystal structures reveals that this downstream cysteine would be located in a loop ~20 Å away from the proximal cluster and therefore not within coordinating distance. Instead, we suggest that Asn-15 is a coordinating residue for the proximal cluster. In addition, we suggest that Gln-100 coordinates the distal cluster in the absence of a histidine in that position. The cyanobacterial-like uptake hydrogenase HoxKG from Acidithiobacillus ferrooxidans, a chemolithotrophic, aerobic bacterium, has been purified (43) and studied by EPR (44). The small subunit HoxK and the cyanobacterial HupS share the amino acid signatures described above in their FeS cluster binding motifs.
From EPR spectroscopy in the Acidithiobacillus enzyme, the magnetic interaction between the proximal FeS cluster and the NiFe active site was found to be different from the "standard" hydrogenases from A. vinosum and D. gigas. It was suggested that this is due to an unusual ligand sphere in the A. ferrooxidans HoxKG proximal cluster, although the authors did not go so far as to suggest coordination by asparagine (44). Furthermore, the low iron:protein ratio made the existence of the distal cluster unclear, and the authors suggested that the distal cluster was lacking due to the absence of adequate protein ligands. Whether or not A. ferrooxidans HoxKG may lack one of the clusters, the situation is different in our purified f-HupS protein which clearly contains three FeS clusters. In summary, we have for the first time isolated, through heterologous expression, the small subunit of a cyanobacterial uptake hydrogenase. This enzyme is only found in the heterocysts in N. punctiforme, which limits the availability for spectroscopic characterization of this enzyme. Furthermore, heterologous expression of HupS provides a foundation for engineering of the electron transfer chain for introduction in heterocysts. The protein was found to contain three FeS clusters, in accordance with previously isolated enzymes from other bacterial species. Although the FeS binding motifs show differences from earlier studied enzymes, their EPR signatures show that the protein nevertheless contains two [4Fe-4S] and one [3Fe-4S] cluster. To clarify the role of the coordinating ligands of N. punctiforme HupS, we are currently performing mutagenesis studies. We also further investigate how different FeS cluster binding motifs affect the redox potentials of the clusters.
7,086.8
2013-05-06T00:00:00.000
[ "Biology", "Chemistry", "Environmental Science" ]
Eight brain structures mediate the age-related alterations of the working memory: forward and backward digit span tasks

Introduction
Working memory (WM) as one of the executive functions is an essential neurocognitive ability for daily life. Findings have suggested that aging is often associated with working memory and neural decline, but the brain structures and resting-state brain networks that mediate age-related differences in WM remain unclear.
Methods
A sample consisting of 252 healthy participants in the age range of 20 to 70 years was used. Several cognitive tasks, including the n-back task and the forward and backward digit span tests, were used. Also, resting-state functional imaging, as well as structural imaging using a 3T MRI scanner, were performed, resulting in 85 gray matter volumes and five resting-state networks, namely the anterior and posterior default mode, the right and left executive control, and the salience networks. Also, mediation analyses were used to investigate the role of gray matter volumes and resting-state networks in the relationship between age and WM.
Results
Behaviorally, aging was associated with decreased performance in the digit span task. Also, aging was associated with a decreased gray matter volume in 80 brain regions, and with a decreased activity in the anterior default mode network, executive control, and salience networks. Importantly, the path analysis showed that the GMV of the medial orbitofrontal, precentral, parieto-occipital, amygdala, middle occipital, posterior cingulate, and thalamus areas mediated the age-related differences in the forward digit span task, and the GMV of the superior temporal gyrus mediated the age-related differences in the backward digit span task.
Discussion
This study identified the brain structures mediating the relationship between age and working memory, and we hope that our research provides an opportunity for early detection of individuals at risk of age-related memory decline.

Introduction
Normal aging begins a series of gradual changes in the human brain (Batouli et al., 2014b). In particular, cognitive neuroscientists have reported that regardless of the conditions of dementia or mild cognitive impairment (MCI), aging is associated with a decline in a set of essential cognitive functions, especially working memory (WM) performance (Bosnes et al., 2022; Sisakhti et al., 2024; Klencklen et al., 2017; Sisakhti et al., 2023). Working memory is considered an executive function skill that has a limited capacity to store and manipulate information temporarily. To put it more clearly, working memory is considered a vital brain system that provides the storage and manipulation of information required for other complex cognitive tasks such as reasoning, comprehension, learning, and language (Baddeley, 1992).
It has been reported that the degree of decline in cognitive abilities varies among older adults (Wilson et al., 2002).This means that a mild decrease in cognitive functions is observed in some older adults, while in others, a significant change in cognitive functions is observed (Cohen et al., 2019).These observed individual differences in older adults suggest that other factors mediate the age-cognition relationship.Despite existing scientific reports on age-related decline in working memory (Fabiani, 2012), several factors positively or negatively mediate the relationship between age and working memory that may increase or decrease this decline (Cansino et al., 2018).Given that brain changes are common in older age (Lockhart and DeCarli, 2014) and may affect cognition, various brain measures can be considered as mediators to clarify the relationship between age and working memory. Among the factors that can be considered as a mediator between age and working memory is the brain structure.Examinations of brain volume in elderly adults suggest that the gray matter volume (GMV) and white matter (WM) volumes of the brain decrease with age (Driscoll et al., 2009;Farokhian et al., 2017;Giorgio et al., 2010).For example, GMV has been estimated to decrease by about 3 to 5% per decade (Resnick et al., 2003;Sisakhti et al., 2022).It is noteworthy that the degree of brain atrophy, like cognitive abilities, is not a homogeneous process throughout the brain (Fjell et al., 2014).The frontal and temporal lobes, which are involved in cognitive functions, show the greatest age-related decline in GMV (Alexander et al., 2006).On the other hand, studies have reported the relationship between the reduction of cognitive functions and the atrophy of brain areas involved in these abilities (Leong et al., 2017;Lövdén et al., 2013;Ramanoël et al., 2018).Brain atrophy refers to the loss of brain cells (neurons) and the connections between them, which can lead to a decrease in brain volume (Fjell et al., 2009).Brain atrophy in aging refers to the gradual loss of brain tissue and volume that occurs naturally as people age.This process is characterized by several morphological changes, including cortical thinning, white and gray matter volume loss, ventricular enlargement, and loss of gyri (Double et al., 1996). 
Another important factor that is considered as a mediator between age and working memory is the resting-state brain networks (RSNs).Functional brain networks that exhibit synchronized activity during periods of rest-when an individual is not actively engaged in a specific task-are referred to as restingstate networks (Rosazza and Minati, 2011).Humans spend a significant portion of their day, estimated to be up to 50%, in mental states where their brain is not actively engaged in a specific task or cognitive activity.During these periods, the brain is essentially resting or operating in a task-free condition (Lurie et al., 2020).The patterns of resting-state functional connectivity resemble the patterns of activation seen during cognitive tasks, with up to 80% of their variation being similar (Cole et al., 2014(Cole et al., , 2016)).Evidence suggests age-related changes in the resting-state networks (Huang et al., 2015;Jockwitz et al., 2017), and almost every cognitive network has been shown to experience some degree of age-related decline (Varangis et al., 2019).This age-related decline at the network level includes a decrease in local efficiency at the network level (Song et al., 2014) and a decrease in connectivity within the network (Geerligs et al., 2015). There are reports on the relationships between age and declined working memory (Cansino et al., 2013), between age and brain structure and resting state networks (Kaup et al., 2011;Varangis et al., 2019), and between brain structure and resting state networks and working memory (MacHizawa et al., 2020;Osaka et al., 2021).For example, a study conducted by Cansino et al. (2013) demonstrated that as individuals age, their verbal and visuospatial working memory abilities decline.This decrease is more closely linked to the cognitive resources required by the task rather than the nature of the information being processed (verbal or visuospatial).On the other hand, Kaup et al. (2011) conducted a review study indicating a positive correlation between the size of the hippocampal formation and memory performance in elderly individuals.Also, Varangis et al. (2019) demonstrated a deleterious effect of age on segregation and local efficiency and within-network connectivity of resting state networks in the brain.In addition, MacHizawa et al. (2020) reported a positive relationship between gray matter volume and visual working memory, in such a way that gray matter volume in the left lateral occipital region and right parietal lobe relates to the capacity and precision of visual working memory, respectively.In another study, Osaka et al. (2021) investigated the connectivity of resting-state networks in individuals with high and low working memory capacity.The results indicated a strong connection between dorsal attention and salience networks in individuals with high working memory capacity. 
According to this knowledge, it can be hypothesized that the changes in the brain due to aging are responsible for the changes in working memory that occur with age.At first glance, this conclusion may appear to be correct, but simple correlations do not allow one to prove causality.Although there are several studies examining the neural correlates of age-related changes in working memory (Archer et al., 2018;Mattay et al., 2006;Rypma and D'Esposito, 2000;Schulze et al., 2011), to the best of our knowledge, previous studies were not based on testing a mediation model.The evidence for which brain structures are the neural substrates of age-related working memory decline is weak.In general, the search for the exact neural bases for working memory in normal aging has brought diverse results.Among the factors that led studies to achieve diverse results are the use of small samples or samples with a narrow age range, and the use of different cognitive tasks.Although there are mediation studies that examine the age-related difference in the tasks measuring WM (Bender and Raz, 2012;Cansino et al., 2018;Van Gerven et al., 2007;Zuber et al., 2019), to the best of our knowledge, none of these previous studies investigated the mediating role of gray matter volume and resting-state networks.For example, previous studies have examined the mediating role of factors such as inhibition (Van Gerven et al., 2007), executive functions (i.e., updating, inhibition, and shifting;Zuber et al., 2019), physiological traits, and individual characteristics (such as cultural and social activities; Cansino et al., 2018) in the relationship between age and working memory.Therefore, it has not yet been explicitly tested which brain structures and resting-state brain networks mediate the age-related decline in working memory. Given that neurological changes can occur before the onset of cognitive decline (Coupé et al., 2019), identifying the brain 2 Methods Participants and procedure We recruited 252 participants from the Iranian brain imaging database (IBID; Batouli et al., 2021).The inclusion criteria in the study were that the participants should be aged from 20 to 70 years, have completed at least 12 years of education, should be able to read, and have consent to participate in all stages of the research, in accordance with previous works (Batouli and Sisakhti, 2020).By selecting a wider age range, the study can capture a more comprehensive understanding of the trends and patterns in the population.This approach allows for the examination of age-related effects across a spectrum of adulthood rather than focusing solely on the extremes of age.Also, by not strictly dividing the participants into young and old groups, we aimed to avoid confounding factors that could arise from comparing two distinct age groups.This could lead to more nuanced findings that reflect gradual changes in health rather than sudden differences attributable to age alone.Also, they were Iranian, and Persian was their first or second language.The exclusion criteria were as follows: neurological or severe somatic disorder, pregnancy or breastfeeding, weight above 110 kg, previous use of drugs for neurological disorders, long-term history of drug use (except aspirin, vitamins, antibiotics, pain relievers, sleeping pills, anti-nausea drugs, and vaccinations), drug use or alcohol addiction (only based on the subjective report), and MRI contraindications. 
The participants were distributed in 5 age groups: 59 participants in the early adult group (20-30 years old, 30 females), 62 participants in the early middle-aged adult group (30-40 years old, 31 females), 55 participants in the late middle-aged adult group (40-50, 31 females), 50 participants in the late adult's group (50-60, 27 females), and 26 participants in the older adult group (60-70, 14 females).In this study, to achieve the goal of the research, we used several cognitive tasks, including the n-back task and the forward and backward digit span tests.Also, several Magnetic Resonance Imaging (MRI) protocols were performed.The procedures of data collection in the IBID study have been extensively documented in previous reports (Batouli et al., 2021).See Table 1 for demographic information of the participants by their age group.The ethical approval code for this study was IR.NIMAD.REC.1396.319,issued by the National Institute for Medical Research Development, in agreement with the Declaration of Helsinki, and informed consent was obtained from all participants. Cognitive tests In this research, the data of two widely used measures, the N-back task and the forward and backward digit span tests, were used to investigate the working memory. N-back task One of the most popular tasks in cognitive neuroscience studies to measure working memory performance is the n-back task (Owen et al., 2005).This task typically involves presenting participants with a series of stimuli, and the objective is to determine whether each stimulus matches the one presented N items prior.The processing load increases with increasing value of N, which is indicated by a decrease in accuracy and an increase in reaction time (RT; Au et al., 2015).Its greater manipulation power and less complexity than other cognitive tasks are the reasons for the wide use of this task (Conway et al., 2003).It should be noted that in this research the one-back task was used.In the condition of a one-back test, the target is any letter that is identical to the letter immediately preceding it.So in the letter sequence "N-R-Y-C…, " the participant should respond "match" if the 5th letter in the sequence were a "C" because it matches one previous letter.This task was presented on a laptop connected to a button box on which participants made their responses.All participants used their index fingers to press a specified button.Stimuli were randomly presented at a fixed central location on the computer screen.Also, Stimuli were randomly presented at a fixed central location on the computer screen for 500 ms with an inter-stimulus interval of 2,500 ms.Prior to the start of the actual task, participants were trained until they demonstrated that they understood the task and their performance stabilized.The trials used for practice were not used in the main task.The time required for this test was 10 min.At the end, the accuracy percentage score for each person's performance in this test was obtained.Increasing the difficulty of the N-back task might have been cognitively difficult for the older participants.Using a challenging task could lead to fatigue and frustration.Also, By using the 1-back task, we aimed to establish a baseline level of performance before progressing to more difficult tasks.This can provide valuable information about participants' cognitive abilities and help determine appropriate difficulty levels for future studies. 
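As an illustration of the matching rule just described, the sketch below marks the target trials in a one-back letter stream and computes a percentage-accuracy score; the example sequence and the scoring function are illustrative only and are not taken from the task software used in the study.

```python
# One-back scoring sketch: a trial is a "match" when the current letter is
# identical to the letter immediately preceding it. Stimulus timing (500 ms
# presentation, 2,500 ms inter-stimulus interval) is handled by the
# presentation software and is not modelled here.
def one_back_targets(letters):
    """Boolean list marking which trials are targets (matches)."""
    return [i > 0 and letters[i] == letters[i - 1] for i in range(len(letters))]

def accuracy_percent(letters, responses):
    """Percentage of trials answered correctly; responses[i] is True for 'match'."""
    targets = one_back_targets(letters)
    correct = sum(r == t for r, t in zip(responses, targets))
    return 100.0 * correct / len(letters)

# Example: in the stream N-R-Y-C-C the fifth letter matches the one before it.
print(one_back_targets(list("NRYCC")))  # [False, False, False, False, True]
```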
Digit span test A prevalent measure for the assessment of verbal working memory is the digit span test (Ostrosky-Solís and Lozano, 2006).Digit Span test requires subjects to repeat series of digits of increasing length.This test can be used in two formats, forward digit span test (FDST) and backward digit span test (BDST).In the digit span test, first a series of numbers are presented audibly and the examiner asks the subject to repeat those digits, In the FDST, the examiner asks the subject to repeat the numbers in the same order they were read aloud.In the BDST, the examiner asks the subject to repeat the numbers in the reverse order of the numbers presented by the examiner.The presentation rate of digit spacing and pitch should be consistent with standard procedures.The presentation of digit spacing was 1 s apart.Constant pitch should be used to pronounce all digits, meaning that we did not have to change the pitch when pronouncing each digit in a sequence.Varying voice pitch may facilitate the use of a chunking strategy, which may lead to overestimation of ability.Also, repetition was not allowed in the digit span.If the subject wanted us to repeat the sequence, it should be said: "I can only say the numbers once, just make your best guess." The presentation starts with two digits in a series and the difficulty level of the test increases with the presentation of up to 9 digits in a series.It should be noted that the score in the two-digit series is not considered.In the present study, each series is repeated three times and one score is given for each correct answer. For this purpose, first, each series was repeated three times and then the difficulty of the test was increased.The series of digits in the three series were always different.It should be mentioned that a percentage correct for each trial was considered for the analyses.Due to the fact that the score is calculated from 3-digit to 9-digit series, the maximum score of the participants in each forward and backward format is 21. MRI scanning The MRI machine used in this study was a Siemens 3.0 Tesla scanner (Prisma, 2016), devoted to research, at the Iranian National Brain Mapping Lab. 1 We used a 64-channel head coil in our study.The MRI protocols were selected to match the international projects, such as the UK Biobank or the ENIGMA consortium.The MRI protocols were as follows. Resting All MRI data were visually checked for good quality, based on previous methods (Sisakhti et al., 2021(Sisakhti et al., , 2022)).This step included image information such as matrix and voxel sizes, the number of timepoints (for resting-state fMRI), and checking the images to be rightto-left oriented.Besides, the visual check was performed to spot possible macroscopic artifacts and vibration/motion evidence in images, and to check head tilt and head positioning, signal loss, ghosting, or other possible artifacts in the data. 
1 www.nbml.ir

Data analysis
2.4.1 Resting fMRI data analysis
Details of our data analysis were published previously (Alemi et al., 2018). In summary, the fMRI data first underwent seven steps of preprocessing: slice timing correction, realignment, co-registration, normalization, smoothing, segmentation, and motion correction. Slice timing correction was performed using the following settings: number of slices = 43; TR = 2,500 ms; TA = TR × (1 − 1/43) ≈ 2,442 ms. For realignment, the settings were: quality = 0.9; separation = 4; smoothing = 5; and interpolation = 5. To perform the co-registration step, we chose the T1 image as the reference image, and all volumes of the resting-state images were chosen as the source images. In normalization, we selected the T1 image as the image to align, and as the images to write we selected all volumes of the resting-state images produced by the previous preprocessing step (co-registration). The settings for smoothing were: FWHM = 6; data type = same; implicit masking = none. For motion correction, the MCFLIRT toolbox in FSL was used, and the criterion for including a data set as having acceptable motion was an absolute displacement (rotation and translation) of less than 2.0 mm. Also, one preprocessing step was performed on the structural T1-weighted images, namely removing the skull and non-brain tissues. FSL (FMRIB Software Library v6.0, created by the Analysis Group, FMRIB, Oxford, United Kingdom) has a tool for this called BET (Brain Extraction Tool), and we used it with these settings: fractional intensity threshold = 0.35; bias field and neck cleanup.

We used the MELODIC toolbox (Multivariate Exploratory Linear Optimized Decomposition into Independent Components) from the FSL software package in order to identify the brain activation maps during the resting state; these brain activations are referred to as independent components in the spatial ICA algorithm implemented in MELODIC. Independent Component Analysis is used to decompose a single 4D data set, or multiple 4D data sets, into different spatial and temporal components.

The preprocessed data were imported into MELODIC (group ICA analysis, temporal concatenation approach) in order to pick out different activation and artifactual components without any explicit time-series model being specified. The settings of the MELODIC analysis included: number of inputs = 252; slice timing correction = interleaved; motion correction = MCFLIRT; spatial smoothing FWHM = 5 mm; intensity normalization activated; multi-session temporal concatenation mode of analysis; and threshold for IC maps = 0.9. Running the ICA analysis on these 252 resting-state fMRI data sets, based on the above settings and using the temporal concatenation approach, resulted in 109 independent components for all the 252 fMRI data sets. Each independent component represents a particular pattern of brain activation or artifact observed in common across all 252 data sets during the resting state.
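A minimal sketch of two of the steps above, brain extraction and motion screening, is given below. It assumes the FSL command-line tools are on the PATH; the file names are hypothetical placeholders, only the BET and MCFLIRT options quoted in the text are used, and the exact naming of the MCFLIRT displacement output may differ between FSL versions.

```python
# Sketch: calling FSL's BET and MCFLIRT from Python and applying the 2.0 mm
# absolute-displacement inclusion criterion described above. File names are
# hypothetical placeholders for the study data.
import subprocess
import numpy as np

# Brain extraction: fractional intensity threshold 0.35, bias field and neck cleanup (-B).
subprocess.run(["bet", "T1.nii.gz", "T1_brain.nii.gz", "-f", "0.35", "-B"], check=True)

# Motion correction with MCFLIRT; -rmsabs writes per-volume absolute displacement.
subprocess.run(["mcflirt", "-in", "rest.nii.gz", "-out", "rest_mc", "-plots", "-rmsabs"],
               check=True)

# Inclusion criterion: maximum absolute displacement (rotation + translation) < 2.0 mm.
abs_disp = np.loadtxt("rest_mc_abs.rms")
print("max absolute displacement (mm):", float(np.max(abs_disp)))
print("include" if np.max(abs_disp) < 2.0 else "exclude")
```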
These 109 components included the maps relevant to task-evoked activations, the maps relevant to the intrinsic activities of the individuals during the resting state, as well as the maps relevant to artifacts or other confounding factors. Based on our hypothesis in this work, and based on previous works (Tang et al., 2017), we identified the following functional networks among our results, and the remaining steps of the analysis were performed only on these networks: anterior and posterior default mode network (Ant-DMN & post-DMN), right and left executive control network (R-ECN & L-ECN), and salience network (SN), resulting in five networks. Identification of these networks was based on visual inspection of the output functional networks. Dual regression is a tool that can be used as part of a group-level resting-state analysis to identify the subject-specific contributions to the group-level Independent Component Analysis (ICA). The output of dual regression is a set of subject-specific spatial maps and time courses for each group-level component (spatial map) that can then be compared across subjects/groups. All steps of dual regression were applied in the FSL software. We applied dual regression to the outputs of the MELODIC ICA using a simple command run in a Linux virtual machine under Windows. The dual regression command (example usage: dual_regression group_IC_maps des_norm design.mat design.con n_perm output_directory input.filelist) was applied to the outputs of the MELODIC ICA step, where the inputs were the 109 components estimated for all the 252 participants. The outputs of this step were used to quantify the strength of the activation of each of the five networks in the 252 participants of the study. The strength was defined as the average z-value of the activated voxels (z-value > 2.3) in the network.

Volumetric data analysis
Details of our volumetric analysis methods were published previously (Batouli et al., 2014a; Batouli and Saba, 2021; Keihani et al., 2017). As a summary, initially the quality of the T1-weighted scans was visually checked for a correct orientation and for matrix and voxel sizes. The visual check was also performed to spot possible macroscopic artifacts and vibration/motion evidence in images, to verify a proper signal-to-noise ratio, and to check head tilt and head positioning, signal loss, ghosting, or other possible artifacts in the data.

Next, voxel-based morphometry (VBM) analysis (Ashburner and Friston, 2000) was performed as follows. The T1-weighted scans were segmented into gray matter volume (GMV), white matter (WM), and cerebrospinal fluid (CSF) using the Segment toolbox, SPM12, which created the Native Space plus Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra (DARTEL) imported outputs (Ashburner, 2010); using the default settings of the "Run DARTEL: create template" toolbox, the accuracy of inter-subject alignment was improved by iteratively averaging the DARTEL-imported data of the GMV and WM tissue types to generate population-specific templates; and after generation of the templates, all the GMV and WM images were normalized to the Montreal Neurological Institute (MNI) standard space, using the "Normalize to MNI space" toolbox.
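Returning to the network-strength measure defined in the dual-regression step above (the mean z-value of voxels with z > 2.3 in a subject-specific spatial map), a minimal sketch of how it could be computed is shown below; the file name is a hypothetical placeholder for one of the subject-level z-maps produced by dual regression.

```python
# Sketch: network "strength" as the average z-value over supra-threshold voxels
# (z > 2.3) in a subject-specific spatial map from dual regression.
import nibabel as nib
import numpy as np

def network_strength(zmap_path, threshold=2.3):
    z = nib.load(zmap_path).get_fdata()
    active = z[z > threshold]
    return float(active.mean()) if active.size else 0.0

# Hypothetical file name for one participant's map of the salience network.
print(network_strength("subject012_salience_zmap.nii.gz"))
```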
The aim of this analysis was to estimate the volume of several brain regions, and as a result, two brain atlases were used in this section, including the Desikan-Killiany Atlas (Alexander Loh et al., 2019) and the Aseg Atlas (Fischl et al., 2002;Sederevičius et al., 2021).The atlases provided the ROIs of brain areas, and then the volume of an ROI was calculated by adding the probability estimates of the GMV and WM maps, and then multiplying the resulted value to 3.375 mm3 (the volume of one voxel), using a code written in MATLAB.In this section, the volume of 85 brain regions were estimated for all the 252 participants.Generally, 85 gray matter volumes and 5 resting-state networks were included in the statistical analysis of the study. Statistical analysis This study aimed to examine the mediating effects of the brain structures and resting-state brain networks on the relationship between age and working memory.The preliminary analysis was done using SPSS version 26 and the mediation analysis was done using AMOS version 24.The steps of statistical analysis were as follows: first, the mean and standard deviation of the study variables (including independent, and dependent variables) were estimated.Skewness and kurtosis were also reported, which indicated the normal distribution of the data.In the next step, correlation analysis was performed between all study variables.It should be noted that age was included as an independent variable, brain structures and resting-state brain networks as mediators, and working memory as the dependent variable.The correlation analyses were first performed between age and each score in the cognitive tasks, then the correlation between age and each of the brain structures and resting-state brain networks was performed; and then, the correlation between brain imaging measures and each score in cognitive tasks was calculated.It should be mentioned that to reduce the risk of false positive discoveries due to multiple comparisons effect, the study utilized the Bonferroni approach as a subset of the Family-Wise Error Rate (FWER) multiple comparison corrections, setting the adjusted significance level at 0.00052.This level of p-value was applied to all our analyses, and therefore the reported results are FWER-corrected. Finally, the mediation effects of the brain structures and restingstate brain networks on the relationship between age and each score in cognitive tasks were investigated.To identify the direct and indirect effects of age on working memory, the correlation between the paths of the hypothetical model was calculated and the non-significant paths were removed step by step.It is noteworthy that years of education were included as the control variable in the path analysis.Controlling for years of education helps us provide more accurate and meaningful insights into the relationships between variables and ensures differences in outcomes are not simply due to variations in years of education.Finally, the acceptable empirical model was examined. 
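The path models themselves were estimated in AMOS, but the logic of a single indirect path (age → regional GMV → digit-span score, with years of education as a covariate) can be sketched with ordinary regressions and a bootstrap confidence interval for the indirect effect, as below. The data frame and its column names are hypothetical, and this is an illustration of the mediation idea rather than the actual AMOS model.

```python
# Sketch of a single-mediator model: indirect effect = a * b, where
#   a: age -> mediator (regional GMV), controlling for education
#   b: mediator -> forward digit span, controlling for age and education
import numpy as np
import statsmodels.formula.api as smf

def indirect_effect(df):
    a = smf.ols("gmv ~ age + education", data=df).fit().params["age"]
    b = smf.ols("fdst ~ gmv + age + education", data=df).fit().params["gmv"]
    return a * b

def bootstrap_ci(df, n_boot=2000, seed=0):
    """Percentile bootstrap 95% CI for the indirect effect a*b."""
    rng = np.random.default_rng(seed)
    n = len(df)
    estimates = [indirect_effect(df.iloc[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return np.percentile(estimates, [2.5, 97.5])

# df is assumed to be a pandas DataFrame with one row per participant and
# columns: age, education, gmv, fdst.
# ab = indirect_effect(df); lo, hi = bootstrap_ci(df)
```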
Descriptive statistics
First, the mean, standard deviation (SD), and range of age and of the n-back, forward digit span, and backward digit span tests were computed. The mean accuracy in the one-back task for all the participants was 90.98 ± 12.79%. The mean total forward and backward digit span scores for all the participants were 8.20 ± 1.94 and 6.21 ± 1.99, respectively. The kurtosis and skewness of each measure ranged from −1 to +1, indicating that age, the n-back test, the FDST, and the BDST were normally distributed (Hair et al., 2022). The results by age groups are given in Table 2.

Subsequently, the intervening role of years of education and gender on the variables was examined. The Pearson correlation analysis revealed that there was a significant correlation between years of education and the score of the n-back test, forward DST, and backward DST (r = 0.30, p < 0.01; r = 0.36, p < 0.01; r = 0.44, p < 0.01, respectively). The correlation coefficients are presented in Table 3. Also, the t-test analysis showed no significant differences between males and females in the n-back test [t (df = 250) = −1.03, p > 0.05], FDST [t (df = 250) = 0.97, p > 0.05], and BDST [t (df = 250) = −0.72, p > 0.05]. The analyses indicated that years of education could play the role of an intervening variable in working memory, but gender could not.

Age associations
The results of the Pearson correlation analysis indicated that there was no significant correlation between age and the score of the n-back test (r = 0.06, p > 0.05). However, both the FDST and BDST were significantly correlated with age (r = −0.41, p < 0.01; r = −0.42, p < 0.01, respectively), suggesting that increased age was associated with poorer working memory performance.

According to the VBM analysis, 85 cortical and subcortical gray matter volumes were obtained in this study. The Pearson correlation analysis revealed a significant and negative correlation between age and 80 of those volumes, suggesting a decreased GMV with increasing age.

In our RSN analysis, five resting-state networks, namely the anterior and posterior default mode network (DMN), the right and left executive control network (ECN), and the salience network (SN), were selected. The association of the level of activity of these networks with age showed that the anterior default mode network, left executive control network, and salience network exhibited a negative correlation with age (r = −0.26, p < 0.01; r = −0.23, p < 0.01; r = −0.22, p < 0.01, respectively). To address the issue of false-discovery bias when conducting multiple comparisons, the Bonferroni correction was employed, which adjusted the significance level to 0.00052.

The neural correlates of working memory
To investigate the neural correlates underlying working memory, correlation analyses were performed between the GMVs and RSNs and the working memory measures. The correlation analysis showed that the GMV in 56 and 31 brain structures significantly correlated with the FDST and BDST, respectively. On the other hand, no significant correlation was observed with the one-back test results. Similarly, the activity levels of the RSNs did not show a significant correlation with the WM measures. The results of this section are provided in Table 4.
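The mass-univariate screening reported in this section can be sketched as below: Pearson correlations between each regional GMV and a digit-span score, retained only if they survive the Bonferroni-adjusted threshold of 0.00052 quoted above. Column names are hypothetical placeholders for the 85 regional volumes and the behavioral scores.

```python
# Sketch: correlate each regional GMV with the forward digit span score and keep
# the regions that survive the Bonferroni-adjusted significance level.
from scipy.stats import pearsonr

ALPHA_ADJ = 0.00052  # adjusted significance level reported in the text

def significant_regions(df, gmv_columns, score_column="fdst"):
    """Return {region: (r, p)} for regions passing the adjusted threshold."""
    hits = {}
    for col in gmv_columns:
        r, p = pearsonr(df[col], df[score_column])
        if p < ALPHA_ADJ:
            hits[col] = (r, p)
    return hits

# Example call (hypothetical column names):
# hits = significant_regions(df, gmv_columns=[c for c in df.columns if c.startswith("gmv_")])
```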
The higher level (mediation) analysis The mediation model was conducted to clarify the mediation role of the structural and functional brain measures in the relationship between age and working memory.Age was the independent variable of the path analysis, brain structure and resting state networks as the mediators, working memory as the dependent variable, and years of education as the control variable.Based on the preliminary analyses performed above, only the structural and functional brain measures which showed a significant correlation with age and the cognitive tests results were included in the model.Based on that, the measures of the one-back test were not involved in the model.The brain structures were divided into seven categories based on the parcellation of AAL-Atlas, including the frontal, parietal, temporal and occipital lobes, insula and cingulate, posterior fossa, and central structures (Rolls et al., 2020), and seven separate models were designed and tested. To elucidate the direct and indirect pathways, a path analysis was conducted to test whether structures and networks could mediate the relationship between age and working memory.As provided in Figure 1 Also, the results of path analysis on the BDST, as provided in Figure 2, indicated that age through the temporal lobe (right superior temporal volume; β = 0.01, p = 0.017) had an indirect effect on the BDST.None of the resting state networks had an indirect effect on these tests.As a result, seven brain regions mediated the relationship of age with the FDST, and one brain structure mediated between age and the BDST. Summary of the results The purpose of this study was to determine which brain structures or resting-state brain networks mediate age-related differences in working memory.First, the correlation analysis between age and working memory tasks showed that there is a significant negative correlation between age and FDST and BDST, but no significant correlation was observed between age and the n-back task.Then, according to the Pearson correlation analysis, correlations between age and 80 brain structures were significantly negative, suggesting lower GMVs with increasing age.In addition, there was a significant correlation between age and three networks (anterior default mode network, left frontal network, and salience network).In the next step, the correlation analysis showed that there is a relationship between 56 and 31 gray matter volumes with forward and backward digit span tasks, respectively.Furthermore, we found that none of the resting state networks had a significant correlation with working memory tasks. Finally, the mediation analysis showed that the GMV in the right medial orbitofrontal, left precentral, right parietooccipital, left amygdala, left middle occipital, left posterior cingulate and left thalamus mediate the age-related differences in the forward digit span task.Furthermore, GMV in the right superior temporal mediated the age-related differences in the backward digit span task.Neither of the resting-state networks had an indirect effect on the forward and backward digit span tasks. Working memory alters with age Consistent with our hypothesis, age negatively correlated with the score in the forward and backward digit span tasks.The present results are consistent with prior literature relating age and working memory (Fabiani, 2012;Gajewski et al., 2018).Bosnes et al. 
(2022) found that working memory performance of healthy older adults is associated with the process of aging well.Similarly, in a study to investigate age-related changes in spatial working memory (Klencklen et al., 2017), two groups of adults aged 20-30 and 65-75 were compared.The results found that older adults performed less well on working memory tasks than younger adults. Also, our research showed that there is no correlation between the age and performance of the participants in the one-back test.Consistent with the results, Cansino et al. (2013) investigated how the difficulty of a working memory task may affect age-related decline.They used the N-back task with two levels of difficulty in their research and showed that with increasing age, working memory accuracy decreased in 2-back tasks compared to one-back tasks.However, Mattay et al. (2006) showed that older subjects performed as well as younger subjects in the one-back task, and therefore it can be concluded that the effect of aging on working memory is dependent on the cognitive load of the task, and thus, when the cognitive demand of a task decreases, it is less affected with increasing age. Age differences in brain measures The present study showed that most volumes of gray matter in different brain structures have a negative correlation with aging.Consistent with our result, previous studies also have reported brain changes with age (Driscoll et al., 2009;Giorgio et al., 2010;Huang et al., 2015;Jockwitz et al., 2017;Varangis et al., 2019).For example, by examining the differences in brain volume among four groups of male and female older adults, Farokhian et al. (2017) found that GMV in the frontal, insular, and cingulate cortices was reduced in older adults compared to younger adults in both genders. In the network-level analysis, we found that activity in three resting state networks, including the anterior default mode network, left frontal network, and salience network were correlated with aging.Consistent with our findings, studies have shown that increasing age is associated with a diminished activity in the ant-DMN and post-DMN networks (Damoiseaux et al., 2008;Jones et al., 2011;Koch et al., 2010;Sambataro et al., 2010).In addition, consistent with our result, studies have shown age-related changes and abnormalities in the frontal (Fujiyama et al., 2016) and salience (He et al., 2014) networks in normal aging.Also, a recent systematic review of largescale resting-state networks in aging found that the brain of older adults is less efficient and modular at rest (Deery et al., 2023).Overall, the variations seen in brain gray matter volume during normal aging may explain the difference in cognitive performance among older individuals. 
The mediation effect on brain measures At the cerebral level, correlation analysis revealed that worse performance in working memory is associated with significantly smaller GMV in multiple brain structures, especially in the frontal, temporal, and parietal regions.These regions are known to be involved in the working memory performance (Emch et al., 2019;Nissim et al., 2017;Rottschy et al., 2012).Finally, the path analysis results showed that the frontal lobe (right medial orbitofrontal volume, left precentral volume), parietal lobe (right parietooccipital volume), temporal lobe (left amygdala), occipital lobe (left middle occipital volume), Insula and cingulate (left posterior cingulate volume) and central structure (left thalamus) mediate adult life span differences in the forward digit span task.Furthermore, we found that the temporal lobe (right superior temporal volume) mediates adult life span differences in the backward digit span task. Consistent with our result, Nissim et al. (2017) in a research aimed at determining the neural correlates of reduced working memory performance in the frontal lobes, compared two groups of healthy elderly people with high and low working memory in terms of cortical thickness and cortical surface area.The results showed that the cortical surface area in the medial orbital frontal gyrus, inferior frontal gyrus, and superior frontal gyrus is significantly reduced in subjects with a low performance in working memory.In another study, Schulze et al. (2011) aimed to investigate working memory performance in healthy elderly using multimodal imaging techniques, comparing two groups of young adults (20-30 years of age) and the older (60+ years of age).The results showed a negative correlation between gray matter volume and reduced working memory performance in older adults.Also, the results showed that with increasing working memory load and increasing age, a significant increase in activation was observed in the left dorsal and ventral lateral prefrontal cortex.In another study, greater activity in the dorsolateral prefrontal cortex was observed in younger adults than in older adults during memory retrieval, which suggests that the dorsolateral prefrontal cortex mediates the age-related decline in working memory performance (Rypma and D'Esposito, 2000).Mattay et al. ( 2006) also showed lower performance with increased working memory load in older people compared to younger ones, and at the same time, showed less activity in the prefrontal regions.A recent study also reported that increasing age is associated with a linear decrease in the neural activation during spatial working memory performance in the related regions (Archer et al., 2018).In general, with a decrease in behavioral performance in the active memory, the neural activity in the related areas decrease, and with an increase in behavioral performance, the neural activity increases in the same areas in a corresponding manner. Consistent with our results, studies have specifically reported age-related decreases in gray matter volume in the neocortex, including prefrontal, parietal, and temporal cortices (Giorgio et al., 2010), as well as deep structures such as the thalamus (Fama and Sullivan, 2015) and amygdala (Zanchi et al., 2017).The weakening of these regions, which can be the neural substrates of cognitive function (Smith et al., 2023), may be the basis of the observed differences in working memory.For example, MacHizawa et al. 
(2020) showed in a study that a greater volume of gray matter in the left lateral occipital region is associated with better visual working memory performance.It has also been reported that memory performance in older adults is significantly related to the gray matter volume of the middle frontal gyrus and several regions of the temporal lobe (Van Petten et al., 2004).Inconsistent with our results, Piras et al. (2010) reported that there is no significant relationship between thalamic gray matter volume and WM performance.In contrast, Van De Mortel et al. (2021) reported a reduction in thalamus volume as one of the earliest signs of cognitive decline in Mild cognitive impairment.It should be noted that our study aimed to identify for the first time the mediating role of gray matter volume in certain areas of the brain in the relationship between age and working memory. It is important to highlight that in our work differential brain structures mediate the relationship between age and the forward versus backward digit span task.One explanation for the differential brain structures between forward and backward digit span is that these two tasks require different cognitive demands.Overall, the backward digit span involves more spatial processing and higher cognitive control compared to the forward digit span.For example, the backward digit span is associated with greater activation of the left occipital visual area, left prefrontal cortex, right dorsolateral prefrontal cortex, frontal eye field, frontal operculum cortex, anterior insular cortex, and dorsal anterior cingulate cortex (Donolato et al., 2017).In our work, the right superior temporal volume mediates the relationship between aging and the backward digit span task.One possible interpretation for our finding is the involvement of this region in processing both object-and space-related information.Therefore, its role in the backward digit span task is to be expected. Limitations The present study has a number of limitations.First, two tasks (n-back task and digit span test) were adopted to evaluate working memory.It is noteworthy that the selection of different tasks may produce different results.To measure working memory more accurately, it is suggested to use various working memory tasks both in terms of difficulty level and type (visual and verbal) in future studies.Moreover, in the present study, resting-state networks and voxel-based morphometry were used to investigate the neural correlates related to working memory.It is suggested that multimodal brain imaging measures can be used in future studies to obtain a more accurate measure of neural correlates related to working memory.Thirdly, considering that the cross-sectional study does not provide any information about the changes in gray matter volume and the decrease of working memory over time, it is suggested that future studies use a longitudinal approach to investigate the extent of GMV changes corresponding to working memory.Also, we tested the mediation role of the variables, although selecting an approach for actually testing the causality between the measures would be preferable. 
In summary, we successfully demonstrated that the GMV of multiple brain structures mediates age-related differences in working memory performance. Our findings go beyond previous research on age-related WM decline. WM as an executive function is crucial for learning, working, and managing daily life. Our results are consistent with reports regarding the decrease in GMV with age and its effect on cognitive performance such as working memory. In general, our results support the view that some specific brain structures can be the basis of specific cognitive functions. We conclude that identifying brain structures mediating the relationship between age and working memory may provide an opportunity for early detection of individuals at risk for age-related memory decline, as well as an opportunity to design strategies aimed at reducing or preventing age-related memory decline.

FIGURE 1 The mediation model (path analysis) between the brain volumes, age, and the forward digit span task scores. The solid lines indicate the statistically significant paths, and the dashed lines indicate non-significant paths. The path values show the standardized beta weights and p-values. Pink rectangles indicate significant mediating variables. The confidence interval (CI) indicates the 95% confidence interval for the indirect and total effects.

FIGURE 2 The mediation model (path analysis) between the brain volumes, age, and the backward digit span task scores. The solid lines indicate the statistically significant paths, and the dashed lines indicate non-significant paths. The path values show the standardized beta weights and p-values. Pink rectangles indicate significant mediating variables. The confidence interval (CI) indicates the 95% confidence interval for the indirect and total effects.

TABLE 1 The demographic information of the participants; the participants are divided into five groups based on their age. The number of participants in each group is provided. YoE, Years of Education, provided as mean (±std); std, standard deviation.

TABLE 2 The descriptive statistics of the study variables. The participants are divided based on their age, and the mean and standard deviation of the cognitive measures for the one-back task and the forward and backward digit span tasks are provided. SD, Standard Deviation; FDST, Forward Digit Span Task; BDST, Backward Digit Span Task.

TABLE 3 The correlation coefficients between age and education and the cognitive tests, and the t-test results for gender differences in the study variables; YoE, Years of education; **p-value < 0.01.

TABLE 4 The coefficients of the correlation between the brain volumes and the age, FDST, and BDST measures.
9,723
2024-09-03T00:00:00.000
[ "Biology" ]
Intratumoral diversity of telomere length in individual neuroblastoma tumors. The purpose of the work was to investigate telomere length (TL) and mechanisms involved in TL maintenance in individual neuroblastoma (NB) tumors. Primary NB tumors from 102 patients, ninety Italian and twelve Spanish, diagnosed from 2000 to 2008 were studied. TL was investigated by quantitative fluorescence in situ hybridization (IQ-FISH) that allows to analyze individual cells in paraffin-embedded tissues. Fluorescence intensity of chromosome 2 centromere was used as internal control to normalize TL values to ploidy. Human telomerase reverse transcriptase (hTERT) expression was detected by immunofluorescence in 99/102 NB specimens.The main findings are the following: 1) two intratumoral subpopulations of cancer cells displaying telomeres of different length were identified in 32/102 tumors belonging to all stages. 2) hTERT expression was detected in 99/102 tumors, of which 31 displayed high expression and 68 low expression. Alternative lengthening of telomeres (ALT)-mechanism was present in 60/102 tumors, 20 of which showed high hTERT expression. Neither ALT-mechanism nor hTERT expression correlated with heterogeneous TL. 3) High hTERT expression and ALT positivity were associated with significantly reduced Overall Survival. 4) High hTERT expression predicted relapse irrespective of patient age. Intratumoral diversity in TL represents a novel feature in NB.In conclusion, diversity of TL in individual NB tumors was strongly associated with disease progression and death, suggesting that these findings are of translational relevance. The combination of high hTERT expression and ALT positivity may represent a novel biomarker of poor prognosis that deserves further investigation. INTRODUCTION Telomeres are specific DNA regions at the ends of chromosomes that prevent DNA damage and promote genomic stability [1,2].Telomeric DNA consists of tandem repeats of TTAGGG and is bound to a six subunit protein complex, referred to as shelterin or telosome, composed of TRF1, TRF2, TIN2, POT1, TPP1 and hRap1 [3].Telomeres shorten with each round of DNA replication until a critical phase when they become dysfunctional, resulting in genomic instability [1][2][3][4][5][6].Genomic alterations observed in cancers can be caused by inappropriate DNA repair at dysfunctional telomeres leading to chromosomal rearrangements, aneuploidy, and repression of DNA damage checkpoints [7].To proliferate beyond the senescence checkpoint, cells must restore their telomere length (TL) [8].Tumor cells maintain TL by reactivating human telomerase reverse transcriptase (hTERT), a ribonucleoprotein that catalyzes the synthesis and elongation of telomeres using an RNA template [8].Moreover, an intratelomeric recombination mechanism known as alternative lengthening of telomeres (ALT) may be employed by tumor cells in order to ensure their replicative potential [9].Telomeres in ALT cells are heterogeneous in length due to rapid deletions and elongations, which are thought to occur through high rates of inter-chromosomal recombination including a process termed telomere sister chromatid exchange (TSCE) [10].In some primary tumors and cancer cell lines ALTmechanism may substitute for or coexist with hTERT [11][12][13]. Here we have investigated TL and the involvement of hTERT and ALT-mechanism in TL maintenance in a series of NB tumors. 
Diversity of Telomere Length in individual Neuroblastoma tumors
The IQ-FISH procedure on tissue sections is a suitable approach for the assessment of TL in relation to cell type and in the context of tissue architecture [30][31][32][33][34][35][36]. TL was measured in 102 primary NB tumors by a modified IQ-FISH assay with ploidy correction that showed high sensitivity (0.1 kb of telomere repeats) and accuracy (99%) (Fig 1 A-C). Seventy NB cases (68.6%) displayed homogeneous TL, of which 42 were short, 25 long, and 3 normal. In the remaining 32 NB cases (31.4%), single-cell analysis revealed the coexistence in the same tumor of two cancer cell subpopulations with differing TL, namely i) normal and short telomeres, or ii) long and short telomeres, or iii) normal and long telomeres. These 32 NB cases displaying heterogeneous TL showed stage and age distribution, risk group, favorable or unfavorable histology, frequency of MNA, and ploidy comparable to cases with homogeneous TL (Table S1).

NB cases with heterogeneous TL belonging to group In contrast, significantly better EFS and OS were detected in cases with short (group 1) vs normal (group 3) TL (PB = 0.007 and 0.001, respectively) or vs long (group 4) TL (PB < 0.0001 and 0.028, respectively). Finally, the EFS and OS curves of cases with heterogeneously (group 4) or homogeneously (group 5) long TL were superimposable (PB = 0.96 and 0.99, respectively) (Fig 2A and 2B). Based upon the above results, cases belonging to groups 1 and 2 (i.e., short TL) or to groups 3, 4, and 5 (i.e., long/normal TL) were clustered in two groups in order to assess the prognostic impact of hTERT expression and ALT mechanism in relation to TL (see below).

Detection of ALT and hTERT in tumor tissues
Telomere elongation is carried out by telomerase and by the ALT mechanism, which is based on recombination of telomeric sequences and might cause heterogeneous TL in single cancer cells [10][11][12]. ALT was detected by FISH analysis [37][38][39]

Event-Free and Overall Survival analysis
EFS and OS analyses, with details of the incidence rates of relapse and/or death, 95% CIs, HRs, and P values, are shown in Table 1. There was a significant relationship between all variables reported in Table 1 and the occurrence of relapse or death. In particular, normal TL patients showed a higher incidence rate of relapse/death (27.7 per 1,000 person-months) with respect to short TL patients (3.3 per 1,000 person-months) (P<0.0001); a similarly higher incidence rate was observed in long TL patients (22.7 per 1,000 person-months).

The same variables that were related to EFS influenced OS, with the exception of ploidy, which was not statistically significant. In contrast, ALT-mechanism emerged as a significant prognostic factor, since OS was reduced in ALT-positive vs ALT-negative patients (P=0.035) (Table 1).

Prognostic impact of hTERT expression and ALT mechanism in relation to Telomere Length
Positivity or negativity for ALT-mechanism had no effect on TL-related EFS, as assessed by Kaplan-Meier analyses (Table 1). EFS of NB patients with long/normal TL (Fig 3D) was significantly reduced when hTERT expression was high vs low (P=0.036), whereas EFS of patients with short TL (Fig 3E) was unaffected by hTERT expression levels (P=0.18). We next subdivided NB patients into four groups based upon positivity/negativity for ALT and high/low hTERT tumor expression, i.e., i) hTERT low/ALT-negative, ii) hTERT low/ALT-positive, iii) hTERT high/ALT-negative, and iv) hTERT high/ALT-positive.
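As an illustration of the survival comparisons described above, the following sketch fits Kaplan-Meier overall-survival curves for two of the combined hTERT/ALT groups and compares them with a log-rank test using the lifelines package; the data frame and its column names are hypothetical placeholders for the clinical follow-up data.

```python
# Sketch: Kaplan-Meier OS curves and a log-rank comparison for two patient groups
# (e.g., hTERT high/ALT-positive vs hTERT low/ALT-negative).
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_os(df):
    g1 = df[df["group"] == "hTERT_high_ALT_pos"]
    g2 = df[df["group"] == "hTERT_low_ALT_neg"]

    kmf = KaplanMeierFitter()
    kmf.fit(g1["os_months"], event_observed=g1["death"], label="hTERT high / ALT+")
    ax = kmf.plot_survival_function()
    kmf.fit(g2["os_months"], event_observed=g2["death"], label="hTERT low / ALT-")
    kmf.plot_survival_function(ax=ax)

    test = logrank_test(g1["os_months"], g2["os_months"],
                        event_observed_A=g1["death"], event_observed_B=g2["death"])
    return test.p_value
```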
hTERT expression combined with ALT-mechanism significantly influenced OS, as assessed by Kaplan-Meier analyses (P = 0.0026) (Fig 4A). Thus, NB patients with high hTERT/ALT positivity showed significantly lower OS than patients with low hTERT/ALT negativity (P_B = 0.017). Finally, OS of patients with low hTERT expression did not significantly differ in ALT positive vs negative cases. Since age is the most important prognostic factor in NB [18], we next evaluated EFS of the following patient groups: i) hTERT low/age ≤18 months, ii) hTERT low/age >18 months, iii) hTERT high/age ≤18 months, and iv) hTERT high/age >18 months. Patients older than 18 months with high hTERT had a significantly reduced EFS compared to patients ≤18 months with low hTERT (P_B = 0.0004) (Fig 4B). Finally, we performed a best fitted Cox regression model for EFS and OS of our NB patient cohort. Long or normal TL was the best independent predictor of relapse (P = 0.003), followed by tumor stage 4 (P = 0.03) and high hTERT expression (P = 0.03). Stage 4 and MNA were the best independent predictors of death (P < 0.0001 and 0.0002, respectively), followed by high hTERT expression (P = 0.002) (Table 2). DISCUSSION We have shown that one third of NB tumors contained two cancer cell subpopulations with different TL, as assessed by NB-tailored IQ-FISH with data normalization for DNA ploidy [33]. This latter step is critical to minimize misinterpretations due to the abnormal number of chromosomes possibly present in the nucleus of cancer cells. Our results are consistent with a model whereby the genomic crisis generated by telomere attrition induces subclonal heterogeneity, potentially leading to TL heterogeneity [40]. The translational impact of the results obtained is highlighted by the finding that patients with predominantly (>50%) long/normal TL had the same unfavorable EFS and OS as patients with homogeneously long/normal TL. Thus, the former patients might belong to a novel risk category. The failure of previous studies on NB patients to identify heterogeneous TL in individual tumors likely depends on the need for i) single-cell analysis using methods such as IQ-FISH or flow-FISH, and ii) ploidy normalization. Telomeres in ALT cells are highly heterogeneous in length and are maintained through a mechanism involving recombination [10], whose pathway remains to be elucidated. We investigated the ALT mechanism and hTERT expression, which correlates with the catalytic activity of telomerase [24], in NB cases with heterogeneous vs homogeneous TL. The ALT mechanism, which was detected in more than half of the tumors investigated, was unrelated to TL. Other investigators have correlated the ALT mechanism with long TL in NB, but the patient groups analyzed were too small to draw any definitive conclusion [29]. In our study, the ALT mechanism and hTERT operated independently of each other. Presence or absence of ALT-mechanism had no prognostic relevance in NB cases with low hTERT expression, but coexistence of high hTERT and ALT significantly reduced OS compared to cases with high hTERT and absence of ALT. These latter findings suggest that hTERT and ALT may cooperate in promoting NB progression.
Here we show for the first time that the ALT mechanism and hTERT were co-expressed in approximately 60% of individual NB tumors. Whether the ALT mechanism and hTERT expression occur in mutually exclusive tumor cell subsets or rather in the same cell population warrants further investigation. In this respect, the ALT mechanism and hTERT expression were detected in discrete subpopulations of primary osteosarcoma cells [41]. Cox regression analysis showed that high hTERT expression was a robust independent predictor of EFS and OS for our NB patients, consistent with most, but not all, previous reports [27,[41][42][43][44]. Nonetheless, we found that high hTERT expression showed only a moderate correlation with TL (R = 0.48), suggesting that the unfavorable prognosis of NB patients with high hTERT expression may be related to telomerase functions other than telomere elongation [6], such as i) transcriptional modulation of the Wnt/β-catenin signaling pathway [45]; ii) enhancement of cell proliferation and/or resistance to apoptosis [45]; iii) involvement in DNA-damage repair [46]; iv) activity as an RNA-dependent RNA polymerase [47]. Moreover, when telomeres become critically short, they activate a DNA damage response and trigger the induction of replicative cellular senescence, which can be suppressed by over-expression of hTERT [3]. We finally investigated the prognostic impact of high/low hTERT expression in relation to age at diagnosis lower or higher than 18 months [18]. Patients older than 18 months with high hTERT expression had worse EFS than patients of the same age with low hTERT expression. Likewise, patients younger than 18 months with high hTERT expression showed worse EFS than patients of the same age group with low hTERT expression. Taken together, these results demonstrate that high hTERT expression represents an unfavorable prognostic factor irrespective of patient age. In conclusion, diversity of TL in individual NB tumors was strongly associated with disease progression and death. High hTERT associated with the ALT mechanism may represent a novel biomarker of poor prognosis. Patients and Clinical Follow-up A retrospective series of primary tumors from 102 NB patients was collected at the Istituto Giannina Gaslini, Genova, Italy (90 patients), and at the Medical School of the University of Valencia, Spain (12 patients), from January 2000 to December 2008. Table S1 shows the demographic characteristics of the patients investigated. The study was approved by the Institutional Review Boards of the two participating Institutions and informed consent was obtained from patients or their legal guardians. Patients were classified according to the International Neuroblastoma Staging System [14] and to the International NB Risk Group (INRG) [22] classifications. Eligibility criteria for inclusion in the analytic cohort were a diagnosis of bona fide NB and the absence of any treatment at study entry. Twenty-eight patients died of disease. Seventy-four survivors were followed up and categorized at the time of their last clinical examination. Clinical follow-up was performed for all patients with a median follow-up time of 3.6 years and a minimum follow-up duration, in surviving patients, of 3.1 months. Event-Free Survival (EFS) was calculated from diagnosis to last follow-up or event (first occurrence of relapse, progression, or death). Overall Survival (OS) was calculated from diagnosis to last follow-up or death.
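As a minimal illustration of how the EFS and OS intervals defined above can be derived from follow-up data, the sketch below computes both endpoints for a single patient; the function name, field names, example dates and the 30.44-day month are illustrative assumptions and not part of the study protocol.

import datetime as dt

def follow_up_times(diagnosis, last_fu, relapse=None, progression=None, death=None):
    """Compute Event-Free and Overall Survival times (in months).

    EFS runs from diagnosis to the first event (relapse, progression or death)
    or to the last follow-up; OS runs from diagnosis to death or last follow-up.
    """
    def months(d0, d1):
        return (d1 - d0).days / 30.44  # illustrative month length

    events = [d for d in (relapse, progression, death) if d is not None]
    efs_end = min(events) if events else last_fu
    os_end = death if death is not None else last_fu
    return {
        "EFS_months": months(diagnosis, efs_end),
        "EFS_event": bool(events),
        "OS_months": months(diagnosis, os_end),
        "OS_event": death is not None,
    }

# Example: a patient relapsing 14 months after diagnosis, alive at last follow-up.
print(follow_up_times(dt.date(2003, 2, 1), dt.date(2007, 6, 1), relapse=dt.date(2004, 4, 1)))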
Tumor Specimens Formalin-fixed, paraffin-embedded tissue sections from 102 NB tumors were studied. Each tumor area tested for TL contained malignant cells, as assessed by histological examination. Quantification of telomere fluorescence intensity was performed on serial tumor tissue sections, thus allowing telomere quantification in tumor areas selected by the pathologist. Tumor cells were distinguished in the samples using the NB-specific marker NB84 [47]. All tumors were evaluated at the time of diagnosis prior to any treatment other than surgery. Telomere and centromere fluorescence signals were automatically quantified on serial tumor tissue sections selected by the pathologist using the fluorescence-based microscopic scanning system Nikon E-1000, with an appropriate filter set and a high-resolution CCD camera, and the image analysis software Genikon (Nikon, Tokyo, Japan). A nuclear area for each cell was manually selected by the operator to measure centromere and telomere fluorescence intensities in the FITC and Cy3 images, respectively (Fig 1A). These fluorescence intensities were analyzed by scanning three consecutive serial images in order to avoid the loss of portions of the nucleus. The slide scanning and cell analysis procedures were performed using a 100x objective (Nikon). We measured the Cy3 pan-telomeric probe and chromosome 2 FITC centromeric probe fluorescence signal intensities in single nuclei and expressed the ratio between the former and the latter intensity values in arbitrary fluorescence ratio units (FRU). A minimum of 20 nuclei were scanned and the mean value of the FRU was calculated. FRU values corresponded to TL and were corrected for ploidy as reported [33]. In order to define cut-off points for TL measurement by IQ-FISH, NB cell lines (IMR32, SHSY-5Y, GILIN and HTLA-230) [27] and fetal adrenal medulla samples were used as long telomere controls (coded as 1), HeLa and MCF-7 cell lines as short telomere controls [34], and PBMCs from adult healthy donors and adult adrenal medulla as normal telomere controls. We determined the minimum and maximum cut-off values of FRU as 411.9 and 503.3, respectively (Fig 1B). Cut-off points were determined by means of ROC curve analysis in two steps: the first cut-off was determined by defining the long-telomere control populations as "abnormally long" (coded as 1) and the remaining cell lines (short and normal) as 0, obtaining, in the first ROC curve, the value of 503.3. The second cut-off was determined by defining the HeLa and MCF-7 cell lines as "abnormally short" (coded as 1) versus all the remaining cell lines, obtaining the value of 411.9. Effects of the fixation procedure on determination of FRU To evaluate the potential deleterious effect of nuclear truncation induced by cut sections [31,32], we compared TL assessed by IQ-FISH on intact nuclei from four paraformaldehyde-fixed tumor touch preparations and four paired paraffin-embedded tissue sections, to simulate standard pathology slide preparation procedures. IQ-FISH gave strong nuclear signals on both tissue sections and touch preparations. The IQ-FISH coefficient of variation (CV) ranges were 5.79%-8.95% for formalin-fixed tissue sections and 3.3%-13.9% for paraformaldehyde-fixed touch preparations. These differences were not significant, indicating that the fixation procedure did not affect FRU.
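A minimal sketch of the FRU computation and cut-off classification described above is given below; the cut-off values 411.9 and 503.3 are those reported in the text, whereas the simulated intensity arrays, the ploidy-factor argument and the function names are illustrative assumptions.

import numpy as np

LOW_CUT, HIGH_CUT = 411.9, 503.3  # FRU cut-offs reported in the text

def mean_fru(telomere_cy3, centromere_fitc, ploidy_factor=1.0, min_nuclei=20):
    """Mean telomere/centromere fluorescence ratio (FRU), ploidy-corrected."""
    telomere_cy3 = np.asarray(telomere_cy3, dtype=float)
    centromere_fitc = np.asarray(centromere_fitc, dtype=float)
    if telomere_cy3.size < min_nuclei:
        raise ValueError("at least 20 nuclei should be scanned")
    fru_per_nucleus = telomere_cy3 / centromere_fitc
    return fru_per_nucleus.mean() / ploidy_factor

def classify_tl(fru):
    """Map a ploidy-corrected FRU value onto the short/normal/long classes."""
    if fru < LOW_CUT:
        return "short"
    if fru > HIGH_CUT:
        return "long"
    return "normal"

rng = np.random.default_rng(0)
cy3 = rng.normal(450, 40, size=25)     # simulated telomere intensities
fitc = rng.normal(1.0, 0.05, size=25)  # simulated centromere intensities
value = mean_fru(cy3, fitc)
print(round(value, 1), classify_tl(value))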
Inter-assay variation To estimate the reproducibility of IQ-FISH, serial tissue sections from the same specimen were processed in different experiments at different time points. The best CV observed was 2.3% and the worst was 15.5%, with a median CV of 7.1%, indicating good reproducibility of the assay. Telomere Length Measurement by two independent investigators We compared the FRU values (i.e. the ratio between telomere and centromere fluorescence intensities indicating TL) of all specimens analyzed and of all controls determined by two investigators using the Bland and Altman plot. Except for four data points (4/141; 2.8%), all values fell within the 95% limits of agreement (Fig 1C) and the bias value was good (bias = -0.22). The Intra-class Correlation Coefficient was excellent (ICC = 0.999). Quantification of hTERT Expression Paraffin-embedded NB tissue sections were stained overnight at 4 °C using indirect immunofluorescence with a monoclonal anti-hTERT antibody (1:100; Lab Vision, Fremont, CA, USA). The slides were incubated with a FITC-labeled secondary antibody (1:1000) at 37 °C for 1 h. hTERT fluorescence intensity was quantified using the Genikon software. Results were expressed as mean fluorescence intensity (FI) from at least 20 nuclei. Statistical Analysis Descriptive statistics were first performed and data were reported in terms of median values and 1st and 3rd quartiles (1st-3rd q) for quantitative variables, and in terms of absolute frequencies and percentages for categorical variables. The IQ-FISH inter-assay variation was estimated by calculating the coefficient of variation (CV), i.e. the standard deviation divided by the mean Fluorescence Ratio Unit (FRU) and multiplied by 100. The FDA recommended limit for CV% is <15% [48]. The Bland and Altman plot was used to assess the agreement between the two investigators' readings of FRU; this plot shows the differences of the two measurements (Y-axis) with respect to their means (X-axis). The bias (mean of all differences) should be close to zero. Moreover, the agreement between the two readings was evaluated by means of the Intra-class Correlation Coefficient (ICC) [49]. Categorical data were reported in terms of absolute frequencies and percentages (Table 1) and compared by the Chi-square test or by Fisher's Exact test whenever expected frequencies were less than 5. Receiver operating characteristic (ROC) curves were used to determine the best cut-off point for defining high/low hTERT expression using event-free status as the main outcome variable; a value of 0.398 was obtained. A second cut-off point for hTERT expression (obtained by the same ROC curve method) was calculated for OS (considering only life status as the outcome of interest); in this case a value of 0.205 was obtained. Two-way analysis of variance was used to evaluate telomere length in relation to presence/absence of ALT and high/low hTERT expression. As some tumors showed heterogeneous TL, the weighted mean of the FRU values was calculated and used for the analyses reported in Fig 3C, D, and E, as well as in survival analyses. EFS and OS curves were drawn categorizing for a series of demographic and clinical variables; these curves were estimated using the Kaplan-Meier method and compared by the log-rank test, with P < 0.05 considered statistically significant.
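The inter-assay CV and the Bland-Altman agreement statistics used above can be sketched as follows; the simulated readings are placeholders, and the 1.96 × SD limits follow the conventional definition of the 95% limits of agreement.

import numpy as np

def cv_percent(fru_values):
    """Inter-assay coefficient of variation: SD / mean FRU x 100 (FDA limit <15%)."""
    fru_values = np.asarray(fru_values, dtype=float)
    return fru_values.std(ddof=1) / fru_values.mean() * 100.0

def bland_altman(reader1, reader2):
    """Bias and 95% limits of agreement between two readers' FRU values."""
    reader1 = np.asarray(reader1, dtype=float)
    reader2 = np.asarray(reader2, dtype=float)
    diff = reader1 - reader2
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Toy example with simulated repeat measurements and paired readings.
rng = np.random.default_rng(1)
repeats = rng.normal(460, 25, size=6)
r1 = rng.normal(460, 30, size=141)
r2 = r1 + rng.normal(-0.2, 5, size=141)
print(f"CV = {cv_percent(repeats):.1f}%")
bias, loa = bland_altman(r1, r2)
print(f"bias = {bias:.2f}, 95% limits of agreement = ({loa[0]:.1f}, {loa[1]:.1f})")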
For each category of the demographic and clinical variables, the absolute number of relapses or deaths, the incidence rates expressed per 1000 person-months (pm) with 95% Confidence Intervals (95% CI), Hazard Ratios (HRs) and statistical significance obtained from the log-rank test were calculated and reported. Factors significantly associated with a higher probability of observing relapse or death were then tested in a Cox proportional hazards regression model. The Log-Likelihood Ratio test (LR test) was used for comparisons. The statistical packages used were Statistica (version 9.0, StatSoft Corp., Tulsa, OK, USA) for bivariate analyses and Stata release 7 (Stata Corporation, Texas, USA) for multivariate analyses. Figure 1: A: Interphase Quantitative Fluorescence in situ Hybridization (IQ-FISH) using pan-telomere (red) and chromosome 2 centromeric (green) peptide nucleic acid (PNA) probes in a paraffin-embedded NB tissue section. The nuclei are stained with DAPI (blue). Magnification x100. B: Calibration of IQ-FISH for TL measurements using four NB cell lines (IMR32, SHSY-5Y, GILIN and HTLA-230) and fetal adrenal medulla as long telomere controls, HeLa and MCF-7 cell lines as short telomere controls, and peripheral blood mononuclear cells from adult healthy donors as well as adult adrenal medulla as normal telomere controls. We defined the minimum and maximum cut-off values of fluorescence ratio units (FRU) as 412 and 503, respectively. C: FRU values analyzed by two readers by means of the Bland and Altman plot. Except for four data points (4/141; 2.8%), all values fell within the 95% limits of agreement. Figure 2: A: Kaplan-Meier Event-Free Survival curve for five patient groups based upon telomere length (TL). B: Kaplan-Meier Overall Survival curve for five groups based upon TL. Figure 3: A: ALT positive NB cells showing ALT-associated bright intra-nuclear foci of telomere FISH signals (red) (arrows). Nuclear DNA was counterstained with DAPI (blue). B: Immunofluorescence nuclear labeling for the catalytic subunit of telomerase hTERT (green). C: Variance analysis of TL in relation to presence or absence of ALT and to high or low hTERT expression. D: Kaplan-Meier Event-Free Survival curve for long/normal TL and hTERT expression. E: Kaplan-Meier Event-Free Survival curve for short TL and hTERT expression. Table 1: Incidence Rates and Hazard Ratios (HR) for Event-Free Survival (EFS) and Overall Survival (OS); (N=102)
Table 2: Best fitted Cox regression model for Event-Free Survival (EFS) and Overall Survival (OS). Comparable fractions of tumors with high hTERT expression (20/31, 64.5%) and with low hTERT expression (40/68, 58.8%) were ALT positive, indicating a lack of correlation between the ALT mechanism and hTERT expression levels (P = 0.59). The weighted means of the FRU values detected in each of the 32 tumors showing heterogeneous TL were used for the analyses reported in Fig 3C, D, and E, as well as for the Cox regression model. Two-way analysis of TL variance in relation to presence or absence of ALT and high or low hTERT expression showed that the two mechanisms of telomere elongation operated independently of each other (Fig 3C). A statistically non-significant trend to longer telomeres in ALT positive vs ALT negative cases, independent of the levels of hTERT expression, was observed (Fig 3C). Figure 4: A: Kaplan-Meier Overall Survival curve for ALT positivity/negativity and hTERT high/low expression. B: Kaplan-Meier Event-Free Survival curve for hTERT high/low expression and age at diagnosis (> or ≤18 months).
4,946.2
2014-06-18T00:00:00.000
[ "Biology" ]
Classical Mobility of Highly Mobile Crystal Defects Highly mobile crystal defects such as crowdions and prismatic dislocation loops exhibit an anomalous temperature independent mobility unexplained by phonon scattering analysis. Using a projection operator, without recourse to elasticity, we derive analytic expressions for the mobility of highly mobile defects and dislocations which may be efficiently evaluated in molecular dynamics simulation. The theory explains how a temperature-independent mobility arises because defect motion is not an eigenmode of the Hessian, an implicit assumption in all previous treatments. Plastic deformation of crystals is effected by the motion of dislocations and point defects [1]. Away from shock loading and the melting temperature this motion is usually modeled with the viscous damping law ẋ = γ⁻¹·f, employing a matrix of friction or drag coefficients γ, which set the time scales of defect dynamics [2]. To reproduce the stochastic trajectories of highly mobile defects seen in experiment [3,4], this mobility law has been supplemented with a stochastic force to give a Langevin equation [5][6,7]. The stochastic force is usually more significant for small dislocation loops and point defects because the configurational force f_λ is determined only by gradients in the stress field. For larger extended defects the configurational force usually dominates over the stochastic force. In both cases γ controls the rate of important microstructural processes such as swelling and post-irradiation annealing [8], but no universal theory for γ exists. In this Letter we use the Zwanzig projection technique [18] to show that γ = γ₀ + k_B T γ_w, in quantitative agreement with MD simulations of defects and dislocations. γ₀ arises because the defect displacement vector is not an eigenvector of the Hessian, so that thermal vibrations can induce a force on defects to linear order. This is missed in previous treatments [11,19], as by perturbing a quadratic integrable Hamiltonian one implicitly assumes that defect motion is an eigenmode, an assumption that we explicitly show to be false. Violation of this assumption is the origin of the anomalous mobility. Defect coordinates.- We describe a crystal using a 3N-dimensional vector of atomic positions X ∈ R^3N and velocities Ẋ ∈ R^3N. In this treatment crystal defects are not elastic singularities but localized deformations, which may be assigned a set of M ≪ N "position" labels x_λ ∈ R^3M and "velocity" labels ẋ_λ ∈ R^3M to characterize the state of a defective crystal. Common methods for determining x_λ, ẋ_λ include analysis of the atomic disregistry [20] or an energy filter [7], though in the following the only requirement is a repeatable protocol. By definition, the zero temperature configurations X = U(x_λ) of the crystal potential energy V(X) may be entirely characterized by the parameters x_λ, while the variation of U(x_λ) with x_λ can be determined through nudged elastic band calculations [21] or simply a finite difference derivative in the case of a defect with a wide core. To complete the discrete representation of a crystal at finite temperature, we must include displacements due to thermal vibrations Φ ∈ R^3N. The crystal configuration X at any given instant can now be expressed as X = U(x_λ) + Φ.
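In standard notation, the mobility law, its Langevin extension and the configuration decomposition referred to above can be restated as follows; this is a summary of the relations quoted in the text, with the noise correlator taken from the later definition of η(t), not the Letter's own numbered equations.

\dot{\mathbf{x}}_\lambda = \boldsymbol{\gamma}^{-1}\,\mathbf{f}_\lambda ,
\qquad
\boldsymbol{\gamma}\,\dot{\mathbf{x}}_\lambda = \mathbf{f}_\lambda + \boldsymbol{\eta}(t),
\qquad
\langle \boldsymbol{\eta}(t)\otimes\boldsymbol{\eta}(t')\rangle = 2\,k_{\mathrm B} T\,\boldsymbol{\gamma}\,\delta(t-t'),
\\
\mathbf{X}(t) = \mathbf{U}\big(\mathbf{x}_\lambda(t)\big) + \boldsymbol{\Phi}(t),
\qquad
\mathbf{X},\,\boldsymbol{\Phi}\in\mathbb{R}^{3N},\quad \mathbf{x}_\lambda\in\mathbb{R}^{3M}.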
By introducing a defect position and velocity, the coordinate set {x_λ, ẋ_λ, Φ, Φ̇} becomes overcomplete. To rectify this we require the vibrational displacements Φ to be independent of the displacements caused by defect motion ∂_λU, giving the 6M constraints [12]. To obtain a dynamical equation for x_λ, it now suffices [22] to project the exact equation of motion m Ẍ = −∇V(X) onto the direction ∂_λU orthogonal to the crystal vibrations. Defining an effective mass tensor m = m ∂_λU·(∂_λU)^T, we exploit the time invariance of (2) to obtain the projected defect equation of motion. Similar equations of motion are standard in dynamical quasiparticle theories [12,22] and, in common with other authors, we will neglect the "hydrodynamic" term. This is justified as we consider the motion of only subsonic defects, and it can be shown that these terms are of order |ẋ_λ|/c ≪ 1, where c is the speed of sound. As a result, the defect coordinates evolve according to m·ẍ_λ = f_λ, where we have defined the instantaneous defect force f_λ as the projection of the total force −∇V in the direction of defect motion ∂_λU. The vibrational coordinates evolve in the subspace orthogonal to ∂_λU. Removing the vibrational coordinates.- From the form of the potential energy V[U(x_λ) + Φ], it is clear that the evolution of the defect and vibrational coordinates are coupled, as they must be for a frictional force to exist. However, for highly mobile subsonic defects, which necessarily possess a wide defect core [23], the defect coordinates may be considered as slowly varying compared to the vibrational coordinates, a conclusion which will be explicitly demonstrated in molecular dynamics simulation below. Over a Debye period τ_D ∼ a/c ∼ 0.1 ps, where a is the lattice parameter, the displacements of any atom due to thermal vibrations will approximately average to zero, with an oscillation amplitude of ∼ τ_D √(k_B T/m). Since the defect speed will be approximately |ẋ_λ| ≪ c, the displacement of any one atom due to defect motion in a time interval τ_D will be at most ∼ τ_D |ẋ_λ| ‖∂_λU‖_∞, where ‖∂_λU‖_∞ is the largest component of ∂_λU. These estimates imply that if ‖∂_λU‖_∞ ≪ |∂_λU|, then the displacement due to defect motion will be much less than the magnitude of displacements due to thermal motions, which implies that the Φ are effectively ergodic [24] over a time scale ∼ τ_D, over which the defect coordinates are essentially stationary. But the condition ‖∂_λU‖_∞ ≪ |∂_λU| amounts to a requirement that the deformation associated with the defect is spread over many atomic sites, which is always satisfied by highly mobile defects with a wide core. We therefore assume that vibrational displacements average to zero over periods of ∼0.1 ps while the defect remains effectively stationary, an assumption that we will test explicitly when calculating the defect force autocorrelation.
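A minimal sketch of the constraint and projection relations invoked in this section, under the assumption that orthogonality between Φ and ∂_λU is imposed on both displacements and velocities; the exact forms in the Letter may differ in detail.

(\partial_\lambda \mathbf{U})^{\mathsf T}\,\boldsymbol{\Phi} = \mathbf{0},
\qquad
(\partial_\lambda \mathbf{U})^{\mathsf T}\,\dot{\boldsymbol{\Phi}} = \mathbf{0}
\qquad (6M\ \text{scalar constraints}),
\\
\mathbf{f}_\lambda \equiv -(\partial_\lambda \mathbf{U})^{\mathsf T}\,
\nabla V\big[\mathbf{U}(\mathbf{x}_\lambda)+\boldsymbol{\Phi}\big],
\qquad
\mathbf{m}\,\ddot{\mathbf{x}}_\lambda \simeq \mathbf{f}_\lambda .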
We can exploit this separation of time scales to remove thermal vibrations from the defect equation of motion using the formalism of dimensional reduction by Zwanzig [18,25]. In this formalism the solution of the "fast" equation of motion for Φ is substituted into the "slow" equation of motion for x_λ. It may be shown, to order τ_D³, that Φ, Φ̇ are adiabatic with respect to x_λ, ẋ_λ and ergodic over the partial Gibbs distribution, where Z(x_λ) = exp[−βF(x_λ)] is the partial partition function and we integrate on the hypersurface defined by (2). The defect coordinates now evolve on a coarse time scale τ_D and follow the stochastic equation of motion (5). It is usual in dislocation dynamics to neglect the inertial term m·ẍ_λ(t), which is valid when the potential energy landscape is slowly varying over the thermal length √(k_B T/m)·|γ|⁻¹ [5]. In (5) we have introduced the expected force ⟨f_λ⟩ = −⟨∂_λV⟩ = −∂_λF, the stochastic force η(t), where ⟨η(t) ⊗ η(t′)⟩ = 2 k_B T γ δ(t − t′), and our central quantity, the friction matrix γ. In this timescale-separated regime it is a standard result that γ is proportional to the time integral of the force autocorrelation C(τ), namely γ = (k_B T)⁻¹ ∫_0^∞ C(τ) dτ, where, when x_λ ∈ R, C(τ) ≡ ⟨f_λ(τ) f_λ(0)⟩ − ⟨f_λ(0)⟩², and C(τ) may be expressed by ergodicity as a time average along a single trajectory. We evaluate C(τ), and hence γ, in two ways: first by deriving in closed form the thermal averages (4), and second by numerical calculation of f_λ(t) in MD simulation. Analytic derivation.- To derive an expression for γ we expand the potential energy V and the defect force f_λ in powers of Φ. For the evaluation of the partition function, the constraints (2) and the requirement that U(x_λ) describes the zero temperature configurations result in an expansion where all inner products are with respect to Φ and all partial derivatives are evaluated at Φ = 0; there is no restriction on the existence of mixed derivatives ∂_λ∇ⁿ_Φ V ≠ 0.
This is important as these mixed derivatives couple the defect and vibrational coordinates, as can be seen in the defect force expansion. While we retain anharmonicity in the defect force, in order to perform analytical evaluation of expectation values we truncate V to quadratic order in Φ in the Gibbs distribution (4), allowing us to explicitly evaluate the expectation values in terms of the 3(N − M)-dimensional vibrational eigenset {ω_l, v_l}, where ∇²_Φ V · v_l = m ω_l² v_l. This truncation neglects any thermal expansion arising from the purely vibrational anharmonicities ∇³_Φ V and ∇⁴_Φ V. In the Supplemental Material [26] we systematically include these terms to produce an expression for γ up to linear order in temperature. It is shown that the anomalous temperature independent mobility γ₀ is unaffected by these additional terms. Using a quadratic Gibbs distribution, the expected force is found to be ⟨f_λ⟩ = −∂_λ(V − TS), where S is the harmonic entropy k_B Σ_l log ω_l [27]; to evaluate C(τ) we evolve the vibrational coordinates Φ from a given x_λ. This is justified by the time scale separation and achieved by evaluating propagator terms of the standard form. As appropriate for nonconservative dynamics, the propagator is evaluated using only the initial conditions ⟨Φ(0) ⊗ Φ(0)⟩ = Σ_l (k_B T/(m ω_l²)) v_l ⊗ v_l and, consequently, is closely related to the retarded Green's function G(t) = Θ(t)(k_B T)⁻¹ ⟨Φ(t) ⊗ Φ(0)⟩ [28]. All that now remains is to perform elementary Gaussian integrations to obtain our main result (11). We see that the friction coefficient takes the form γ = γ₀ + k_B T γ_w, with the new temperature independent γ₀ a function of the mixed quadratic derivative ∂_λ∇_Φ V, and the temperature dependent k_B T γ_w a function of the mixed cubic and quartic derivatives ∂_λ∇²_Φ V and ∂_λ∇³_Φ V. These terms may, in principle, be evaluated after diagonalizing ∇²_Φ V to obtain {ω_l, v_l} and computing the tensorial derivatives ∂_λ∇ⁿ_Φ V. However, in common with modern methods to evaluate dispersion relations [29], we have found dynamical measurement of the thermal averages to be much more efficient than complete diagonalization of the vibrational Hessian ∇²_Φ V. Numerical evaluation.- We have developed a method to calculate f_λ(t) by MD simulation, which yields C(τ) and hence γ, providing a numerical evaluation of the analytic expressions (11). In an ensemble of MD runs, with no stress applied, we time average the output of each run X(t) using a coarse-grained time step between τ_D/4 and τ_D to give ⟨X⟩. To eliminate any errors, we find the zero temperature configuration U(x_λ) which minimizes |∂_λ⟨X⟩ − ∂_λU|². The calculated ∂_λU is then used to project out the defect force f_λ(t) = −∂_λU·∇V[X(t)] over the same averaging time interval that produced ⟨X⟩. We have found this method to be robust to variation in the averaging period and especially efficient for short line segments or nanoscale defects, where the zero temperature structures are typically related by rigid translation [30]. An example of such calculations is shown in Fig. 1 for a 7 atom SIA cluster in tungsten, which exhibits the anomalous temperature independent mobility γ = γ₀ [17], and in Fig.
2 for a highly mobile edge dislocation in iron, which exhibits a mixed temperature dependence γ = γ₀ + k_B T γ_w [15]. In both cases we see that C(τ) loses all coherence after the first zero at ∼ τ_D/2, over which time the defect is observed to be essentially stationary. This validates our assumption of time scale separation between thermal vibrations and defect motion. We identify the subsequent force autocorrelation (FAC) signal as noise because it flattens with the system and ensemble size, limiting the integration of C(τ) only to the first zero. As shown in the figures, this method gives values in excellent agreement with conventional trajectory analysis. We also calculated the FAC for the 7-atom SIA cluster directly from the first analytic term in (11), using the computed mixed derivative ∂_λ∇_Φ V. We find excellent agreement with the dynamical method, as shown in Fig. 1. Discussion.- Terms similar to (11) appear in phonon scattering predictions of γ, where they may be interpreted diagrammatically, with ∂_λ∇ⁿ_Φ V approximately representing a vertex of one defect with n phonons [11,34]. In this continuum picture, defects and phonons are separable to harmonic order, conserving energy and momentum in collisions. As a result, each term in (11) becomes dependent on the phase space available for the scattering process it represents. The anomalous term γ₀ is forbidden in such models as it represents the pure absorption or emission of a phonon, a process which has a vanishing phase space for subsonic defect speeds due to the linear phonon dispersion relation [34,35]. It turns out that the second term in (11) dominates, describing a two-phonon elastic scattering process known as the phonon wind. With a cubic anharmonicity parameter A [36], this term has an approximate magnitude ∼ k_B T (A/μ)² τ_D, where μ is the shear modulus, in agreement with more detailed continuum treatments [11]. However, the prediction γ₀ = 0 from continuum analysis does not explain the observed simulation results. To see how the present treatment allows an anomalous temperature independent mobility, we express γ₀ in the eigenbasis {v_k} of the vibrational Hessian ∇²_Φ V, using (10) and this expansion. For this term to vanish, as in all continuum theories, we require ∂_λ∇_k V = 0.
But this implies that the defect displacement operator ∂_λU is an eigenvector of the total Hessian ∇²V, as the "off-diagonal" terms ∂_λ∇_k V that mix ∂_λU and the vibrational modes must vanish. We have explicitly demonstrated that this is not the case; it is precisely this effect, which relies on the weaker identification of a defect as a localized deformation that is not an eigenvector of the Hessian, in contrast to a canonical quantity separable from vibrations, that gives rise to γ₀. Of course, anharmonic vibrations still affect the dynamics in a manner which becomes analogous to typical scattering theories in a continuum picture, giving the phonon wind term k_B T γ_w in (11). These terms are appreciable only for extended line dislocations, which significantly deform the host lattice, while the anomalous γ₀ is the leading term for nanoscale defects, which are typically elastically neutral in the far field. For some extended dislocations in close-packed crystals the defect translational operator is very nearly an eigenvector of the Hessian, implying that the anomalous mobility vanishes and γ ∼ k_B T γ_w [13]. But, in general, we have found this not to be the case, with the mixed dependence γ = γ₀ + k_B T γ_w occurring across a wide range of crystal defects. Concluding remarks.- Our main result is an explicit form (11) for the friction tensor γ of highly mobile crystal defects. We believe this is a new result. It may be used to accurately parametrize deterministic and stochastic (Langevin, with noise η(t)) defect mobility laws. The result was obtained by identifying defects through a projection operator with no recourse to elasticity. An anomalous temperature independent mobility γ ∼ γ₀ arises because the displacement vector corresponding to defect motion is not an eigenvector of the Hessian, in violation of elasticity theory or soliton-like models, where vibrations are canonical. This finding highlights the importance of intrinsically discrete (i.e., atomistic) analysis to understand nanoscale crystal plasticity. We note that the form of γ₀ in (11) is closely analogous to the famous Kac-Zwanzig heat bath formula [18]. But rather than a random array of harmonic oscillators with a constant coupling strength, we have here the vibrational modes of the entire crystal coupling to a localized deformation through ∂_λ∇_Φ V. It is hoped that our explicit expression for γ and the method of evaluation may be used to provide further connections between analytic heat bath models and the thermal dynamics of real systems.
FIG. 1 (color online). Evaluation of the defect FAC in unbiased molecular dynamics simulation at three temperatures and of the first analytic term in (11), for a 7 atom SIA cluster in tungsten, using LAMMPS [31] and an interatomic potential by Marinica et al. [32]. We see a very similar peak in all methods, which loses coherence after a time period ∼ τ_D/2, and we approximate the time integral in (11) by the area under this first peak. Inset: Comparison of the predicted diffusivity D = k_B T/γ and the direct measurement D = ⟨x²⟩/2t. T. D. S. was supported through a studentship in the Centre for Doctoral Training on Theory and Simulation of Materials at Imperial College London, funded by EPSRC under Grant No. EP/G036888/1. This work was partially funded by the RCUK Energy Programme (Grant No. EP/I501045) and by the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 633053. To obtain further information on the data and models underlying this Letter please contact<EMAIL_ADDRESS>. The views and opinions expressed herein do not necessarily reflect those of the European Commission. This work was also partially funded by the United Kingdom Engineering and Physical Sciences Research Council via programme Grant No. EP/G050031. FIG. 2 (color online). Evaluation of C(τ) for a 1/2⟨111⟩(10-1) edge dislocation in Fe, using an interatomic potential by Gordon et al. [33], normalized to the unit length a|[1-21]| ∼ 7 Å. The FAC increases with temperature such that γ = γ₀ + k_B T γ_w, exhibiting both anomalous and phonon wind drag. Inset: Comparison with direct measurement of the diffusivity. The values are in quantitative agreement with finite stress simulations [15].
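The dynamical route to γ described in the numerical-evaluation section above (time-averaged forces projected onto ∂_λU, with the force autocorrelation integrated only up to its first zero and divided by k_B T) can be sketched as below; the correlation estimator, the trapezoidal integration, the synthetic AR(1) force signal and all numerical values are illustrative choices of this sketch rather than details of the original calculations.

import numpy as np

def friction_from_forces(f_lambda, dt, k_B_T):
    """Estimate a scalar friction gamma from a projected defect-force time series.

    Builds C(tau) = <f(t+tau) f(t)> - <f>^2 and integrates it up to its first
    zero crossing, gamma ~ (1/k_B T) * integral of C (later lags treated as noise).
    """
    f = np.asarray(f_lambda, dtype=float)
    f = f - f.mean()
    n = f.size
    lags = n // 2
    corr = np.array([np.dot(f[:n - k], f[k:]) / (n - k) for k in range(lags)])
    zero = np.argmax(corr < 0.0)      # index of first zero crossing (0 if none)
    if zero == 0:
        zero = lags
    gamma = np.trapz(corr[:zero], dx=dt) / k_B_T
    return gamma, corr

# Toy usage with a synthetic exponentially correlated force signal.
rng = np.random.default_rng(2)
dt, tau_c, sigma = 0.01, 0.05, 1.0    # ps, ps, eV/A  (illustrative units)
noise = rng.normal(0.0, sigma, 20000)
f = np.zeros_like(noise)
for i in range(1, noise.size):        # AR(1) surrogate for f_lambda(t)
    f[i] = np.exp(-dt / tau_c) * f[i - 1] + noise[i]
gamma, _ = friction_from_forces(f, dt, k_B_T=0.025)
print(f"estimated gamma ~ {gamma:.3f} (eV ps / A^2, illustrative)")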
4,510.8
2014-11-20T00:00:00.000
[ "Physics" ]
The 2019 surface acoustic waves roadmap Today, surface acoustic waves (SAWs) and bulk acoustic waves are already two of the very few phononic technologies of industrial relevance and can be found in a myriad of devices employing these nanoscale earthquakes on a chip. Acoustic radio frequency filters, for instance, are integral parts of wireless devices. SAWs in particular find applications in life sciences and microfluidics for sensing and mixing of tiny amounts of liquids. In addition to this continuously growing number of applications, SAWs are ideally suited to probe and control elementary excitations in condensed matter at the limit of single quantum excitations. Even collective excitations, classical or quantum, are nowadays coherently interfaced by SAWs. This wide, highly diverse, interdisciplinary and continuously expanding spectrum unites advanced sensing and manipulation applications. Remarkably, SAW technology is inherently multiscale and spans from single atomic or nanoscopic units up to even the millimeter scale. The aim of this Roadmap is to present a snapshot of the present state of surface acoustic wave science and technology in 2019 and to provide an opinion on the challenges and opportunities that the future holds from a group of renowned experts, covering the interdisciplinary key areas, ranging from fundamental quantum effects to practical applications of acoustic devices in life science. Introduction Phonons represent-in addition to photons or electrons-a fundamental excitation in solid state materials. Over the past decades, innovation for radically new devices has mostly been driven by controlling electrons (electronics) and photons (photonics) or magnetic (magnonics) and spin excitations (spintronics). Recently, phonons have shifted back into the focus of both fundamental and applied research, as controlling them similarly to electrons and photons would, for instance, harness sonic energy in novel phononic devices [1]. Many current 'acoustic' devices employ acoustic phonons, which have striking analogies to their electromagnetic counterparts, photons. Both sound in a rigid material and light in a transparent medium share a linear dispersion and are only weakly attenuated. However, for sound waves, the propagation velocity amounts to a few thousand meters per second, which is roughly 100 000 times slower than the speed of light. Microacoustics deliberately takes advantage of these very dissimilar propagation velocities: electromagnetic microwave devices in the technologically highly relevant radio frequency (RF) domain, spanning the range from several 10s of megahertz to several gigahertz, are bulky, since the corresponding wavelength of light ranges between centimeters and metres. Using sound, these dimensions can be elegantly shrunk by a factor of 100 000 to fit on a small chip for signal processing in mobile communications. Thus, several dozen acoustic RF filters are integral parts of nearly every current (LTE) or future (5G) wireless device [2]. Surface acoustic waves (SAWs) and bulk acoustic waves (BAWs) also increasingly find numerous applications in the life sciences and microfluidics (acoustofluidics) for sensing or mixing and processing tiny amounts of liquids, leading to the so-called 'lab-on-a-chip' (LOC) or micro total analysis systems (µTAS) [3]. Such thumbnail-sized microfluidic devices are beginning to emerge and to revolutionize diagnostics in medicine.
Remarkably, all of the above devices are inexpensive-sometimes they may even be considered as consumables-because they are mass-produced by state-of-the-art cleanroom technologies. In addition to the continuously growing number of already very practical applications, SAWs and BAWs are ideally suited for fundamental research and to probe and control elementary excitations in condensed matter, even in the limit of single quanta. This Roadmap and its 15 contributions conclude the 'Special Issue on Surface Acoustic Waves in Semiconductor Nanosystems', which was initiated by the successfully completed Marie Sklodowska-Curie Innovative Training Network SAWtrain with ten beneficiaries in seven European countries. In the present Roadmap, we pick up several of these and other topics and present a snapshot of the present state of surface acoustic wave science and technology in 2019, and provide an opinion on the challenges and opportunities that the future holds. The topics addressed in this Roadmap are illustrated in figure 1. These span from the exploitation of phonons in emerging hybrid quantum technologies, through the manipulation and spectroscopy of collective excitations and signal processing, to advanced sensing and actuation schemes in life science. Figure 1 credits: reproduced or adapted from [100] (Copyright 2014 by the American Physical Society); 2D materials from [28] (CC BY 4.0); sensing from [146] (Copyright 2017 American Chemical Society); acoustofluidics from [158] (Copyright 2017 by the American Physical Society); cell manipulation © C Hohmann, NIM; all other icons: see the respective contributions of this Roadmap. Quantum acoustics with superconducting circuits Per Delsing 1 and Andrew N Cleland 2 1 Microtechnology and Nanoscience, Chalmers University of Technology, 412 96 Göteborg, Sweden 2 Institute for Molecular Engineering, University of Chicago, Chicago, IL 60637, United States of America Status Quantum acoustics (QA) is a relatively new research discipline which studies the interaction between matter and sound, in a similar way that quantum optics (QO) studies the interaction between matter and light. This interaction is studied using acoustic waves and individual quantum systems. The waves can be either surface or bulk waves, and the quantum system can, for instance, be a superconducting circuit or a semiconductor quantum dot (see section 2). Here, we will concentrate on superconducting circuits coupled to surface acoustic waves (SAWs), either in a SAW cavity, similar to circuit quantum electrodynamics (QED) [29], or to open space, similar to waveguide-QED [30]. Several experiments from the optics domain have been repeated in the acoustic domain. In 2014, it was shown [31] that a superconducting qubit could be coupled to SAWs by forming the capacitance of a transmon qubit [32] into an interdigitated transducer (IDT) (see figure 2). Acoustic reflection was shown to be nonlinear, and an excited qubit was shown to relax by emitting SAWs. The next development was the construction of SAW cavities with high Q-values (~10^5) [33], in which shorted IDTs were used as efficient acoustic mirrors. It was later shown that superconducting qubits could be placed inside these resonators (see figure 3) and strong interaction was observed [34,35]. Nonclassical phonon states, such as single-phonon Fock states and superpositions, have been generated and the Wigner function of these states was measured [36]. There are also very interesting differences between QA and QO.
The propagation speed of sound in solids, v, is approximately five orders of magnitude lower than that of light in vacuum. This results in short wavelengths for SAWs, so that new regimes can be explored that cannot be studied in QO. In one approach, the dipolar approximation breaks down and the superconducting circuit acts as a 'giant' artificial atom. The slow propagation also allows for manipulation of acoustic signals on-chip. This may in the future be used for routing and capture of propagating phonons. Moreover, interesting new functionalities are possible in quantum information due to the intrinsic time delay caused by the slow propagation. Current and future challenges Single phonon sources and receivers. It would be straightforward to make a single phonon source by exciting a qubit and then just waiting for it to emit a single phonon into an acoustic waveguide. There are, however, two challenges. Proving that this is really a single phonon source is not trivial, one way being to measure its second-order correlation function; this requires detecting the phonon in some way, possibly by conversion to a photon. Further, one needs to deal with the problem that standard IDTs emit the phonon with equal probability in both directions, using unidirectional IDTs instead. Giant atoms. The size of atoms, d, is always small compared to the wavelength of light, λ. This is true for all versions of QO, including cavity- and circuit-QED. In QA, however, the artificial atom made up of a superconducting circuit is normally substantially larger than the wavelength of the acoustic field, i.e. d > λ. This allows us to attach an acoustic antenna to the artificial atom, so that the emission from the atom can be frequency dependent and directional [37]. It has also been shown theoretically that nested pairs of such giant atoms in an acoustic waveguide can be coupled to each other while they are still protected from relaxation into the waveguide [38]. However, artificial atoms can also be giant in another sense, namely if the time it takes the SAW to pass the atom is larger than the relaxation time τ of the atom, d > vτ. This turns out to be a stronger condition than d > λ, so that if the atom is giant in the second sense, it is automatically giant in the first sense. In this case, there is a possibility that a phonon emitted from the atom can be reabsorbed by the atom [39]. This leads to non-exponential relaxation, which was recently demonstrated [40]. Figure 3: A superconducting qubit (left) is connected through an electronically controlled coupler (center) to an acoustic cavity formed by an interdigitated transducer facing IDT mirrors on either side (right). The qubit structure is fabricated on a sapphire substrate separate from the IDT structure on a LiNbO3 substrate, and is viewed looking through the transparent sapphire substrate. The two are assembled using a flip-chip technique. Similar to a device shown in [35]. Strong coupling to open space. The acoustic coupling between a superconducting qubit and an open acoustic transmission line can be made quite strong just by increasing the number of finger pairs. Choosing a strong piezoelectric material such as lithium niobate also increases the coupling. This makes it relatively easy to enter the deep ultra-strong coupling regime for acoustically coupled qubits. However, complications can occur if the anharmonicity of the transmon qubit is made much smaller than the coupling, so careful engineering or alternative qubit designs are needed (see below). Coupling to other quantum systems.
The ability to control phonons at the quantum level in SAW devices poses an interesting possibility, namely the potential for coupling to other quantum systems, such as two-level systems (TLS) or optically active defect states, such as the nitrogen-vacancy (NV) center in diamond [41] or the divacancy defect in silicon carbide. Some TLS may have strong interactions with phonons through the deformation potential, while perhaps having weaker coupling to electromagnetic fields. SAWs provide the interesting potential to probe such systems and possibly provide an avenue for quantum control [42]. Coupling to nanomechanical devices. Nanomechanical devices have been extensively developed over the past two decades, in part because of their utility as sensors and in part because they hold potential for quantum memories and for mode conversion, such as between mechanical motion and optical signals. SAWs provide an interesting opportunity for interacting with the mechanical degrees of freedom in these systems, and, with the advent of single-phonon control, the ability to operate and measure such systems in the quantum limit. Advances in science and technology to meet challenges Understanding and minimizing losses. In any kind of quantum information application, losses are unwanted. For a SAW delay line or a SAW-coupled qubit, there are several different kinds of losses, including: (i) conversion loss in the IDT; (ii) beam diffraction; (iii) beam steering; and (iv) propagation loss. All of these mechanisms depend on a number of parameters, including frequency, temperature, substrate material, sample layout, etc. In order to minimize losses, a systematic study of these loss mechanisms is needed. Ultrastrong coupling. With a transmon qubit, it is relatively simple to get very strong coupling to an acoustic transmission line. From the point of view of making a clean study of the ultra-strong and deep ultra-strong regimes, one would like to have an anharmonicity that is larger than the coupling. This is not possible in the transmon qubit, since its anharmonicity is at most about 10% of the qubit frequency [32]. Therefore, it would be interesting to investigate whether a capacitively-shunted flux qubit, which can have much higher anharmonicity, can be used. Unidirectional IDTs. As mentioned above, a normal IDT structure emits phonons with equal probability in both directions. For certain applications, like a single phonon source, it would be highly advantageous to make qubits and IDTs which are unidirectional. It has been shown that unidirectional IDTs with high conversion efficiency can be made [43], but they have not yet been applied in qubit devices. Concluding remarks SAWs have played an important role in conventional electronics, both for signal manipulation and, for example, as sensors. We believe their role in quantum physics could be equally important, both for fundamental science and for applications in quantum sensing. There are currently several groups with active efforts in this area, with new techniques being developed for coupling and control of SAWs. Status Surface acoustic waves (SAWs) play an important role in many branches of science and technology. Today, SAW devices are routinely integrated into compact electronic circuits and sensors.
This success is due to some exceptional features: (i) SAWs are confined close to the surface, (ii) they can be coherently excited and detected with microwave electronics, (iii) they can be stored in compact high-quality resonators or guided in acoustic waveguides over millimeter distances, and (iv) their properties can be engineered by choice of material and heterostructure [44]. Thanks to these features and further technological progress, SAWs have recently tapped into the emerging field of quantum acoustics (QA), with breakthrough experiments demonstrating the coherent quantum nature of SAWs in the few-phonon regime ([45] and section 1), initiating research on SAW-based quantum devices and technologies. To identify and analyze the challenges and prospects of the field, the analogy with quantum optics (QO) provides useful guidance. Quantum optical concepts and systems suggest novel counterparts in the solid state, with sound (phonons) replacing light (photons) and artificial atoms and quasiparticles taking over the role of natural atoms. As shown in figure 4, this correspondence principle reveals fruitful connections and notable differences between the field of SAW-based QA and some of the most prominent quantum optical systems. As in QO, we can distinguish two main uses of the acoustic field in QA: one is to provide an effective classical field that modifies the motional or internal state of a quantum system, while the other is to use the acoustic field as a quantum system in its own right, exploiting its full state space. In semiconductor implementations, uses of the first type have been demonstrated: single natural and artificial atomic systems have been coherently driven by SAWs, with evidence of phonon-dressed atomic states [46] and phonon-assisted dark states (see section 4) being reported, as well as the modulation of energy levels of quantum dots [7]. Moreover, SAWs have been used to provide moving potential wells for semiconductor quasiparticles as a route towards quantum channels for single electrons (see section 3) and for the study of many-body quantum ground states of an exciton-polariton condensate in SAW-induced lattices [47]. Current and future challenges Experimentally demonstrating hallmarks such as the Purcell effect, vacuum Rabi oscillations, and superradiance for semiconductor qubits in high-quality acoustic resonators would be the next steps towards cavity quantum acoustodynamics (QAD), as would be the generation of non-classical states of the acoustic modes. Some of these steps have already been realized for superconducting qubits (see section 1). SAWs have been proposed to address a number of challenges faced by implementations of quantum information processing (QIP), in close analogy to QO, and here we highlight two representative examples (see figure 5). First, a key ingredient for realizing large-scale quantum networks is the interconnection of independent nodes. Hence, one cornerstone of QIP architectures is a quantum data bus to distribute quantum information. In QA devices, phonons have been proposed to serve this purpose on-chip, either by coherently shuttling spin qubits [48] or by using resonator or waveguide modes to transport phononic quantum states ([49] and section 5). In particular, SAW modes in piezoactive materials can serve as versatile quantum transducers, even interfacing vastly different quantum systems in hybrid setups, including superconducting qubits, QDs, color centers and trapped ions [50].
Demonstrating the transfer of quantum information between different qubits using SAWs remains an outstanding challenge. Ultimately, this may pave the way for large-scale on-chip phononic quantum networks ([49] and section 5). To this end, further improvements regarding qubit and SAW coherence, coupling strength and SAW network fabrication are needed. Apart from these technological challenges, interesting theoretical questions arise from the peculiarities of phonon-based architectures in comparison with photon-based technologies. Specifically, the slow speed of sound entails non-Markovian effects in phononic quantum networks, which have intricate implications and will have to be worked out in more detail. Second, a key goal of QIP is to implement large-scale quantum simulators. Promising candidates from QO research are cold atoms confined to optical lattices and trapped ions (see figure 4). In the solid-state setting, SAW-based lattices have been proposed as a scalable platform for quantum simulation, e.g. of long-range Hubbard models [51,52]. Confining electrons in tunable effective periodic potentials, this would enable analogue quantum simulators reaching parameter regimes very different from those of their QO counterparts. Their experimental realization, however, poses several demanding requirements, as detailed below. Advances in science and technology to meet challenges The main challenges outlined above require both theoretical and technological advances. First, a thorough development of the quantum theory of sound-matter interactions is needed, which can be guided by QO but must especially take into account the SAW-specific peculiarities, such as the low speed of sound, the anisotropic medium in which SAWs propagate, the comparatively large size and intricate structure of artificial atoms, and the specifics of quasi-particle dispersion. These can give rise to entirely new phenomena, as has been pinpointed, e.g., in the case of giant atoms, where the dipole approximation breaks down and largely unexplored non-Markovian parameter regimes can be entered (see section 1 and references therein). On the other hand, as SAW-based quantum simulators may provide access to yet unexplored energy scales of long-range Hubbard models, QA extends the scope of testbeds for quantum technologies and QIP, but it also requires the development of advanced methods of quantum many-body theory to guide and interpret these results. The technological challenges concern the fabrication of a compact device comprising all necessary components and its operation in the quantum regime. In the case of large-scale quantum networks, these components include high-quality SAW resonators, low-loss phononic waveguides, and long-lived qubits with excellent coherence properties and good coupling to the phonon modes. Relevant SAW modes have to be singled out and protected from their mechanical environment, as can be achieved by embedding the network in a phononic crystal lattice ([49] and section 5). Ultimately, all these individual building blocks will have to be put together in a single experiment. Regarding SAW-based quantum simulators, the necessary technical requirements for a faithful implementation have been put together in a concise list [52].
As it turns out, all stringent conditions on low temperatures, high SAW frequencies and suitable high-mobility semiconducting materials can be met in state-of-the-art experiments, although there is still ample room to explore in order to identify the most promising combination of materials, heterostructures, and quasi-particles. These need to be supplemented with suitable read-out procedures to access the result of the quantum simulation. Concluding remarks To conclude, we have discussed and analyzed an emerging research field situated at the intersection between classical (relatively mature) SAW-based devices and quantum science. Using the powerful framework of QO and quantum information science, we have identified several promising research directions which are likely to lead to further rapid progress, both theoretically and experimentally, with both the potential to resolve some of the shortcomings inherent to quantum optical platforms (such as the short-ranged nature of interactions between ultracold atoms in optical lattices, the scalability issues faced by current trapped-ion setups, or the large structure size of circuit-QED devices), as well as the ultimate outlook of accessing yet unexplored parameter regimes. Potential future applications of this still young research field include phonon-based quantum networks, quantum simulation of many-body dynamics, or phonon quantum state engineering, yielding (for example) squeezed states of sound, as required for improved quantum-enhanced sensing and sound-based material analysis. Figure 4: Summary of our correspondence principle between QO and the emergent field of QA. With this dictionary, we can establish insightful connections between these two fields of research, ranging from cavity QED all the way to optical lattices, but also anticipate novel phenomena because we gain access to very different parameter regimes, as exemplified here for the relevant speed of light (sound) and the charge-to-mass ratio. Further details are given in the text. Status The control of single electrons is of importance for many applications such as metrology or quantum information processing (QIP) [54]. Originally, the field was motivated by the development of single-electron pumps in the quest for a fundamental standard of electrical current linking the ampere to the elementary charge and the frequency [55]. A high-accuracy single-electron pump is of importance as it allows for the precise determination of the value of the elementary charge. This is one of the seven reference constants in the new SI units, which will be redefined in 2019 [56]. Single-electron pumps based on surface acoustic waves (SAWs) look promising, as the pump can be operated at frequencies of several GHz and hence provides a much larger current compared to other approaches. A quantized acoustoelectric current can be generated when transporting electrons with a SAW through a narrow channel defined by electrostatic gates in a 2D semiconductor heterostructure. The precision of the current plateaus, however, has never exceeded about one part in 10^4 (100 ppm), due to the relatively shallow confinement potential [55]. In parallel to the development of controlled single-electron transport by SAWs, much research has been devoted to the coherent control and manipulation of a single electron confined in a gate-defined quantum dot, in order to exploit this for QIP [57].
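For orientation, the quantized acoustoelectric current follows the standard pumping relation I = n·e·f, with n electrons carried per SAW minimum at the SAW frequency f; the frequencies used below are illustrative values only and are not taken from [55].

e = 1.602176634e-19            # elementary charge in C (exact in the revised SI)
for f_saw in (1.0e9, 3.0e9):   # illustrative SAW pump frequencies in Hz
    for n in (1, 2):           # electrons transported per SAW minimum
        current_nA = n * e * f_saw * 1e9
        print(f"f = {f_saw/1e9:.0f} GHz, n = {n}: I = {current_nA:.3f} nA")

At a few GHz this gives currents of a few tenths of a nanoampere per transported electron, which is the sense in which SAW pumps provide a much larger current than slower pumping approaches.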
Combining these two approaches has made it possible to transport individual electrons (rather than a stream of single electrons) controllably. A single electron can be transported on demand by a SAW between distant QDs (see figure 6) with very high fidelity [58,59]. More recent experiments have also achieved transfer of the spin information of an electron [60] using the same technique and have generated streams of single photons by pumping single electrons into a region of holes [61].
Current and future challenges
In quantum technologies, the elementary building block is a TLS: the qubit. Most approaches focus on localised qubits, but some utilise flying qubits, where the qubit is manipulated in flight. Currently, the only technology that uses propagating quantum states is quantum optics (QO), where the quantum information can be coded into photon polarisation. Similar experiments should be possible with single moving electrons in a solid-state device, where the Coulomb coupling between electrons provides a means of manipulation. Photons are non-interacting quantum particles and therefore have a longer coherence time than electrons. However, owing to this absence of interactions, it is very hard to construct a two-qubit gate that operates at the single-photon level. An important challenge for electron QO is coherent control of the single electron in flight. This control would allow quantum operations to be performed on the quantum state of the flying electron, and hence a solid-state flying-qubit architecture could be implemented. The question of scalability is a central issue in engineering a spin-based quantum computer [62]. This is likely to require the coherent transfer of a single electron between two distant static qubits, for entangling qubits, error correction, or transfer to and from a quantum memory. Here, SAW-driven QDs have been identified as an interesting platform to control the displacement of the electron spin, at high but precisely controlled speed and with low requirements in terms of gate control. Another application of single-charge and/or spin transfer is the conversion from an electron qubit to a photon qubit, or at least the read-out of the spin by measurement of the polarisation of the generated photon. These have not yet been achieved, but progress is being made in the generation of single photons by single electrons, a large and essential step in the right direction [61]. Coupled with single-photon detection, perhaps also by SAWs, one can envisage a hybrid solid-state-optical system in which qubits move back and forth between photons and static solid-state dots, allowing the transmission of quantum information over large distances as photons, for quantum cryptography, and the manipulation and entanglement of qubits for use as a quantum repeater to extend the transmission range in cryptography. Here, photon qubits must be captured and stored, and then entangled pairs of photons generated and sent in opposite directions. Deterministic, low-loss and high-fidelity conversion and coupling of qubits are required.
Advances in science and technology to meet challenges
Coherent control of single flying SAW electrons can be realised by bringing two SAW quantum rails into close contact and making them interact by tunnel coupling [63]. The resulting coherent oscillations of the electron between the two rails would prove the presence of coherent transport. One could also attempt to control the quantum state of the electron in flight dynamically by ultrafast gate operations.
This would allow the observation of such coherent oscillations in the time domain. Realising coherent single-electron transport is, however, quite challenging. The quantum state of the electron has to be preserved during propagation and should not be perturbed by the environment. Several issues have to be addressed, such as the interaction with the random background of nuclear spins, the fluctuating electrostatic background potential induced by dopants in the semiconductor heterostructures, and the smoothness of the electrostatic gate potential needed to ensure adiabatic transport. Undoped systems will reduce scattering significantly, but suitable gate designs to define static dots need to be developed. Building up a scalable flying-qubit architecture also requires the ability to synchronise several single-electron sources. Currently, the limitation lies in the length of the SAW train, which is composed of over a hundred SAW minima. To synchronise two SAW sources, it is hence necessary to know exactly in which minimum the electron is loaded. Using ultrafast gate triggering, it is indeed possible to load a single electron into a predetermined SAW minimum with very high efficiency, but it could be advantageous to engineer SAW transducers that allow the generation of a single SAW minimum without sacrificing amplitude. This would suppress the additional minima, which do not contribute to the single-electron transport but represent an additional background perturbation. As far as spin is concerned, minimising the perturbation from the SAW excitation before and after the transfer is key to probing efficient and coherent spin transfer of individual electrons. Challenges facing the conversion between spin and photon qubits include the efficient emission of single photons (which requires better p-n junction design and the combination of a SAW with a Bragg stack in a pillar projecting above the surface on which the SAW propagates). Also, the directions in which the spins of the electron and of the hole with which it recombines are initialised must be orthogonal to avoid decoherence of the emitted photon, requiring particular wafer facets and layers.
Concluding remarks
Although there remain considerable challenges ahead, SAWs have the potential to provide the first electronic flying qubit as well as novel flying-qubit architectures [64]. They are also particularly relevant to plans to use single-electron buses for retrieving and distributing quantum information stored in QDs that are embedded in a complex network. There remain open questions on the operation of these devices as well as their applicability to other materials, such as nuclear-spin-free materials like ²⁸Si, which looks very promising for spin-based quantum computation, though a piezoelectric layer would need to be added to provide the SAW potential. Further applications and functionalities of these devices are expected in fundamental science, as well as in applied research, including their use as novel phononic lattices [52].
Status
Defect centers in solids can feature exceptional spin properties, including long spin decoherence times and highly efficient optical state preparation and readout. These spin systems provide a promising experimental platform for quantum computing. High-fidelity quantum control of individual spin qubits has been achieved in a number of solid-state spin systems. An important next step is the control of interactions and the generation of entanglement between individual spin qubits.
Coherent interactions between individual defect centers mediated by magnetic dipolar coupling or by long-range optical interactions have been actively pursued. An alternative approach is to exploit spin-mechanical coupling, i.e. coupling spins to mechanical vibrations such as SAWs [50,41], and to develop a phononic network of defect centers [49,65]. Mechanical waves cannot propagate in a vacuum, and the speed of sound is many orders of magnitude slower than the speed of light. It is thus much easier to confine, guide, and control mechanical waves on a chip than optical waves. Coherent interactions between SAWs and defect centers have been demonstrated for single negatively charged nitrogen-vacancy (NV) centers in diamond and for an ensemble of neutral divacancy (VV) centers in silicon carbide. The coherent spin-SAW coupling of a single NV takes advantage of the strong strain coupling of the orbital degrees of freedom of the NV excited states and occurs through the sideband optical transitions, as shown in figure 7(a) [66]. Rabi oscillations of a single NV center have been achieved via the SAW-driven sideband transitions [66]. The coupling between the ground spin states and the SAW can take place via a resonant Raman process, which incorporates a sideband optical transition in a Λ-type three-level system, as illustrated in figure 7(b) [67]. These Raman processes allow the use of the strong excited-state strain coupling without populating the excited states, thus avoiding the rapid decay of the excited states [67]. For the coherent spin-SAW coupling of ensembles of VV centers in silicon carbide, a SAW resonator that focuses and confines acoustic waves in a Gaussian geometry has been developed [42]. The strong confinement provided by the SAW resonator enables the realization of Rabi oscillations and Autler-Townes splitting, driven directly by the SAWs via the ground-state strain coupling [42].
Current and future challenges
There are two basic challenges for the use of mechanical processes in quantum operations. First, coupling of a mechanical system to the surrounding environment leads to mechanical decoherence; ultrahigh mechanical quality factors are thus needed to isolate the mechanical system from the environment. Second, mechanical systems are inevitably subject to thermal mechanical noise. Although various cooling processes, including cryogenic cooling, can be used, it is highly desirable that mechanically mediated quantum operations be robust against a small number of thermal phonons. In addition, a spin-mechanical system operating in the quantum regime requires the single-phonon spin-mechanical coupling rate to exceed the mechanical as well as the spin decoherence rates. There are also a number of important issues that are unique to phononic quantum networks. The single-phonon coupling rate between a spin qubit and a mechanical mode, which determines the rate of gate operations, scales as 1/√m, with m being the mass of the relevant mechanical system. Furthermore, the nearest-neighbor coupling of a large number of mechanical resonators leads to spectrally dense mechanical normal modes, which can induce crosstalk between these modes and limit the number of mechanical resonators that can be used in a network. These scaling issues are well known in established phononic quantum systems, such as ion-trap quantum computers [68]. Furthermore, high-fidelity quantum-state transfer in a network usually requires a cascaded or unidirectional network.
While cascaded optical quantum networks can be realized with chiral optical interactions, as demonstrated with atoms and QDs, chiral acoustic processes, and thus cascaded phononic networks, are difficult to implement in a solid-state system.
Advances in science and technology to meet challenges
Recent studies in cavity optomechanics have shown that phononic bandgaps can provide nearly perfect isolation of a mechanical mode from its surrounding mechanical environment [69]. Further advances in phononic engineering can incorporate phononic crystal shields in phononic quantum networks of defect centers. Extensive research efforts on new defect centers, including new materials systems, may lead to the design and realization of defect centers that feature spin properties and spin-mechanical coupling processes superior to those of the defect centers, such as diamond NV centers, used in current experimental studies. Mechanically mediated quantum operations that disentangle the mechanical subsystem from the rest of the system can in principle be robust against thermal phonons [70]. Further theoretical and experimental exploration of these or related quantum operations in a spin-mechanical system can lead to phononic networks that operate at elevated temperatures. The scaling issues discussed above are inherent to any large mechanical system. A conceptually simple solution is to break a large phononic network into small and closed mechanical subsystems. The use of closed mechanical subsystems can not only overcome the scaling problems, but also avoid the technical difficulty of implementing chiral phononic processes [49]. This type of mechanical subsystem can be formed in a network architecture that features alternating phononic waveguides and uses two waveguide modes for communications between neighboring quantum nodes, as illustrated schematically in figure 8(a). A quantum network of spins can be formed when the closed mechanical subsystems are coupled together via the spins, as shown in figure 8(b). This phononic network can also be embedded in a phononic crystal lattice (see figure 8(c)). The successful realization of these complex spin-mechanical systems will depend crucially on advances in nanofabrication as well as in defect-center implantation technologies for materials such as diamond or SiC.
Concluding remarks
With the recent experimental realization of coherent coupling between SAWs and defect centers in solids, one of the next milestones is the use of mechanical vibrations such as SAWs to mediate and control coherent interactions between individual defect centers and the corresponding spin qubits. Scaling up these processes in a phononic quantum network can potentially enable a new experimental platform for quantum computing. Advances in phononic engineering, nanofabrication, thermally robust quantum operations, as well as the materials science of defect centers will be needed in order to overcome the fundamental and technical challenges.
Status
The coupling between elastic waves and single quantum dots (QDs) has a long history. In the early days of QD research, their coupling to phonons was considered mainly detrimental. For instance, the predicted phonon bottleneck [71] and phonon-induced dephasing [72] were assumed to prevent the realization of QD lasers or to limit the fidelity of quantum operations, respectively.
As the field developed, many presumed challenges related to the QDs' susceptibility to phonons turned out to arise only in very rare settings, as is the case for the phonon bottleneck. Remarkably, concepts have been developed and implemented which deliberately employ the coupling between phonons and, for instance, excitons in phonon-assisted quantum gates. Dynamic acoustic fields, in the form of a piezoelectric surface acoustic wave (SAW), were put forward [73] as a high-precision tool to regulate the injection of electrons and holes into the dot and thus generate a precisely triggered train of single photons, even before the first demonstration of single-photon emission by a single QD. Progress in the following years includes the experimental implementation of this acousto-electric scheme [74] and the development of advanced schemes incorporating concepts of solid-state cavity quantum electrodynamics. In parallel, the dynamic modulation of the QD's narrow emission lines and the underlying coupling mechanisms were investigated. The observed spectral modulation faithfully reproduces the temporal profile of the phononic waveform [75,76]. When the frequency of a SAW phonon exceeds the optical linewidth, the system is in the resolved-sideband regime [77]. In this key experiment, the QD exciton mediates a parametric coupling between the incoming and the scattered photons, with their energies differing by the phonon energy. Figure 9 shows emission spectra of a single QD modulated by a SAW of increasing amplitude. Moreover, the SAW's coherent phonon field was found to modulate the narrow-linewidth optical modes of photonic crystal cavities [78] and of embedded QDs. This way, single-photon emission can be triggered precisely at the time the emitter is tuned into resonance with the optical mode, via the Purcell enhancement; at all other times, the emission is strongly Purcell-suppressed [79]. The sound-controlled light-matter interaction in QD-nanocavity systems can be directly extended to implement entangling quantum gates employing Landau-Zener transitions for experimentally demonstrated system parameters [80].
Current and future challenges
Parametric excitation. The optical two-level system (TLS) of the QD enables parametric mixing of three waves. Already in the first experimental report on SAW-sideband modulation of a QD [77], parametric excitation of the QD exciton was achieved: by optically pumping one of the phononic sidebands, interconversion between the optical and mechanical domains was realized. This scheme enables, for instance, laser cooling of mechanical motion and the interfacing of single semiconductor quantum emitters with propagating or even localized phonon fields. Parametric excitation is needed for future classes of hybrid devices whose operation is governed by classical and, ultimately, quantum mechanical effects.
Phononic environments. In general, the coupling of optically active semiconductor quantum emitters to elastic waves is comparatively weak. Therefore, a grand challenge lies in enhancing the underlying coupling between the elastic field and the quantum emitter, such that the optomechanical coupling exceeds the decoherence rate of the exciton. The governing deformation potential and the strength of the piezoelectric effect are material parameters and thus fixed. Therefore, a strong localization of the elastic field is imperative to enhance the optomechanical coupling. To control these interactions, tailoring of the phononic environment is essential.
The coupling between sound and matter can be either suppressed or enhanced by a low or high phononic density of states, respectively.
Optical and electrostatic QDs. The SAW-mediated transport of spins and charges allows for the acoustic transfer of quantum information. Such schemes have been conceived and implemented for electrostatic QDs, which have been controlled and interconnected by SAWs [54]. The QDs in focus here are addressed by resonant lasers, enabling spin-qubit control [81]. Combining the individual strengths of both QD systems (the long-range SAW transfer of single charges and spins in electrostatic QDs, and the high-fidelity optical programming and manipulation of a chip-based stationary qubit and its mapping onto, and entanglement with, single photons) would mark another hallmark achievement in the field.
Advances in science and technology to meet challenges
Optomechanical crystals. These metamaterials, which support both photonic and phononic bandstructures, are a natural candidate system because they can be combined with QDs. In a recent experiment, for instance, the optical and mechanical modes of an optomechanical cavity were coherently controlled by sound [82] (see section 8). Most remarkably, a mean occupation of less than a single coherent GHz phonon can be detected on the incoherent background of more than 2000 thermal phonons at room temperature (RT). When made in the (In)GaAs material system, QDs can be embedded inside the membrane during crystal growth. This tripartite system is illustrated in figure 10. It allows photons and phonons to be confined to the smallest volumes and single QDs to be coupled to these excitations. In addition, waveguide structures (background) route photons and phonons in the plane of the membrane and form an on-chip interconnect. The fabrication of such devices thus represents a key enabling technological advancement towards the control of light, sound and matter on a chip.
Semiconductor-SAW hybrids. Engineers have been continuously developing SAW and other microacoustic devices over the past few decades, almost exclusively for RF signal processing and communication purposes. Hybrid SAW-semiconductor devices can combine advanced SAW devices fabricated on strong piezoelectrics, such as LiNbO3, with epitaxial semiconductor QDs, harnessing these engineering paradigms for fundamental studies on QDs [83]. The deliberate hybridization of an epitaxial QD in a membrane with a LiNbO3 SAW resonator would mark a key technological advancement. In such a device, an enhanced optomechanical coupling [8] and a high-quality-factor phononic mode could be interfaced. In a next, more advanced step, the semiconductor epilayer could be patterned to create a phononic circuitry.
Nanowires. In contrast to the planar architectures considered above, heterostructure nanowires are a promising, inherently 1D platform. By tuning the geometric dimensions of the heterostructure, phononic confinement can be achieved to enhance the coupling between sound and matter. In addition, the nanowire provides a 1D channel for the transport of charges and spins. Combining the recently demonstrated SAW-regulated tunnel extraction of carriers out of [84], and their injection into [85], a quantum emitter would mark the achievement of a key scientific and technological challenge.
Concluding remarks
The great strength of acoustic and elastic waves, and of acoustic phonons in general, is that they couple to almost any system, whether classical or quantum mechanical.
Thus, the concepts and challenges discussed above can be applied to other types of quantum systems. Most notably, significant progress has been made on coupling defect centers in diamond and silicon carbide (see section 4) to propagating and localized SAWs [70]. The prospect of optically active QDs integrated in phononic and optomechanical devices uniquely interfaces RF phonons with a highly coherent TLS that can be addressed with near-infrared light. These QDs can even be designed for telecom wavelengths, which could ultimately lead to high-fidelity transduction of quantum information from a single GHz phonon to a single optical photon.
Status
In a quantum liquid (QL) or superfluid state, an ensemble of integer-spin quasiparticles (bosons) occupies a single quantum state and can flow without dissipation or sustain quantized vortices and persistent currents. At the heart of this state of matter is Bose-Einstein condensation (BEC), a quantum phase transition first predicted by Satyendra Nath Bose and Albert Einstein in 1924-1925. Pure BEC occurs in an ideal non-interacting bosonic gas at very low temperatures; in a QL, in contrast, interactions are a fundamental feature. The prospect of a QL in a semiconductor chip is appealing, since it allows one to exploit the entanglement of the constituent quasiparticles. BEC of excitons (neutral bound states of an electron and a hole) in condensed matter was first predicted in 1962 [86]. The search for exciton BECs and QLs has become very intense in the last couple of decades, in part due to the availability of fabrication methods for high-quality semiconductor heterostructures, where energy-band engineering enables the quantum confinement of excitons. More recently, composite photon-exciton bosonic quasiparticles (polaritons) have also been intensively studied [87]. Polaritons exist naturally in bulk semiconductors, but in microcavities (MCs), sophisticated heterostructures capable of confining light (see figure 11(a)), it is possible to enhance their population to reach BEC. Polaritons have a micrometers-long de Broglie wavelength λ_dB due to their low mass (typically 10^-4 to 10^-5 of the electron mass) and can thus form BECs and QLs even at RT. In GaAs structures, these phases appear only up to a few kelvin, due to the small exciton binding energy. Harnessing the full potential of these QLs in devices is still a big challenge. To achieve this goal, one requires ways to manipulate QLs, such as micro-patterning of the MC or the application of electric, magnetic and/or SAW acoustic potentials. In contrast to static modulation techniques, the amplitude of the potential produced by a SAW can be changed by controlling the power applied to generate it. The spatial modulation of polariton QLs by square lattice potentials created by SAWs has been successfully demonstrated, and interesting phenomena, such as the fragmentation of a polariton condensate and gap-soliton formation, have been observed (figure 11(b)) [25].
Current and future challenges
The best-studied polariton structures are epitaxially grown (Al,Ga)As-based MCs [87]. A MC consists of a spacer containing quantum wells (QWs) inserted between two distributed Bragg reflectors (DBRs, see figure 11(a)). A non-piezoelectric SAW propagating on the MC surface interacts with polaritons mainly by modulating the exciton levels in the QWs and the MC optical resonance energy with its evanescent hydrostatic strain field.
The optimal depth for polariton modulation is roughly λ_SAW/4, where λ_SAW is the SAW wavelength. For example, a typical top DBR is 2 µm thick in an (Al,Ga)As-based MC, so λ_SAW ≳ 8 µm (inset in figure 11(a)) [25]. The value of λ_SAW is thus tied to the top DBR thickness. Reducing λ_SAW opens interesting perspectives. Polariton blockade due to polariton-polariton interactions has been predicted for confinement dimensions below 1 µm [88]. The fabrication of arrays of sub-µm micropillars in GaAs MCs by micro-patterning techniques such as reactive ion etching is, however, challenging, due to the thickness of the multilayer MCs (five or more microns). The modulation of MCs by SAWs with λ_SAW of about 1 µm or below could allow us to create perfect, amplitude-tunable lattices (see figure 11(b)) with a single polariton per lattice site, where the inter-site tunnelling rate could be controlled. These acoustic lattices are thus solid-state analogues of optical lattices for cold atoms. Additionally, the adiabatic fragmentation of a polariton BEC into single, entangled polaritons (the superfluid-Mott insulator transition) by increasing the lattice potential would enable the massive generation of entangled photons [89]. Thus, finding a way of using high-frequency SAWs to modulate MC polaritons would be a significant advance. Note that a reduction of λ_SAW in the structure of figure 11 also requires a reduction in the thickness of the top DBR, which compromises the MC optical quality; a different approach must thus be used. Envisaging applications, RT polariton QLs and BECs have been demonstrated in MCs containing a polymer, where the exciton binding energy exceeds the thermal energy [87]. Polaritons have also been observed at RT in two-dimensional (2D) materials, such as transition-metal dichalcogenides (TMDCs). TMDCs have interesting spin properties at the K-point valleys of their band structure, which are inherited and enhanced by polaritons [90]. SAW modulation and collective quantum effects in these materials, however, remain to be studied.
Advances in science and technology to meet challenges
To achieve the ambitious goal of a polariton chip, several challenges must be tackled. For example, in order to be able to modulate polaritons with short-wavelength SAWs, novel MC architectures must be designed. One option is to use guided waves propagating along the MC spacer, which would allow the direct acoustic modulation of the QWs with high amplitudes and frequencies. Another option is the open-cavity system, where the upper DBR is replaced by an external mirror controlled by piezoelectric positioners [90]. The effects of SAWs in these systems remain to be studied. Finally, an interesting alternative approach for high-frequency modulation (tens of GHz) is laser-generated bulk acoustic waves that travel in the MC [91]. The polariton blockade mechanism also needs to be better understood. There is a considerable spread in the measured values of the polariton-polariton interaction energy (ΔE_pp) in polariton ensembles; for single polaritons, ΔE_pp has only been accessed experimentally very recently [92]. For the polariton blockade, the interaction energy must exceed the natural linewidth of the polariton levels. Here, either very high-quality MCs with long polariton lifetimes must be used or, as recently shown, the interactions must be enhanced, e.g. by using dipolar polaritons [93,94].
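To keep the two quantitative constraints from this paragraph in one place, they can be written compactly; the symbols d_DBR (the top-DBR thickness) and Γ_pol (the polariton linewidth expressed as an energy) are introduced here only for convenience and do not appear in the original text:

\[ \lambda_{\mathrm{SAW}} \gtrsim 4\,d_{\mathrm{DBR}} \approx 4 \times 2\ \mu\mathrm{m} = 8\ \mu\mathrm{m} \quad \text{(modulation depth } \approx \lambda_{\mathrm{SAW}}/4 \text{ must reach the QWs below the top DBR)}, \]
\[ \Delta E_{\mathrm{pp}} > \Gamma_{\mathrm{pol}} \quad \text{(condition for polariton blockade)}. \]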
Large-area, high-quality TMDC monolayers can now readily be fabricated, opening the possibility for experiments involving SAW modulation and collective effects. Additionally, the use of van der Waals heterostructures (stacks of different TMDC monolayers) could allow the electrical manipulation of polaritons or dipolaritons in TMDC-based MCs. Finally, it is unlikely that a potential polariton chip will rely on a single modulation technology for manipulation; a mix of complementary static and dynamic techniques will be necessary. The latter requires a strong effort in the engineering of hybrid structures combining SAWs and micro-patterning, potentially in combination with in situ electric and/or magnetic fields. Brilliant but isolated efforts have demonstrated the efficient SAW modulation of a QD inserted in a pillar-shaped MC [95]. The combination of, for example, a condensate in complex 2D potentials with acoustic modulation by SAWs opens interesting possibilities for the implementation of enhanced modulation schemes.
Concluding remarks
The modulation of polariton and exciton QLs is an exciting and challenging research field with great applicative potential and many interesting challenges and open questions. SAWs have a special place among the different techniques used, since they provide a dynamic degree of freedom. Harnessing the full potential of QLs in semiconductor chips to implement advanced devices, such as quantum simulators and single-photon generators, requires an interdisciplinary effort combining materials science, optics, quantum physics and engineering.
Status
Excitons, electron-hole pairs coupled by the Coulomb interaction, are the main quasi-particles mediating the interaction between light and electronic excitations in semiconductors; exciton-based information storage and manipulation therefore provide a straightforward approach for the processing of optical information in solid-state structures. Two approaches towards this goal based on surface acoustic waves (SAWs) have recently emerged. The first comprises the acoustic modulation of microcavity polaritons, quasi-particles resulting from the strong coupling between excitons and photons in a microcavity. The second, which will be discussed here, relies on indirect (or dipolar) excitons (IXs) in a double quantum well (DQW) separated by a thin tunnelling barrier (see figure 12(a)). An electric field E_z applied across the DQW drives electrons and holes to different wells, while maintaining the Coulomb correlations between them. The field-induced spatial separation controls the IX lifetime, which can reach the ms range, thus opening the way for the realization of exciton-based memories and excitonic circuits [96]. The charge separation also imparts an electric dipole moment to the IXs, which increases IX-IX interactions [97] and can thus be exploited for IX-IX control gates [98]. The transport of charge-neutral IXs can be driven by a lateral gradient of E_z; the latter provides an in-plane force that has been exploited in functionalities such as IX conveyors [99] and transistors [96]. These field gradients are, however, always accompanied by an in-plane electric field component, which destabilizes excitons. The strain field of a non-piezoelectric SAW (i.e. a purely elastic mode devoid of a piezoelectric field) provides, in contrast, a powerful tool for IX control while preserving their stability.
This strain field can induce a type-I periodic modulation of the conduction (CB) and valence band (VB) edges via the deformation-potential interaction [100], which captures IXs at the sites of minimum band gap and transports them at the acoustic velocity v_SAW (see figure 12(b)). This strain-induced modulation increases IX stability and contrasts with the type-II modulation by a piezoelectric SAW employed for the transport of uncorrelated electron-hole pairs.
Current and future challenges
A main challenge for acoustic IX transport is the weak strain-induced amplitude of the band-gap modulation, which in (Al,Ga)As structures is typically a few meV. Efficient long-range IX transport can nevertheless be observed in structures with high IX mobility, as illustrated in figure 12(c). Here, the transport is probed by optically exciting IXs with a focused laser beam and mapping their spatial distribution along the SAW transport path using spatially resolved photoluminescence (PL). The two PL maps superimposed on the device structure of figure 12(a) compare the excitonic PL in the absence (left) and presence (right) of a SAW. In the former, the PL is restricted to the neighbourhood of the excitation spot. Under a SAW, in contrast, one observes PL at the edge of the semi-transparent electrode (STE) located approximately 500 µm away from the laser spot. The remote PL is attributed to the recombination of IXs transported by the SAW to the edges of the STE [100]. This assignment is confirmed by the spectral dependence of the PL along the SAW channel displayed in figure 12(c). While the spectral signatures of the neutral (DX) and charged direct exciton (DX±) around 1.53 eV remain close to the excitation spot, the energy of the weak PL trace along the SAW path and of the strong emission at the STE edge corresponds to that of the IXs. The transport dynamics (see figure 12(d)) reveals that most of the IXs remain confined in the SAW potential and move with velocity v_SAW. Some of the IXs, however, are delayed due to trapping along the path, which reduces the transport efficiency [102]. Acoustic transistors consisting of gates on the SAW path can store IXs and control their flow [101]. Furthermore, the direction of the IX flow can be bent by 90° by interfering orthogonal SAW beams. The bending relies on the moving square potential lattice created by the interference of the beams, which moves along an oblique direction and transfers IXs between the beams. Lazic et al [101] demonstrated an acoustic IX multiplexer based on this lateral transfer, which enables the coupling of several IX sites and forms the basis for scalable IX circuits.
Advances in science and technology to meet challenges
Prospects for acoustic IX manipulation include the storage and transport of single IXs using high-frequency SAWs. It has recently been demonstrated that single IXs can be isolated using µm-sized electrostatic traps [103]; similar potentials can be created by driving IXs along a narrow channel using SAWs with sub-µm wavelengths, as illustrated in figure 13. The discrimination of single-IX states relies on the repulsive IX-IX dipolar interactions, which, in a way analogous to Coulomb repulsion, make the energy of the confined IXs dependent on their population (see the inset of figure 13). The quantum state of the transported IXs can be initialized via the absorption of a polarized photon and manipulated along the transport channel by gates or via dipolar interactions with an IX pool close to the channel [98].
Finally, IXs can be captured by a two-level trap after transport, leading to the emission of single photons [104]. If combined with the multiplexer concept, the scheme of figure 13 thus forms the basis for a scalable solid-state quantum processor with a built-in interface for long-range information exchange via photons. Another important feature of IXs is the combination of a composite-boson character with dipolar inter-particle interactions. The latter gives rise to a rich phase diagram for dense IX ensembles, including an exciton liquid and a Bose-Einstein-like condensate. Modulation by short-wavelength SAWs can be an interesting tool to probe the spatial coherence of these phases. The application of SAWs to the investigation of both dilute and dense IX phases faces several challenges in fabrication technology and acoustics (e.g. the generation of strong SAW beams with sub-µm wavelengths), as well as in the materials science (IX mobility control, reduction of potential fluctuations) and physics (coherence effects and interaction mechanisms) of excitons. Finally, the small exciton binding energy in GaAs is a major limitation for all IX-based applications. The concepts for acoustically based functionalities described above can, however, be extended to other material systems with higher binding energies, such as GaN and ZnO heterostructures and IXs in 2D materials [105], where excitons are stable up to much higher temperatures.
Concluding remarks
SAWs enable the creation of a tunable strain field with µm-sized dimensions in semiconductor nanostructures. We have shown here that this field is a powerful tool for the modulation of the energy levels, confinement and transport of IXs. Research prospects for the combination of SAWs and IXs include the investigation of dense IX phases as well as the realization of scalable quantum opto-electronic circuits based on the control of single IX entities.
Status
The growth of the field of cavity optomechanics [106] has been partly brought about by advances in micro- and nano-electromechanical systems (MEMS/NEMS) and nanophotonics. These systems, in which optics and mechanics interact via radiation-pressure, photothermal, and electrostrictive forces, have been developed across many material platforms and geometries. As the field pushes towards higher mechanical mode frequencies in an effort to achieve stronger interactions and sideband resolution (single-sideband operation), surface acoustic wave (SAW) devices provide a natural platform for exciting high-frequency motion and exploring optomechanics with travelling acoustic waves (the regime of stimulated Brillouin scattering) [107]. The rationale for integrating SAW transducers (and, more generally, piezoelectric devices) with cavity optomechanics is also driven by other trends. One is the desire to interface RF electromagnetic fields with optics. This has relevance to classical applications, such as microwave photonics, as well as to quantum information science, where efficient and low-noise frequency conversion between the microwave and optical domains could remotely connect superconducting quantum circuits via optical links. A proof-of-principle demonstration combined capacitive electromechanical transduction with dispersive optomechanical transduction [108], where the latter used a free-space Fabry-Perot cavity modulated by a thin membrane vibrating at MHz frequencies.
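As context for the frequency requirements discussed next, the sideband-resolved upconversion step can be summarized in generic cavity-optomechanics notation (ω_cav: optical cavity frequency, κ: its linewidth, Ω_m: mechanical/acoustic frequency, ω_pump: pump laser frequency); this is a textbook relation rather than a statement about any particular device in this section:

\[ \omega_{\mathrm{pump}} = \omega_{\mathrm{cav}} - \Omega_{\mathrm{m}}, \qquad \omega_{\mathrm{pump}} + \Omega_{\mathrm{m}} = \omega_{\mathrm{cav}} \ \text{(anti-Stokes sideband)}, \qquad \Omega_{\mathrm{m}} \gg \kappa \ \text{(resolved sidebands)}. \]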
Realizing a fully chip-integrated transducer will likely require a mechanical frequency in the hundreds of MHz or GHz range, to be sideband-resolved and enable broader conversion bandwidths. At GHz frequencies, capacitive transduction is inefficient, whereas piezoelectric approaches are more naturally suited, as evidenced by the many existing technologies in the GHz domain (e.g. SAW and film bulk acoustic resonator (FBAR) filters). The integration of such approaches with nanocavity optomechanics has recently been explored. Bochmann et al [109] used integrated electrodes to drive an AlN optomechanical resonator at 4.2 GHz, while Fong et al [110] drove an AlN microdisk resonator at 780 MHz. Balram et al [86] directly integrated SAW technology by using an interdigitated transducer (IDT) to generate 2.4 GHz propagating acoustic waves that resonantly excited a GaAs optomechanical crystal cavity (figure 14). The integration of SAW devices in free-space optical resonators, which can have much narrower linewidths than integrated resonators, has also been considered [111], and SAW-based acousto-optic modulators [112] (see also section 5) have been pushed to >10 GHz operating frequency [113].
Current and future challenges
Piezoelectric cavity optomechanical systems [109][110][111][112][113] have illustrated the coherent interplay of the RF, acoustic, and optical fields, and new contexts in which this can be valuable, such as non-reciprocal optical systems, continue to be explored [114]. In general, microwave-to-optical transduction efficiencies have been low (<0.1%) [115], and their improvement is an important challenge, particularly for quantum applications. A schematic illustrating the microwave-to-optical conversion process is shown in figure 15(a). An RF drive resonantly excites an acoustic excitation, which is then upconverted to the optical domain by a pump whose frequency is detuned from the optical cavity by the mechanical (acoustic) frequency. The optical cavity enhances the coupling between optical and acoustic modes, and its linewidth must be narrow enough so that only the higher-frequency anti-Stokes sideband is effectively created. Optical and mechanical quality factors, piezoelectric and optomechanical coupling rates, and coupling of the input RF signal and output optical signal determine the overall efficiency. Achieving superlative performance across the optical, mechanical, and electrical domains requires appropriate isolation of the individual sub-systems. High optical quality factor resonators cannot be achieved if the optical field overlaps with the electrodes used in the piezoelectric device. Recent demonstrations of piezo-optomechanical systems [82,109] have avoided electrode-optical field overlap, and the relative ease with which this is accomplished is a strength of the piezoelectric approach. On the other hand, the extent to which piezoelectric substrates can achieve the ultra-high mechanical quality factors observed in materials like silicon [106] at low temperatures is not yet known. The choice of material starts with a consideration of its piezoelectric and photoelastic properties, and although the effective coupling strengths can be enhanced by geometry (via strong confinement and high quality factor), the material properties set basic tradeoffs (figure 15(b)). For example, AlN and LiNbO3 have significantly larger piezoelectric coefficients than GaAs.
However, GaAs-based devices have exhibited >10× larger optomechanical coupling rates, due to their larger refractive index and photoelastic coefficients [86]. In general, the optomechanical and electromechanical coupling rates should be equal to optimize the conversion efficiency (achieving impedance matching between the RF and optical domains).
Advances in science and technology to meet challenges
As noted above, efficiently mapping the RF input to an acoustic wave that is well coupled to the optical mode is a major challenge. This can be sub-divided into two tasks: converting the RF drive to an acoustic excitation, and coupling that acoustic excitation into a suitable optomechanical cavity. For example, optimizing the approach of [86] might combine more efficient IDTs with acoustic waveguide tapers (or use focusing IDTs), or it may require a different type of piezoelectric actuator (e.g. a resonator-based geometry) altogether. Moving from GaAs to a stronger piezoelectric material is another solution. Hybrid platforms that could combine a very efficient piezoelectric material (LiNbO3) with a high-performance optomechanical material (Si) might be the ultimate solution (figure 15(b)), though the fabrication and design complexity need to be considered. Alternatively, the continued development of materials that show both a strong piezoelectric and a strong photoelastic response, such as BaTiO3, within a thin-film platform suitable for chip-integrated nanophotonics and nanomechanics is another approach [116]. Also needed is the continued development of nanofabrication processes that limit sources of dissipation (both optical and acoustic) and excess heating, which leads to a non-zero thermal population of the mechanical resonator and ultimately serves as a source of added noise. In general, the combination of these different physical domains (RF, acoustic, and optical) in the context of quantum applications is a new field, with many basic experiments (e.g. the ultra-low-temperature performance of different piezoelectric transducer geometries) still to be performed. No less important than fabrication and measurement developments is the design of the overall transducer system, which requires both fundamental knowledge and detailed simulation capabilities that address the multiple physical processes involved. Current approaches largely focus on breaking the problem up into sub-systems that can be treated individually, enabling separate optimization steps. Given the recent progress in the RF MEMS community in developing piezoelectric resonators [117], and in the nanophotonics community in achieving record optical performance in piezoelectric platforms [118], the appeal of this approach is quite evident. However, as indicated above, the multiple tradeoffs and considerations involved in integrating the two types of devices suggest that this approach may not yield the best solution, and a more integrated design approach may provide benefits.
Concluding remarks
The integration of SAW devices (and, more generally, piezoelectric actuation) with cavity optomechanics enables the coherent interaction of RF electrical waves, acoustic waves, and optical waves in a common platform. This short overview has focused on quantum-limited microwave-to-optical transduction, but the general potential of this platform lies in the possibility of combining desirable characteristics of each of these domains in a way that can be tailored for different applications.
However, numerous challenges remain in combining these sub-systems while retaining the level of performance available to each in isolation. Continued development of nanophotonics and NEMS, combined with strong interest in the applications of these devices from the quantum information science community, suggests that interest in this topic will continue to increase.
(Figure caption: integration of a SAW transducer with a cavity optomechanical system, as in [86]. An IDT (left) generates a 2.4 GHz SAW that is coupled through a phononic waveguide and resonantly excites an optomechanical cavity (center), whose mechanical breathing mode (right) strongly interacts with a localized optical mode at 1550 nm.)
Geoff R Nash, Natural Sciences, The University of Exeter, Exeter, United Kingdom
Status
Since the first isolation in 2004 of free-standing graphene, an atomically thin layer of carbon atoms arranged in a honeycomb lattice, there has been rapidly growing interest not only in graphene research, but also in a wide range of other 2D materials [119]. Their large relative surface areas mean that these materials naturally lend themselves to integration with surface acoustic wave (SAW) devices. Not only can the waves and materials couple mechanically, but the electric fields generated by a SAW on a piezoelectric substrate can interact with any charge carriers present. The interactions between SAWs and 2D materials provide an exciting test-bed to study new phenomena, and could also ultimately form the basis of new electronic and photonic devices. To date, most research has been focused on the integration of SAWs and graphene, and a comprehensive review of this area can be found in [12]. Theoretical studies predict a range of rich physical phenomena arising from SAW-graphene interactions, such as plasmonic coupling, and graphene's potential as an extraordinarily responsive sensing material is also being exploited for the development of a wide range of SAW sensors. In addition, there has been much recent focus on acoustic charge transport, where the piezoelectric fields associated with a propagating SAW can be used to trap and transport charge, at the speed of sound, over macroscopic distances. Uniquely to graphene, the acoustoelectric current in the same device can be reversed, and switched off, using an applied gate voltage [28]. The use of a lithium niobate thin film on top of a conducting substrate allows the same effect to be observed in a more conventional transistor architecture [14], as shown in figure 16. More recently, the piezoelectric coupling of SAWs with charge carriers in other 2D materials has also been explored: SAWs have been used to modulate carriers within molybdenum disulphide [11,120,121], as illustrated in figure 17, and black phosphorus [122,123]. These materials, which have inherent bandgaps, are particularly attractive for optoelectronics, and their integration with SAWs has the potential to improve device performance and provide new device functionality.
Materials challenges. Many of the challenges associated with the integration of 2D materials and SAW devices are common to the development and exploitation of 2D materials more generally. For example, many SAW studies have been based on mechanically exfoliated flakes, which tend to be of high quality (low numbers of defects) and therefore, for example, have high electron mobility.
However, such flakes tend to be only a few tens of micrometers in size, whereas applications require scalable device architectures that are cost effective. Some materials, such as graphene and hexagonal boron nitride (h-BN), can be grown by techniques such as chemical vapour deposition and obtained commercially in large areas (on the scale of cm²). These large-area sheets can then be transferred onto SAW substrates using relatively well-established processes. In contrast to flakes, however, 2D materials grown this way are polycrystalline, with many of their properties defined by defects associated with the grain boundaries. In addition, the transfer process itself affects the quality of the graphene, introducing wrinkles and tears, and also leads to the device processing being somewhat irreproducible. Direct epitaxial growth of some 2D materials onto SAW substrates, such as quartz and lithium niobate, is possible [120], but has received much less attention than growth of these materials on more conventional substrates. A key challenge is therefore how to reproducibly obtain large-area, high-quality 2D materials on SAW substrate materials such as quartz and lithium niobate.
Device architectures. Fully exploiting the properties of 2D materials, for example the ability to modulate the conductivity of graphene using an applied gate voltage, will also require the further development of thin-film architectures [14] so that conducting substrates can be used as a back gate. The large surface area of 2D materials also often means that the environment can dramatically affect their properties. The significant effect of water on the conductivity of graphene can, for example, be exploited in a SAW humidity sensor, but it can also reduce the consistency and reliability of other graphene-based SAW devices. Encapsulation of the active layer will therefore often be required to isolate the 2D materials from the environment.
Materials and architecture. Advances in materials growth will not only improve the quality and reproducibility of 2D materials in SAW devices, but will also open up new avenues of research. For example, encapsulation of graphene in hexagonal boron nitride is known to reduce the effect of the environment on the graphene, leading to longer electronic mean free paths that are of the same order as SAW wavelengths (a few micrometers; see the estimate below). Such large mean free paths will allow ballistic effects to be exploited in future SAW devices, and will also allow phenomena that have so far only been predicted theoretically, for example SAW-mediated optical coupling to plasmons in graphene, to be demonstrated experimentally. The electrically insulating top surface provided by the h-BN also provides a means of incorporating other structures, such as metallic metamaterials to increase the efficiency of optical coupling, into such devices. Heterostructures based on the layering of different 2D materials are also a promising route for the development of photodetectors and light-emitting diodes, and the direct growth of these structures will allow further integration of such heterostructures with SAWs [122]. On the other hand, very little work has been carried out investigating how the relatively unusual substrates common in SAW devices might affect the properties of the 2D materials. For example, lithium niobate is highly pyroelectric, and changes in device temperature could induce doping in the 2D materials; further study of such effects will be important for the realisation of practical devices.
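For the estimate referred to above, the SAW wavelength follows directly from the substrate's sound velocity and the operating frequency; the velocity and frequency used here are typical illustrative values rather than parameters of any specific device described in this section:

\[ \lambda_{\mathrm{SAW}} = \frac{v_{\mathrm{SAW}}}{f}, \qquad \text{e.g. } v_{\mathrm{SAW}} \approx 4000\ \mathrm{m\,s^{-1}}\ \text{(lithium niobate)},\ f = 1\ \mathrm{GHz} \;\Rightarrow\; \lambda_{\mathrm{SAW}} \approx 4\ \mu\mathrm{m}. \]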
Future challenges. Finally, most research to date has focused on the piezoelectric coupling of SAWs with charge carriers in 2D materials. However, the distortions caused by the mechanical coupling of the SAW will also affect the properties of the material. This could be particularly important if 2D materials are combined with emerging phononic structures (for example, see [124]), where SAW displacements can be highly localised, to create novel devices, such as cavity-based sensors, using the 2D material as the sensing element. The potential role of 2D materials in SAW microfluidics, exploiting 2D materials for sensing, filtration or fluid control, is also just beginning to be explored.
Concluding remarks
Work in this area has so far focused on a relatively small fraction of the huge variety of known 2D materials, which includes the graphene family (e.g. h-BN and silicene), transition-metal dichalcogenides such as molybdenum disulphide, metal carbides, and metal halides. SAWs can be used to probe the properties of these materials, and to provide a test-bed for the exploration of new phenomena. Over the last couple of decades, there has also been considerable interest in the use of SAWs for sensing, quantum information, and microfluidics. Exciting future research common to all these areas is likely to be the incorporation of 2D materials into resonant elements, whether optical, mechanical, or fluidic (or combinations thereof), and the use of SAWs to probe and control such resonant systems. Such integration could, for example, lead to highly sensitive sensors (section 12), or to new devices for quantum technology (see sections 2-8).
Status
The interaction of acoustic waves and magnetic excitations (spin waves) through magnetoelasticity was proposed in the late 1950s, when Kittel showed that their resonant coupling under conditions of equal frequency and wave-vector leads to mixed magnon-phonon modes. The acoustic excitation of GHz ferromagnetic resonance (FMR) modes, or, conversely, the generation of GHz phonons by spin waves excited using radio-frequency (RF) magnetic fields, was studied in the following decades. The research then mainly focused on nickel and yttrium iron garnet (YIG), moderately magnetostrictive but well-understood ferro-/ferrimagnets. This coupling was then implemented in the field of electronic device engineering, for instance to turn surface acoustic wave (SAW) delay lines into magnetically tunable RF components or field sensors [125]. The strong field-induced variations of the acoustic wave velocity are indeed particularly well suited to the detection of low-amplitude, low-frequency magnetic fields, which can be challenging using other magnetometry techniques. The past ten years have seen a clear revival of the topic, with a much stronger focus on potential applications in the fields of magnetic data storage, spintronics and magnonics. Information is then encoded in the magnetic state of micro- or nano-structures, the spin polarization of electrons, or the amplitude/phase of spin waves. With their low attenuation, their typical frequencies of the order of magnetic precession frequencies, and their power flow confined to the surface, SAWs rapidly emerged as a relevant tool. The effective RF field they induce tickles the magnetization into ferromagnetic resonance.
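The resonant coupling condition mentioned at the start of this section (equal frequency and wave-vector) can be stated compactly; the symbols below are generic, with the subscripts SAW and SW denoting the acoustic and spin-wave branches, and are introduced only for this summary:

\[ \omega_{\mathrm{SAW}} = \omega_{\mathrm{SW}}(k_{\mathrm{SW}}), \qquad k_{\mathrm{SAW}} = k_{\mathrm{SW}}. \]

When both conditions are met, the SAW strain (and the effective RF field it generates) resonantly drives the magnetization precession.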
This can in turn lead to the generation of pure electron spin currents in the presence of a heavy-metal top layer [126] (figure 18(a)), or to the full reversal of the static magnetization, provided a non-linear coupling regime is reached [127] (figure 18(b)). SAW-driven 'straintronics' [128] has thus joined the likes of spintronics (using currents), valleytronics (using valley-dependent properties), caloritronics (using temperature gradients) and multiferroic-based systems (using electric fields) for the non-inductive control of magnetization. Contrary to local electric-field-driven switching, however, SAW-driven switching offers the possibility of high efficiency and of remote control of magnetic bits using waveguiding and focusing, or of reconfigurable addressing using interference patterns, without the need for local metallic contacts.
Current and future challenges
The interaction between SAWs and magnetization is now fairly well understood, but much remains to be done to harness it in actual magnetic architectures. SAWs could, for instance, be the missing ingredient in magnonics, which is for now limited by the fact that the attenuation distance of spin waves is of the order of a few micrometers for most ferromagnetic materials, limiting prototypes to extremely low-damping materials like YIG. Remotely excited SAWs could act as a relay for spin waves through resonant coupling, and locally modify their amplitude or phase. The optimization of mixed phononic-magnonic crystals, up to now mainly studied numerically [129], could be key for spin-wave-based computational circuits. Concerning the manipulation of the static magnetization, localized switching has now been demonstrated using stationary and focused SAWs [127,130]. Novel architectures which would most benefit from magnetoacoustic coupling now need to be elaborated, since current schemes will not be competitive for dense storage devices. The smallest accessible switching size is indeed 'large', i.e. of the order of the SAW wavelength, a few hundred nanometers at best. SAW switching could, for instance, provide an interesting alternative for the remote addressing of moderately dense magnetic bit arrays, provided the transducer design is adapted, using wavefront shaping for example. The resonant, non-linear behaviour of magnetization dynamics subjected to large-amplitude SAWs could also be exploited beyond switching, for instance in magnetic solid-state neuromorphic systems. These require tunable non-linear oscillators which can act as artificial neurons, as recently demonstrated using current-driven spin-transfer-torque nano-oscillators [131]. SAWs could enable inter-neuronal synchronization via magnetoacoustic coupling, or even drive the magnetization dynamics, offering the added possibility of surgically addressing a given 'neuron' within an assembly. While most of the above considerations focus on magnetic applications, magnetoacoustics is also highly relevant to the field of phononics. Breaking time-reversal symmetry with magnetism, for instance, leads to non-reciprocal acoustic propagation, with the tantalizing prospect of making phononic diodes. This was studied in the 1980s, but could be revisited in thin magnetic films making use of the Dzyaloshinskii-Moriya interaction [132], or of astute patterning to engineer the relevant strain components (figure 19). In graphene-like phononic crystals, magnetism could moreover lead to field-tunable topological protection of SAW propagation: an ultrasonic structure made immune to defects [133].
Elegant demonstrations of this have been shown at low frequencies and large dimensions (Hz/cm) using moving elements, but experiments are still lacking in the realm of the nm/GHz.

Advances in science and technology to meet challenges
SAW-driven magnetization switching has up to now been demonstrated using resonant interaction at low temperature [127], or fairly inefficient non-resonant coupling in room-temperature ferromagnets [130]. This results from the difficulty of matching SAW frequencies (typically a few GHz) and magnetic ones (often over 10 GHz) at reasonable magnetic fields. Materials presenting both high magnetostriction and low precession frequencies must now be sought to enable low (or zero) field resonant switching at room temperature. Moreover, while the industrial sector has optimized very magnetostrictive materials such as Terfenol-D in bulk form, there is a real need for the synthesis of high-quality crystalline magnetostrictive materials in thin-film or multi-layer form. Last but not least, these highly magnetostrictive materials should be grown on efficient piezoelectric films for the generation of high SAW amplitudes, which demands efforts in synthesis optimization and characterization of these hybrid multiferroic heterostructures. On the device side of things, magnetoacoustics has so far only exploited a small portion of the elaborate SAW transducer designs optimized by electronic engineers. These can now be implemented in the search for broader bandwidths, more efficient electro-mechanical transduction, but mostly, higher SAW frequencies. This will decrease minimum addressable dimensions and boost the coupling efficiency to magnetization or magnetic defects, making SAWs well suited for sensing NV centers (see section 7) or inducing magnetocaloric effects, as has been shown on MnAs [134].

[Figure caption, from [127], © IOP Publishing Ltd, all rights reserved: Two counter-propagating SAWs are sent onto a magnetic thin film of (Ga,Mn)As. At the magneto-acoustic resonance, the SAW drives the precessional reversal of magnetization, as evidenced by the magneto-optical contrast. In a stationary geometry, magnetic domains λ_SAW/2 wide can be created and positioned precisely by tuning the relative phase of the exciting bursts.]

Finally, the race towards higher frequencies and smaller feature sizes will entail the need for experimental tools other than optics to study the magnetoelastic interaction, be it from the point of view of strain or magnetization dynamics. Synchrotron X-ray techniques, such as PEEM or XMCD, are viable solutions [135], but they remain cumbersome to implement. Eventually, local electrical probing of the magnetic state by magneto-resistive effects and near-field techniques compatible with SAW excitation should prove better adapted to lab-ready approaches.

Concluding remarks
SAWs have proven to be a very useful tool to probe elementary excitations (see sections 1-6 and 8), and magnons are no exception. Far from the academic world, SAWs are commonly used in microelectronic devices, sensors and filters, and are as such a mature technology allowing low-power, high-efficiency and broadly tunable operation. The time is now ripe to harvest the benefits of the fundamental studies of magnetoacoustic interactions of the past decades, and implement these effects into magnetic field-tunable phononic devices, or strain-controlled magnetic structures.

Status
For about 40 years, surface acoustic wave (SAW) devices have been key components of wireless data transmission systems.
They were first applied in high volumes as intermediate-frequency (IF) filters in TV receivers. In comparison with filters based on lumped inductors and capacitors, they were much smaller and required no manual tuning. The same advantages led to their pervasive use in the digital mobile phone systems introduced in the 1990s. One may state that wireless digital communication systems would not have evolved in the way they did without SAW IF filters, RF filters, duplexers, and multiplexers for base stations and mobile phones. Despite the efforts to standardize global communication, several incompatible systems with different frequency bands and different modulation schemes coexist today. The availability of miniaturized frequency-selective components was a prerequisite for the development of multi-band, multi-standard mobile phones as required by the markets. As it turned out, microacoustic devices are the only technology capable of providing this frequency selectivity at low enough cost and with sufficiently small shape factors. As a result, manufacturers today ship billions of units per year. The technological development has not come to an end yet because the demand for wireless data transmission continues to grow. To accommodate the data traffic associated with human communication, audio and video file distribution, machine-to-machine or vehicle-to-infrastructure communication, and others, regulatory bodies are allocating ever more frequency bands to digital communication systems. Moving from the current fourth generation of digital communication systems (4G, also called LTE (long-term evolution)) to the upcoming fifth generation (5G) will require even more RF filters and multiplexers with even more highly developed characteristics. As in the past (see figure 20 for an example), the challenges will be met by improvements in the filter designs, the material systems, the fabrication methods, and the packaging and integration technologies. Current and future challenges 33 frequency bands with center frequencies from 750 MHz to 3.5 GHz have been reserved for 4G networks using frequency division duplexing (different transmit, Tx, and receive, Rx, frequencies). As only a subset of the bands is available in any given region, a world phone must support several bands and standards. This requires two bandpass filters for each link, combined into a duplexer: a Tx filter on the power amplifier output side and an Rx filter on the low-noise amplifier input side ( figure 21(a)). A simple dual-band phone requires two duplexers and a switch ( figure 21(b)), whereas a modern multi-band phone contains dozens of filters and switches. This explains the pressure on component suppliers to miniaturize their filters and to combine them into modules together with matching-network elements, amplifiers or switches. A modern technique called carrier aggregation (CA), i.e. the simultaneous transmission of data in several frequency bands and over the same antenna to increase the data rate, leads to even more RF filters in the frontend [141]. They must now be combined into 1-to-n multiplexers in packages as small as possible (figure 21(c)). It no longer suffices to design an excellent filter. Instead, it must be an excellent filter in the presence of other filters and the electrical loading, parasitics, and packaging effects this entails. Some frequency bands are so close to each other that the filter passband skirts must be very steep to ensure a sufficient stopband attenuation in the adjacent band. 
They must be even steeper to make up for sample-to-sample variations of the filter center frequency due to fabrication tolerances and for the frequency-shifting effects of influence quantities, such as temperature. In the future, as in the past, the required steep passband skirts will only be achievable with filters composed of interconnected high-Q one-port resonators, but the fabrication tolerances and the temperature sensitivity of the devices will have to be reduced. Further challenges are the reduction of the filter passband ripple (required by higher-order modulation schemes), the reduction of the passband attenuation (to bring losses down to avoid self-heating, to reduce power consumption, to improve the signal-to-noise ratio, etc.), the reduction of nonlinearities (required because CA causes many mixing products to fall into usable frequency bands), an improved power durability for Tx filters (which would enable further miniaturization), and the production of filters for frequencies above 3 GHz.

[Figure 20 caption, partially truncated in extraction: ... [137,139]), duplexers (triangles [138]), and 1-to-4 multiplexers or quadplexers (square [140]). All areas have been normalized to the number of filter functions in the package. Companies have succeeded in continuously shrinking the footprint between 1990 and 2010. The progress has slowed down since then. It looks as if substantial further size reductions will require technological innovations.]

Advances in science and technology to meet challenges
Several of the mentioned challenges are linked to the quality factors of the resonators making up a SAW RF filter. Higher Q means smaller insertion attenuation and steeper passband skirts. The Q-factor is determined by the resonator design and the materials used (substrate, metallizations, additional layers). The piezoelectric substrate is also a main contributor to the temperature characteristics of a filter. The two-decade-long dominance of single-crystal LiTaO3 as a substrate material appears to have expired. Its losses, temperature behavior and electroacoustic coupling coefficient k2 all appear insufficient in view of current filter requirements. Instead, filters are built on LiNbO3 with its large k2, and an additional SiO2 overlay with the opposite temperature behavior provides the temperature compensation (temperature-compensated SAW, TCSAW) [142]. New systems with a thin piezoelectric layer over a dielectric substrate, such as silicon covered with SiO2, have been developed. Such layered systems have been shown to reduce losses, or increase Q, in combination with a very promising temperature stability, paving the way to filter solutions of unprecedented overall performance [143,144]. More results in this direction are to be expected. A paradigm shift may lie ahead in the fabrication. In layered structures, the outstanding reproducibility provided by lithography may have to be supplemented by wafer-based processes, such as ion-beam etching, for frequency-trimming purposes, as is already common for bulk-acoustic-wave (BAW) filters. Currently, BAW filters are superior to SAW filters at higher frequencies (above, say, 2 GHz) in terms of resonator Q. Advanced devices, such as TCSAW and piezolayer-based filters, may tip the scales in favor of SAW filters again. It remains to be seen, however, if SAW filters can really conquer the frequency range above 3 GHz.
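To make the temperature-sensitivity point concrete, here is a minimal back-of-the-envelope sketch. All numbers are assumed, illustrative values, not specifications from the text: the center-frequency drift is roughly f0 × TCF × ΔT, so a temperature-compensated stack directly relaxes how much margin the passband skirts must absorb.

# Illustrative sketch with assumed values (not from the text): frequency drift of a
# SAW resonator with temperature, f_drift ~ f0 * TCF * delta_T.
def freq_drift_mhz(f0_ghz: float, tcf_ppm_per_k: float, delta_t_k: float) -> float:
    """Center-frequency shift in MHz for a resonator at f0 over a temperature excursion."""
    return f0_ghz * 1e3 * tcf_ppm_per_k * 1e-6 * delta_t_k

f0 = 2.0  # GHz, assumed filter center frequency
for label, tcf in [("conventional substrate (assumed TCF ~ -40 ppm/K)", -40.0),
                   ("TCSAW with SiO2 overlay (assumed TCF ~ -10 ppm/K)", -10.0)]:
    print(f"{label}: drift over a 100 K excursion ~ {freq_drift_mhz(f0, tcf, 100.0):+.1f} MHz")

With the assumed numbers the uncompensated resonator drifts by about 8 MHz, a budget that has to come out of the guard band to the adjacent frequency band; the compensated stack cuts this to about 2 MHz.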
Filter suppliers already have the ability to quantitatively describe wave propagation in layered structures, loss reduction by local mass loading, electromagnetic coupling resulting from miniaturization, temperature effects, power durability, and nonlinearities. Further progress in modelling and simulation skills will result in more sophisticated designs, which will be realized with advanced substrates and fabrication processes. This coevolution of modelling, materials and processes will lead to more complex SAW structures with outstanding performance.

Concluding remarks
For four decades, microacoustic devices have played a key role in the development of wireless systems. On the one hand, they have benefited from the phenomenal success of mobile phones and of smartphones. On the other hand, telephone and infrastructure suppliers have benefited from the capabilities of microacoustic devices in that they could utilize the available frequency spectrum in the most efficient way. The current and future complexity of multi-standard phones will most certainly increase the pressure to develop more complex microacoustic systems in addition to the pure BAW and pure SAW filters that have come to dominate the field until now. Whatever the new filters look like, they will have to contribute to an increase in the effective data rate and to a functional densification of digital communication systems. This paper is dedicated to our dear colleague Dr Dmytro Denysenko, who sadly passed away.

Status
Sensing is one of the most important tasks for the communication between two or more entities. In our technical world, sensing very often means that a system or a machine probes its own state or the conditions of the environment, and this information is then transmitted to another, very often decision-making, system, or serves to trigger a reaction from a human. Sensors thus play an important role in the function of complex systems and are extremely widespread. So far, sensors are mostly restricted to measuring one or more properties of an environment or a device, but more recently also the status of vital functions of living beings. Wireless transduction of the sensor output, together with the endless opportunities for 'artificial intelligence', will open a wide field of applications for sensors of all kinds, one specific challenge being autonomous transportation. Historically, sensors were first used to measure quantities important in daily life, like temperature, pressure and weight. Then, sensors were developed to extend the sensing capabilities of humans. These included light-sensitive devices in spectral ranges where we cannot see, very sophisticated hearing devices, tactile sensors, and sensors mimicking our taste and smell perception, among others. Especially the latter, olfactory sensors, have a very important role in the detection of chemicals like poisons, pollutants, chemical and biological warfare agents and explosives, but lately also in breath analysis for health-related issues. Sensors are also becoming more and more important in automobiles, for the communication between the driver and the car as well as for autonomous driving machines, and finally in so-called smart homes and digital industrialization [145]. In any case, a sensor typically consists of an active sensing device and a 'transducer', picking up a sensor response like a change in conductivity, volume, color, etc., and transducing it into a (machine-)readable quantity like a voltage or similar.
In many cases, SAW sensors [146] play an important role in the fields of sensing listed above, as they combine an outstanding sensitivity and the potential for wireless interrogation.

Current and future challenges
A typical SAW sensor layout for the detection of specific gases is shown in figure 22. Here, we depict the combination of a SAW device with three interdigital transducers (IDTs) [147] and a gas-sensitive thin-film layer. The center IDT generates a bidirectional SAW propagating along the depicted directions. One of the sound paths on the delay lines is covered by a thin film which is very selective in absorbing a specific gas. The selectivity in this case is a result of the tunable pore size and chemistry of a highly porous material like a metal-organic framework (MOF) [149]. The phase difference between the SAWs is proportional to the mass-loading difference of the delay lines [148]. Sophisticated high-frequency signal-processing techniques can be applied to measure such phase differences extremely sensitively, resulting in an extremely sensitive mass-detection chip. Apart from the development and availability of future 'smart' coating materials, the performance of SAW devices for sensing also crucially depends on many other variables, like the choice of the piezoelectric substrate, attenuation issues in liquids, temperature, frequency, and design and fabrication for optimum response. These are important parameters for making SAW a competitive sensor transducer. The advances in modern materials science, in the physical as well as in the chemical and biochemical communities, however, leave a lot of room for confidence for future generations of scientists.

[Figure 22 caption: Simple gas-sensing SAW sensor consisting of a gas-sensitive MOF layer with adjusted pore sizes and a highly sensitive SAW double delay line to convert the mass loading of the chip into a measurable phase difference between the test and the reference SAW. This example is sketched to be able to differentiate between CO2 and N2 molecules in a mixture. Reprinted with permission from [147]. Copyright 2017 American Chemical Society.]

Advances in science and technology to meet challenges
All sensors have in common that they are only as good as they are selective and sensitive. It is not very helpful if a gas sensor, say, is more sensitive to temperature changes in the environment than to the presence of the target gas. Also, if one looks, for instance, for NOx detection, the sensor should be very specific and not become easily disturbed or even blinded by the presence of other gases. This is a key challenge which can only be tackled by the design of very specific transducer materials, such as MOFs. The second challenge is the sensitivity. For a SAW device, for example, the sensitivity increases strongly with increasing frequency. Hence, it is very desirable to operate the sensor at frequencies which are as high as possible, which have to be, of course, compatible with the thin-film transducing layer on top. Here, highly porous nanosystems like zeolites [150] or, more favorably, the above-mentioned MOFs seem to be very promising candidates because of their unprecedented degree of tunability. Not only can the pore size be adjusted over a wide range, they can also be functionalized to become chemically sensitive to adsorbates. Recently, there have even been reports on the electrical switchability of MOFs, thus enabling some kind of built-in adaptivity and 'smartness' [151].
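As an illustration of the two points above, that the read-out is a differential phase and that the sensitivity grows strongly with frequency, here is a minimal sketch. The mass-loading form Δv/v ≈ -c_m·f·ρ_s is the commonly used Wohltjen-type approximation from the SAW sensing literature, and every numerical value below is an assumption for illustration only, not a parameter taken from the text.

import math

# Hedged, illustrative sketch: differential phase of a SAW double delay line whose
# sensing path carries an adsorbed areal mass density rho_s (kg/m^2).
# Assumptions: phase along a path phi = 2*pi*f*L/v, and a mass-loading velocity
# perturbation dv/v ~ -c_m * f * rho_s (Wohltjen-type relation; c_m is a material
# constant), so the differential phase scales roughly as f**2 * rho_s.
def delta_phase_deg(f_hz, L_m, v_m_s, c_m, rho_s):
    dphi = (2 * math.pi * f_hz * L_m / v_m_s) * c_m * f_hz * rho_s
    return math.degrees(dphi)

v = 3990.0    # m/s, assumed SAW velocity (128deg YX LiNbO3-like)
L = 2e-3      # m, assumed length of the coated sound path
c_m = 1.3e-7  # m^2*s/kg, assumed mass-sensitivity constant
rho_s = 1e-8  # kg/m^2 (= 1 ng/cm^2) of adsorbed gas, assumed
for f in (100e6, 400e6, 1.0e9):
    print(f"f = {f/1e6:4.0f} MHz -> delta(phi) ~ {delta_phase_deg(f, L, v, c_m, rho_s):.4f} deg")

Because the phase shift grows roughly quadratically with frequency in this picture, quadrupling the operating frequency buys roughly a sixteen-fold phase response for the same adsorbed mass, which is why pushing to higher frequencies (compatible with the coating) is so attractive.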
Concluding remarks
Our ever-expanding, technology-driven world will rely more and more on the interaction between humans and machines, be it smart homes, smart cars or even the monitoring of our vital parameters and functions. Remote, wireless or even battery-less operation will be of paramount importance. SAW sensors will be the interface between us and the many artificial systems surrounding us in the future. Equipped with and connected to artificial intelligence, machine learning and the internet of things, smart sensors will broaden our own horizon of experience and hopefully lead to a safer environment.

Acknowledgments
This article is based on our longstanding work and experience with SAW devices and sensors in general. It has been funded by numerous agencies like the German Research Foundation DFG and the German Federal Initiative of Excellence. We also gratefully acknowledge endless discussions with many of our colleagues and friends whom we were privileged to work with over the last two decades.

Status
Acoustofluidic technology [152] for the handling of particles and fluids on the sub-mm scale by ultrasound in closed lab-on-a-chip systems has been used for many different purposes, primarily targeting life-science-related applications and microfluidics. Recent examples presented at the Acoustofluidics 2018 conference [153] include separation, sorting, washing, mixing, patterning, enrichment, aggregation and 3D culturing of cells. Typical objects being manipulated are biological cells, bacteria, micro/nano-beads, droplets, bubbles, vesicles, or the fluid medium itself. Actuation is mainly performed with bulk-acoustic-wave (BAW) or surface-acoustic-wave (SAW) technology. During the last one to two decades, the field has gradually moved from proof-of-concept demonstrations of unit operations to application-driven device designs and platform developments for specific user-defined needs. Comparing SAW and BAW technology, this transformation of the research field is still young for SAW, partly explained by its greater complexity in terms of the intricate acoustic SAW-fluid interaction and the use of a much wider frequency range. More fundamental work focusing on understanding SAW technology has yet to be done. BAW-based acoustofluidic technology, on the other hand, has recently been commercialized by companies targeting the life-science industry, such as AcouSort (Lund, Sweden), FloDesign Sonics (Wilbraham, MA, USA), and Thermo Fisher Scientific (Waltham, MA, USA), the latter company supplying the acoustic focusing cytometer Attune. Thus, we may expect upcoming SAW-based acoustofluidic applications and products to be launched in the future, in addition to early commercialization attempts such as the ArrayBooster and PCR-in-drops platforms by Advalytix (Brunnthal, Germany). The theory for acoustofluidics is maturing (Bruus [153]), now covering resonances (Baasch [153]), the acoustic radiation force on suspended particles (Zhang [153]), acoustic streaming (Bach [153], Qiu [153]), and the elastodynamics of the walls (Reichert [153]). Further development of the theory is necessary to fully comprehend the fundamental mechanisms behind acoustofluidics, and to obtain sufficient predictive power to develop design tools for making improved devices. Examples of current improvements in theory include whole-system 3D modeling (Skov [153]), acoustic tweezers (Thomas [153]), and inhomogeneous fluids (Bruus [153]).
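For orientation, the single-particle result underlying much of the radiation-force work cited above is the textbook expression for a small, compressible sphere (radius a much smaller than the wavelength) in an inviscid 1D standing wave; it is quoted here from standard acoustofluidics usage, not from the roadmap text itself:

F_{\mathrm{rad}} = 4\pi\,\Phi(\tilde{\kappa},\tilde{\rho})\,k\,a^{3}\,E_{\mathrm{ac}}\,\sin(2kz),
\qquad
\Phi(\tilde{\kappa},\tilde{\rho}) = \frac{1}{3}\,\frac{5\tilde{\rho}-2}{2\tilde{\rho}+1} - \frac{\tilde{\kappa}}{3},
\qquad
E_{\mathrm{ac}} = \frac{p_{a}^{2}}{4\rho_{0}c_{0}^{2}},

where ρ̃ = ρ_p/ρ_0 and κ̃ = κ_p/κ_0 are the particle-to-fluid density and compressibility ratios, k the wavenumber, z the position along the standing wave and p_a the pressure amplitude. The a³ scaling of this force, against the roughly linear-in-a scaling of the streaming-induced drag, is what makes nanoparticles, and dense suspensions where particles interact, the hard cases.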
However, there is still an unmet demand for improved theoretical understanding of particle-particle interactions at the high particle concentrations often found in biological solutions.

Current and future challenges
For the main application area, acoustofluidics for the life sciences, a challenge is to further improve the robustness, automation and throughput of devices and methods. Still, manual calibration procedures are often used, based on visual inspection through a microscope; see figure 23(a). The integration of acoustic handling systems with detection and readout systems needs to be addressed, as well as improving the electrical-acoustical matching, optimizing the energy consumption of the devices, and investigating and developing new materials for efficient substrate-fluid coupling and delivery/control of acoustic energy to the intended places. It is well known that exposing cells and other biological samples to ultrasound may cause various effects [155]. Biological effects of interest, such as viability, proliferation, stress and function of cells, are therefore important to measure. Here, generalized conclusions based on previous studies are risky, since ultrasound may cause a variety of physical effects. Different biological samples may also respond differently to a specific set of physical parameters. For example, cells respond differently to standing waves and propagating waves even when frequency, amplitude and energy are the same [155]. Recent studies targeting the cellular state in acoustofluidics have primarily focused on demonstrating the absence of any detrimental effect of significance. However, a future challenge is also to investigate whether acoustic handling may cause beneficial effects on cells or other biological samples. Here, an emerging application field is to use acoustofluidics for tissue modelling and engineering [156]. For this purpose, a recent example of a beneficial effect is the improved quality of cartilage tissue constructs gained by acoustofluidic biomechanical stimulation [156]. For SAW-based acoustofluidic technology, it is also important to extend viability studies to a broader frequency range (in particular to the range 10-1000 MHz), to a broader pressure amplitude range (beyond 1 MPa, decoupled from any temperature-related effects [154]; see figure 23(b)), and to cells being exposed to the more complex acoustic fields that are realizable in SAW devices. Finally, it is also of interest to apply acoustofluidic technology to fields other than the life sciences. Here, possible areas are, for example, acoustofluidic-based liquid electrolyte recirculation in batteries (Friend [153]), separation of minerals and fossil pollen in geology research, and air-borne filters for nanoparticles and aerosols.

Advances in science and technology to meet challenges
The manipulation of nanoparticles (bacteria, exosomes and viruses) is an important challenge. Progress is reported using electrodes tilted at an angle to the flow direction in SAW devices [157], as well as using the suppression of BAW-induced streaming in inhomogeneous media (Bruus [153]). Furthermore, the first single-cell acoustic tweezing has been obtained by introducing a new transducer design: spiral-formed electrodes on a SAW substrate [158]. Another radically new technique concerns the handling of miscible concentration profiles of molecules or nanoparticles, as recently proposed in a theoretical study [159], illustrated in figure 24.
However, regardless of the specific acoustofluidics setup, good numerical modeling of streaming is needed to meet the scientific and technological challenges involving nanoparticles. A step towards meeting this challenge is the recent theory of pressure acoustics with viscous boundary layers [160], which, compared to a full direct numerical model, reduces the computer memory requirement by a factor of 100 or more by treating the viscous boundary layers analytically. This allows for 3D modeling of streaming in microscale acoustofluidics. Currently, a limiting factor for fully exploiting the application potential of acoustofluidics is the cost of the glass or silicon components, used because of their high acoustic contrast relative to water. In addition, processing of microstructures within these materials is also costly, time-consuming, and sometimes complicated. These limitations are especially severe for applications intended for point-of-care clinical use, where the acoustic separation unit must be a single-use consumable. In a recent theoretical study with experimental support [161], the principle of whole-system ultrasound resonances was introduced to identify and characterize well-suited resonances in all-polymer devices. This principle, combined with off-stoichiometry thiol-ene polymers having tunable acoustic parameters, may point to a way to overcome the challenge of designing and fabricating good, polymer-based acoustofluidic devices.

Concluding remarks
SAW-based acoustofluidics is undergoing a promising scientific and technological development. Given the robust and controllable actuation of the SAW technology, in combination with its ability to support complex acoustic fields over a wide frequency range, it is likely that within specific areas it will surpass the less complex BAW technology already used in the first commercial products. Furthermore, SAW devices for acoustofluidics may be significantly improved in the future by taking whole-system resonances in three dimensions into account in the design process (see [161]). Here, the whole system includes, e.g. the fluid sample, droplet, channel or chamber as well as any supporting solid structure. As a result, we may see handheld or wearable battery-driven devices, and we may find new interesting application areas of acoustofluidics based on SAW technology. To achieve this, efforts in theory development, numerical modelling, and experimental development all need to be accomplished.

Status
The use of surface acoustic waves (SAWs) has become key in the toolbox of methods available to manipulate fluids, opening up a range of microfluidic applications in medical diagnostics, drug delivery, cell sorting, tissue engineering and life-science research. Despite the novelty of many of the methods being proposed, it is perhaps surprising that the first practical demonstration of the interaction of SAWs with fluids was made nearly 30 years ago by Shiokawa et al [162], who demonstrated the liquid actuation functions of pumping and nebulisation by a Rayleigh wave on a piezoelectric lithium niobate (LiNbO3) wafer (figure 25(a)). White et al [163], working in the USA, showed that piezoelectric ZnO thin films on a silicon nitride membrane (or plate) could also be used to create a liquid pump, this time applying Lamb waves to actuate the fluid.
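As background for the boundary-layer theory mentioned above, here is a hedged illustration with assumed property values (they are not figures taken from the text). The acoustic viscous boundary layer has thickness δ = sqrt(2ν/ω), and a direct numerical model must resolve this sub-micrometre skin along every wall of a channel that is orders of magnitude larger; removing that meshing burden analytically is precisely what the boundary-layer treatment achieves.

import math

# Hedged sketch with assumed values: viscous boundary-layer thickness
# delta = sqrt(2*nu/omega) next to a wall, compared to a typical channel size.
def boundary_layer_um(nu_m2_s: float, f_hz: float) -> float:
    return math.sqrt(2.0 * nu_m2_s / (2.0 * math.pi * f_hz)) * 1e6

nu_water = 1.0e-6   # m^2/s, kinematic viscosity of water near room temperature (assumed)
channel_um = 100.0  # assumed channel height for comparison
for f in (2e6, 20e6, 200e6):
    d = boundary_layer_um(nu_water, f)
    print(f"f = {f/1e6:6.0f} MHz: delta ~ {d:.3f} um (channel/delta ~ {channel_um/d:.0f})")

At the higher, SAW-type frequencies the layer shrinks below 100 nm, which makes brute-force 3D meshing even less tractable and the analytical treatment correspondingly more valuable.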
More recently, work by Wixforth's group [164] showed that Rayleigh SAWs can play a particularly powerful role when manipulating very small (nl-µl) microfluidic volumes of liquid, as the majority of the energy associated with the SAW is confined at the piezoelectric surface and can be efficiently dissipated into the liquid. The smaller the volume, the greater the proportion of its volume that 'feels' this dissipated energy, and thus the more efficient the actuation process (whether this be movement or heating). One further advantage of using SAWs in such systems results from mechanical forces, which cause convective streaming within the liquid and also, depending upon the nature of the induced flow, may enhance mass transfer for the rapid mixing of reagents [165,166]. This latter phenomenon saw the first commercial applications of SAWs in life-science instrumentation. Extensive studies have now also demonstrated that SAW-based acoustofluidics provides the unique ability to manipulate liquids (and particles/cells within them) without contact (offering a contamination-free solution) and in a biocompatible and programmable way [166] (see also section 13). Such capabilities place SAW as a technique of choice to overcome many challenges in fluid handling within microfluidic systems and deliver its long-standing promises. Further advances, which may lead to new applications in wearable diagnostics and ubiquitous sensors and actuators, include decreasing the cost of materials used (e.g. by using thin piezoelectric films on low-cost polymer sheets and foils) [167], or increasing functionality (by creating bendable/flexible functions and new flow profiles).

Current and future challenges
For many applications, LiNbO3 has been the piezoelectric material of choice because it is very consistent in its behaviour and response (e.g. its piezoelectric coefficient is large and predictable for different crystal orientations), despite being relatively expensive, difficult to process and fragile. This has allowed the creation of very complex field structures [168], translating approaches from optical wave shaping (also termed wavefront engineering) into acoustofluidics. To realise new opportunities of SAW-based acoustofluidics, there is a need for new strategies to further integrate the piezoelectric actuator with other sensing and microfluidic functions to enable new low-cost and low-power solutions, opening up new challenges in fluid mechanics and acoustics. Over the last few decades, the liquid has often been processed as a 'wall-less' droplet placed directly onto the piezoelectric surface to maximise the energy transfer (figure 25(a)). The fluid may also be contained within an elastomeric microchannel with a defined geometry (figure 25(b)), with the possibility of allowing the reuse of the actuator [169]. However, the manufacturing and assembly of such devices are complex, limiting their practical applications. As an alternative, SAW manipulations can be produced on a thin disposable chip placed on the surface of the piezoelectric SAW substrate, which can act as a disposable biochip [170]. Such chips have come to be known as superstrates, as they sit in contact with the piezoelectric substrate. Their design can be further modified through the introduction of arrays of microstructured features in order to create phononic crystals [171], producing new complex acoustic fields, which can also be used to control liquid flows and interfaces [172].
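A compact way to picture the surface-to-liquid energy transfer described above is the Rayleigh angle of the leaky SAW, θ_R = arcsin(c_fluid/c_SAW). The sketch below uses assumed sound speeds for illustration only; they are not values quoted in the text.

import math

# Hedged sketch with assumed sound speeds: the angle at which a leaky SAW radiates
# its energy into a liquid resting on the substrate, theta_R = arcsin(c_fluid/c_SAW).
def rayleigh_angle_deg(c_fluid_m_s: float, c_saw_m_s: float) -> float:
    return math.degrees(math.asin(c_fluid_m_s / c_saw_m_s))

c_water = 1497.0       # m/s, speed of sound in water (assumed)
c_saw_linbo3 = 3990.0  # m/s, SAW velocity on 128deg YX LiNbO3 (assumed)
print(f"theta_R ~ {rayleigh_angle_deg(c_water, c_saw_linbo3):.1f} degrees from the surface normal")

With these assumed numbers the acoustic beam enters the liquid at roughly 22 degrees to the surface normal, which is the geometric origin of the directed internal streaming, and at high power the jetting and nebulisation, exploited in droplet-based devices.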
However, the physics of the complex interactions between the liquids, the newly shaped acoustic field and the different interfaces of the system can be challenging to model, predict and control. Another integration strategy is to deposit piezoelectric films, such as ZnO and AlN, onto a variety of substrates including silicon, metals, glass and plastics. This will provide new opportunities for integration whilst bringing about a dramatic decrease in material costs, opening the way for implementing integrated, disposable, or bendable/flexible lab-on-a-chip (LOC) devices [167]. However, there remain issues including their performance and reliability as well as the development of low-cost manufacturing methods.

Advances in science and technology to meet challenges
Acoustic waves generally manifest on timescales of microseconds and produce surface deformations on the piezoelectric wafer that may be below a few tens of nanometers. In contrast, the deformations of fluid surfaces generally respond on the order of milliseconds, with displacements that may be a few microns in size. The subsequent flows within the bulk of the liquids provide functionalities in seconds and over distances on the order of a millimetre and beyond. All commonly used approaches to simulate these phenomena, from finite-element and finite-volume to finite-difference time-domain methods, have to impose constraining boundary conditions to reach practical computational capability, which limits precise predictions. To advance the boundaries of our understanding of acoustofluidics beyond currently well-established wave and flow profiles, new analytical and modelling approaches will be required to bridge these spatio-temporal scales. As an example, new approaches that combine analysis in the frequency domain and the time domain [173] may reveal new behaviours, especially where complex rheological and surface properties are involved. In the field of advanced materials, the recent demonstration of the deposition of thin piezoelectric films onto a great variety of solid surfaces has opened up new avenues of development to enable us to implement acoustofluidic functionalities in deformable components (figure 26) [167]. These, in turn, could realise wearable lab-on-chips, able to process liquid samples close to or inside the human body. To date, this capability has been hindered by the high energy loss encountered in these flexible, 'soft' systems, limiting the physical reach of the waves to only a few wavelengths. New opportunities in thin plates (below the wavelength) and new modes of propagation and their combinations [174] may provide a promising avenue to overcome this limitation. In particular, controlling different (crystal) structure orientations by controlling deposition parameters [167], or integrating acoustic metamaterials with anomalous material properties, will provide the capability to generate complex wave patterns on a single substrate. This could enable the integration of actuation (e.g. for both medical diagnostics and therapy) and molecular sensing on a single, deformable and disposable substrate (see section 12 for details on SAW sensing capabilities). In this context, novel materials, such as piezoelectric doped graphene, may play a future role in such new LOC systems (see also section 9).

Concluding remarks
The field of SAW-based acoustofluidics has begun to reach maturity after an initial exponential growth that has spanned the last three decades.
The topic is now generating new practical applications in medical diagnostics and drug delivery, whilst providing biologists with new tools for life-science research based upon cell manipulation and sorting. A new impetus in the fundamental understanding of the physical processes across spatio-temporal ranges that span many orders of magnitude is still required to enable the techniques to be fully realised, enabling translation of capabilities demonstrated in laboratory settings into real-world settings. This process is likely to require the bridging of different communities and disciplines, a challenge which is not unique to this field, but which nevertheless needs to build upon existing knowledge with a shared vocabulary and cross-disciplinary collegiality.

Status
The application of SAWs to the life sciences arose in the early 2000s and is still a rapidly emerging field. Due to the various powerful possibilities demonstrated for stirring, mixing and pumping very small amounts of fluids acoustically (see sections 13 and 14), and the effect of acoustophoresis, the potential for more complex applications in cell manipulation has become obvious. While the idea of the manipulation of cells with ultrasonic standing waves is much older [175], the transfer from bulk acoustic waves (BAW) to SAWs, with wavelengths on the scale of cells, has revealed the actual power of the approach. The different fields, as illustrated in figure 27, can be categorized as (i) manipulation of the medium, mainly based on acoustic streaming, (ii) mechanically moving or trapping cells by acoustophoresis and (iii) employing both the mechanical and electrical properties of SAW to stimulate cells. While for the first two fields mainly Rayleigh waves are applied, most commonly generated on LiNbO3 substrates or ZnO films on transparent substrates, the latter field employs shear waves or mixed modes as well. Acoustic streaming has so far been employed to quantify cell adhesion in low volumes, thereby applying a wide range of shear forces simultaneously to single cells or whole cell ensembles [176]. Here, the cell-substrate interface probed is either on the chip itself or positioned opposite to it. Acoustophoresis applications employ pulsed travelling SAWs with high amplitudes for sorting applications in microchannels [177], standing wave fields for alignment and trapping, as well as phase detuning and chirped IDTs to precisely control cell-cell distances [178]. Selected studies elucidate compound transport, co-culture or multi-cell analysis, to name just a few areas [179][180][181]. Moreover, there are some first reports on employing SAWs for cell stimulation, in terms of increased wound healing by migration and proliferation, or drug uptake [182,184]. While for some studies there are clear indications that the accompanying electrical fields are important, others use coupling fluids to influence cells in more-or-less conventional well plates. Here, the effect might most likely be caused by enhanced effective diffusion constants. All fields are still of the highest interest, aiming on the one hand towards translation into diagnostics, pharmacy and tissue engineering, and on the other hand towards basic research on stem cell differentiation.

Current and future challenges
The current and future challenges are of a technological and strategical/translational/transfer-to-market nature.
On the one hand, this technology combines RF technology with microfluidics, cell biology and solid-state physics and is therefore highly interdisciplinary. Here, addressing all the requirements to ensure a controlled environment remains challenging. On the other hand, as with other lab-on-a-chip techniques, the transfer from a chip in a lab to real lab-on-a-chip applications is still one of the main challenges; that is, the integration of the peripheral instrumentation, such as the pumps, valves and gas mixers needed to ensure cell-culture conditions, from external, high-end, high-functionality instrumentation onto a chip, or at least into compact benchtop devices. Such solutions could bring real advantages from physics/engineering labs to life-science laboratories or pharmaceutical and clinical applications. As discussed earlier, for such a commercialization and wide usage of the developed SAW-based technologies, more than incremental improvements are necessary [183]. To achieve this, high-importance clinical/biological tasks that lack simple, easy-to-handle and affordable solutions have to be identified, developed and engineered from the clinical side, employing the broad base of possible applications developed so far, instead of arbitrarily multiplexing developed tasks from the engineering side. Regarding the more technological challenges, longevity and reproducibility of the setups are issues. Here, the combination of reusable setup parts and disposables might help to overcome problems like contamination by debris from cells and tissue. Another challenge is the integration of 3D and mechanically tunable visco-elastic environments going beyond simple coatings [184]. Regarding the relatively new and small field of cell stimulation, the biological response (proliferation, cell stress, calcium release, membrane permeability, organization of the cytoskeleton, ...) and the interaction mechanisms with the electrical and mechanical fields of the SAW have to be studied and understood. A highly interesting field here is elucidating the possible impact of SAWs on stem cell differentiation, in analogy to the mechanically guided differentiation of these cells by the Young's modulus E of the substrate [185].

Advances in science and technology to meet challenges
The strategical/translational challenges, especially the inversion of the approach to start from the biological task, require even more interdisciplinary communication, to bring together the communities who identify the actual need for solutions and those who have the expertise in the acoustic manipulation of micron-sized soft matter. On the technological side, however, to increase the degree of precision and deliberate control of cells, hybrid setups may become more important. The use of waveguides, resonators and phononic crystals can help to gain precision, reduce losses and increase the effective amplitudes for even lower input signals. Moreover, towards better 3D control, combinations of BAW and SAW could be advantageous and simpler to implement than devices limited to the use of SAW alone. Towards more in vivo-like environments, hybrids of SAW chips for cell trapping and stimulation with covalently bound, thin (sub-wavelength) elastic polyacrylamide (PA) gels are promising candidates. By fabricating very soft gels (e.g. bulk Young's modulus E = 1 kPa) of different thicknesses, the effective Young's modulus E_eff that the cell experiences can be adjusted, as shown in figure 28.
Another significant increase of functionality, to a level not reached by other cell-manipulation techniques, is combining the enormous sensing potential of SAW sensors (see section 12) with that of SAW actuators. New promising perspectives in the exploration of neuronal networks have been shown by culturing primary neurons on SAW chips and manipulating the outgrowth of their neurites [186]. Here, the possibilities of using static approaches, e.g. an appropriate patterning of the chip surface, to produce or manipulate neuronal networks have proven to be limited. Dynamic approaches like SAW fields tunable in space and time bear the potential to overcome those limits. However, significant improvements are still necessary, especially to control neuronal networks at will and so allow basic biophysics research, such as on the correlation between the structure, supra-cellular signal propagation and function of neuronal networks. In the long term, these new and far-reaching perspectives need the combination of such novel tools with established ones like multi-transistor arrays or other electrophysiological methods.

Concluding remarks
Of the three categories of application for SAW-based cell manipulation, sophisticated acoustophoresis-based applications in particular, as well as cell stimulation, e.g. for drug uptake, are promising, highly interesting fields of research. The challenges include the need for multiplexing and an inversion of the approach to start from the biological task. Especially hybrid approaches, bundling the expertise of the fields of semiconductor devices, materials science and cell biology, bear the potential to make significant steps and enable new functionalities. In particular, the combination of SAW with visco-elastic environments allows for the creation of biomimetic, in vivo-like environments where active and passive mechanics can be well controlled to dissect their influence on cellular behaviour (e.g. cardiomyocytes in a beating heart modelled on a chip). Time will tell how fast the progress in this highly exciting research field develops!
26,279.4
2019-07-03T00:00:00.000
[ "Physics" ]
The idea of the circular motion of time in the thought of the Greeks of the 8th century B.C. The aim of this paper is to reveal the specifics of a perception of calendar time among the Greeks of the 8th century B.C. Through an analysis of iconographic sources and original texts, we have made an attempt to determine the peculiarities in their perception of the flow of time, changing of the seasons, and annual circulation of time. By studying the imagery as it is shown in the pottery decoration of the Protogeometric and Geometric periods, we have come to the conclusion that the symbols depicted in it were reflected in the representations about calendar time as connected to the natural environment and alteration in the surrounding space in accordance with the annual changes in nature. Introduction Observation of the surrounding space by the people of archaic cultures let them eventually realise that all the phenomena in the world were dependent on definite rhythms. Scholars point out that the ideas of space and time based on natural rhythms existed interchangeably in the minds of people with mythological thinking. 1 The consequence of the involvement of human life into the natural processes and prevailing of sensual perceptions was the comprehension of the passage of time in accordance with changes in the surrounding space. 2 Primarily, the initial orientation in time, as well as the initial orientation in space, was based on the solar cycle and an alternation of seasonal states in nature. 3 In this paper we will argue that the annual renewal of nature, the repetition of natural processes, and a return to the initial state, together with sensual perception, resulted in the emergence of the idea of the circular motion of time 4 that, in its turn, 1 Bouzek (2018: p. 74); Calame (2009: p. 56); Giannakis (2019: pp. 238-239); Whitrow (1961: pp. 73-78). E. Cassirer (2010: p. 100) thus noted that the reflection of space and time in mythological thinking was implemented by sensual perception. He also argued that the basis for the emergence of a sense of time was laid by the same natural conditions that had influenced the formation of ideas about space. A regularity of a change of various states of nature thereby facilitated the development of the ability to comprehend time intervals: Cassirer (2010: pp. 120-123). 2 The perception of time in early Greek poetry was associated with the perception of space that revealed itself through the description of movement. A. T. Zanker (2019: pp. 61-64) notes that the concept of time in ancient Greek thought expressed itself through motion and shifting in space. Various objects in space played the role of metaphors and thus emphasized the specifics of the passage of time in a definite episode or in common sense: Zanker (2019: pp. 76-78). A. C. Purves points to the interconnection of space and time in Homer by referring to the description of the blowing of the wind in his similes: Purves (2010: pp. 334-335, 337-338, 341). On the similar interconnection of space, time, and movement see also Purves (2006: pp. 181, 187-188, 197). J. F. De Jong notes that the description of the passage of time as an action in Homer is revealed through the spatial images which create definite spatial and temporal relations: De Jong (2007: pp. 21-22, 24-25, 33-34). E. Husserl (1973: pp. 298-300) argues that the perception of space is based on the kinaesthetic orientation which forms the system of initial spatial coordinates. 
The main role in this orientation belongs to the human body, in accordance with the position and movement of which the initial representations about space are formed. Taking into account Husserl's concept, we are inclined to suppose that, in addition to the motion of the human body, the motion of the objects filling the surrounding space, as well as observations of the changes of space in the immediate vicinity to the observer, produced the initial representations about time. 3 Cassirer (2010: pp. 120-123); Giannakis (2019: pp. 238-239); Heidegger (1967: p. 96). 4 There are two models of time, linear and cyclic time, which are the result of a perception of changes in the surrounding space and human life. Speculating on the character of linear time, A. Bartolotta considers this concept of time from the two positions. The author argues that, on one hand, time was perceived in its duration as the sequence of events. On the other hand, it has a subjective nature dependent on the individual characteristics of the percipient. In both cases, time is regarded as a linear motion of events when two spatial positions (in front of and behind) are the initial points of reference for an orientation both in space and time: Bartolotta (2018: pp. 2-4, 8 Brown (1998); Currie (2012); Falkner (1989); Most (1997); Querbach (1985); Smith (1980);Zanker (2013). Iuliana Lebedeva The idea of the circular motion of time in the thought of the Greeks of the 8th century B.C. ČLÁNKY / ARTICLES produced in their consciousness some images that were connected with this sense of time. When referring to the interpretation of Greek iconography of the 8 th century B.C., scholars foremostly pay attention to its social context or its association with religion and eschatology. Considering the crucial meaning of the research done in studies of early Greek iconography, we would like to emphasize another significant aspect, namely that it could have also depicted representations of time. In our research, we will consider the images associated with the solar cycle and the life of nature as they are connected with the ideas of the Greeks of the 8 th century B.C. about the calendar time. Another aspect that we regard as important is that the human life and cosmic processes were involved into a specific framework which existed subconsciously in people's thought, despite the culture or epoch, and this is the sense of the circle. 5 The idea of the circle as a basic sense of the world arrangement, inherent subconsciously in people's minds and reflected in their culture, is seen e.g. in the human tendency to bring circularity into their organisation. 6 We are inclined to suppose that this circular essence of the cosmic being was based on the solar cycle as the initial point for an orientation both in space and time that was reflected in iconography. The idea of the circular motion of time in poetry The idea of the circulation of time was inherent in the thought of the Greeks of the 8 th century B.C., as follows from the literature of this period. When denoting the passage of a certain period of time, both Homer and Hesiod use verbs which could refer to the Though regarding this concept of time as crucial, in this paper we refer to another model of time which is more essential for us when considering the perception of calendar time, namely its cyclical concept. Pointing to the simultaneous existence of representations about cyclical and linear time in the ancient Greek thought, A. T. Zanker (2019: p. 
81) notes that "when it comes to cosmic time, things are often represented in cyclical terms". 5 The idea of the "roundness of being" is thus argued by G. Bachelard (1961: pp. 192-193, 211-213). Speculating on the nature of human being, G. Bachelard comes to the conclusion that "tout est circuit, tout est detour, retour" in its essence. Being, by its circular nature, is itself a spiral, and the round form is the archetypal core on which the human perception of the world is based: Bachelard (1961: pp. 193, 213). R. Caillois (2015: p. 79) connects the sense of the circle with the aspiration of people with mythological thinking to demarcate the sacral space and to divide it from the profane one. The sense of the circle in their world perception thus had an utterly positive meaning as a life-giving and protecting source. For a similar opinion see Hertz (2004: p. 102). M. Roblee (2018: p. 133) emphasizes the association of the circle with the sense of movement, the solar cycle, and natural rhythms. On this aspect see also Kaul (2005: pp. 137, 145-146). E. Husserl (1973: pp. 309-310) argues that the initial space perception, which is based on the specifics of the human body structure, forms a so-called "okulomotorische Raum": "Ein geschlossener, zweidimensionaler Raum konstituiert sich durch Drehung des Kopfes um seine Grundachse" (a closed, two-dimensional space is constituted by rotating the head about its basic axis), by means of which "ein kugelartig geschlossener Raum" (a spherically closed space) is formed. From Husserl's analysis of his kinaesthetic system, it follows that the initial space, defined by the specificity of the human body structure, has a round form. sense of a circular motion of time. These verbs are περιτρέπω 7 meaning to rotate, to turn around, περιτέλλομαι 8 and περιπέλομαι 9 meaning to go round, and περιτελέω 10 which implies to complete and then recur. 11 When mentioning the expiration of an annual cycle, both authors use, in variations, the following word formula: ἀλλ᾽ ὅτε δή ῥ᾽ ἐνιαυτὸς ἔην, περὶ δ᾽ ἔτραπον ὧραι 12 μηνῶν φθινόντων, περὶ δ᾽ ἤματα πόλλ᾽ ἐτελέσθη, 13 "when a year had passed, and the seasons had revolved as the months waned, and many days had been completed." 14 In this variation of the formula the flow of time is represented as an expiration of an annual cycle, 15 a rotation of the seasons, 16 and a revolving of days. 17 We can see, then, that time was perceived as a circulation of days, months, and seasons, which had as its sequence the completion of the annual time cycle and the coming of a new year. The idea of the circle in its cosmic significance in Greek archaic cosmology, as follows from literary sources of the 8th century B.C., is seen clearly in the image of the river Oceanus, encircling the earth. 18 As a spatial image, Oceanus was also connected to the solar cycle, being the place where the sun rises from and sets into, and thus it could have been connected with the idea of the circular motion of time based on it. 19 Another significant circular image in the thought of the Greeks of the 8th century B.C. is the shield of Achilles. 20 In its compositional unity, it functions as the model of the universe and represents events occurring in space and time as well as depicting the astral bodies and the natural rhythms, calendar, and cosmic time in accordance with which the universe exists. 21

[Footnote citations, residue of notes 7-19, partially garbled in extraction: 7 Il., 2. 295; Od., ...; Theog., 58-59; Il., 2. 551; 8. 404; Od., 11. 295-296; 14. 293-294; Theog., 58-59; in the Homeric hymn to Apollo, the verb also appears in lines 349-350. 18 Il., 18. 399, 606-607; Od., 20. 65. 19 Il., ...; Od., 23. 244; Herm., 68, 184-185.]

The idea of the circular motion of time connected with the idea of an annual renewal of nature and its rebirth in a new state in accordance with the solar cycle could be present in the iconography of the Proto-geometric and Geometric styles. We can trace the existence of this idea in such symbols, inherent in the iconography of the Dark Ages and the early Archaic age, as concentric circles and semicircles, as well as the spirals of earlier Minoan and Mycenaean art, plentifully present in pottery decorations. In this research we will argue that the circular symbols on ceramic vessels could have been connected with this idea. Despite the main field of our research being the 8th century B.C., the pottery of the Proto-geometric period will be considered as well in order to follow the continuity of images and the ideas connected to them. The iconography of earlier periods (Cycladic, Minoan, Mycenaean) is also the subject of our research with the same purpose.

Symbols associated with the idea of the circular motion of time
The most ancient motif associated with the idea of circular motion is the spiral. We would like to emphasize the antiquity of this symbol, present on objects of art since Neolithic times, as, for example, in the case of the spherical vase from the Late Neolithic I with the spiral in the centre (Fig. 1). 22 As an image which, in accordance with its circular nature, could have visualized the idea of movement in circles, the spiral might have been connected to the sea. The derivation of the spiral from sea waves is clearly seen e.g. in the decoration of a Mycenaean kylix of the 14th century B.C. depicting the sea waves enclosing the spirals (Fig. 2). Another example of the correlation of the spiral and the sea could be the depiction of octopuses, one of the most popular marine motifs, with their tentacles curving into spirals (Fig. 3). One of the aspects of marine semantics in archaic thought is its apparent connection with the ideas of death, afterlife, and regeneration at the same time. 23 Another aspect of the sea is its connection to the sun, namely its setting and rising at definite spatial points, which, in their turn, influenced the emergence of the double semantics of the sea in its connection both to life and death. Spirals in the decoration of Cycladic "frying pans" appear in their evident association with the sea and sun. On the frying pan from Naxos there is a composition of four spirals connected together and symbolizing sea waves, encircling the sun image in the very centre. Out of the spiral band there emerge four fish images (Fig. 4). From this example it is clear that the spirals symbolize the sea, and the sea here is connected to the solar cycle. Being associated both with the solar cycle and with the sea as its spatial background, the spiral, as defining both the motion of the sun and the motion of the sea waves, could have been connected to the natural rhythms depending on the solar cycle. Another symbol in Greek iconography connected with the idea of circular motion in space and time is the image of concentric circles and semicircles.
They could have been associated with the sun, representing the idea of its circular movement from east to west, as well as the sense of the circulation of time within the year expressed in the changes of nature dependent on the solar cycle. Concentric semicircles, the most popular motif on the ceramics of the Dark Ages (Fig. 5), are often depicted in the upper part of the vessels, while the vessels are divided by horizontal lines into three parts (Fig. 6). We are inclined to assume that in this case it might be a depiction of the division of the cosmos into three spheres, while the concentric semicircles in the upper part of the vessel could have symbolized the sun rising from or setting into the Oceanus. If so, these vessels may have represented the cosmic model inherent in Greek thought in this period. These semicircles sometimes contain an image of a double axe inside (Fig. 7), which, in its turn, in Minoan and Mycenaean iconography could have been associated with the Great Goddess as a deity connected to fertility and regeneration, so this symbol could have been based on the idea of natural rhythms dependent on the solar cycle as well. In the case of the circles, the concentric image is usually depicted in the centre of the vessels. The positioning of these circles in the central part, in accordance with the very significant semantics of the centre in the archaic mind, could thereby emphasize their crucial meaning. This motif is characteristic of the iconography of the 10th-8th centuries B.C. A wheel, as the symbol of spinning and moving in circles, is sometimes enclosed in the centre of a concentric circle (Fig. 8). Another element related to concentric circles is the cross (Fig. 9), which in ancient thought could have existed as a solar symbol, or it could have been associated with a potter's wheel. Concentric circles, like semicircles, as a visualization of such an abstract concept as time in its annual circulation, obtained a spatial character as well by expressing this idea in its visual form. The concentricity of these images thus emphasized the idea of the alternation of natural processes in their relation to the solar cycle. The concentric nature of both circles and semicircles could have been connected with the sense of the regular circular movement of the sun and the annual circulation of time that was perceived through changes in the surrounding space.
The idea of circular motion of time and the spring season
The idea of the circular motion of time, as pointed out above, was also associated with the idea of a point of time with which the beginning of a new year was associated, while this initial point, on the border of the alternation of years, when the process of annual renewal of nature began, was the spring. Hesiod in Erga., 561-562, with the phrase εἰς ἐνιαυτὸν, could imply a renewal of nature correlated with the coming of the spring. As follows from the Works and Days, markers such as the appearance of Arcturus in the night sky after sixty days counted from the day of the winter solstice and the return of swallows heralded the coming of this season, as can be seen from the lines 564-569. Homer in Od., 10. 469-470 apparently correlates the coming of a new year with the day of the vernal equinox, implying by "the long days" the point of time when the length of the day starts to prevail over the night.
A direct association of the long days with the springtime is found in Od., 18. 367. Considering the significance of this season for the Greeks of the 8th century B.C. as the beginning of the new year, we are inclined to assume that it could have had some symbolic markers which could have been reflected in the iconography of this period. Homer, like Hesiod, associates spring with the return of certain birds, as can be traced in the following fragment of the Iliad: τῶν δ᾽ ὥς τ᾽ ὀρνίθων πετεηνῶν ἔθνεα πολλὰ χηνῶν ἢ γεράνων ἢ κύκνων δουλιχοδείρων Ἀσίω ἐν λειμῶνι Καϋστρίου ἀμφὶ ῥέεθρα ἔνθα καὶ ἔνθα ποτῶνται ἀγαλλόμενα πτερύγεσσι, "And as the many tribes of winged fowl, wild geese or cranes or long-necked swans on the Asian mead by the stream of Caystrius, fly this way and that, glorying in their strength of wing, and with loud cries settle ever onwards, and the mead resoundeth." As follows from Hesiod's Works and Days, the cranes' migration to the south foreboded the coming of the autumn (Erga., 448-451). By analogy, their return to Greece, like that of other species of migratory birds, could signify the coming of spring. Besides the mention of migratory birds, this fragment holds another important marker associated with the semantics of springtime, namely the notion that this scene takes place in the spring meadow. Abundant vegetation is an attribute of spring as well. In his similes, Homer compares the warriors on the battlefield with flowers and leaves, mentioning that: ἦλθον ἔπειθ᾽ ὅσα φύλλα καὶ ἄνθεα γίγνεται ὥρῃ (Od., 9. 51), "Out of the morning mist they came against us, packed as the leaves and spears that flower forth in spring" (transl. by R. Fagles 1996), and ἔσταν δ᾽ ἐν λειμῶνι Σκαμανδρίῳ ἀνθεμόεντι μυρίοι, ὅσσά τε φύλλα καὶ ἄνθεα γίγνεται ὥρῃ (Il., 2. 467-468), "So they took their stand in the flowery mead of Scamander, numberless, as are the leaves and the flowers in their season." We also find mention of the spring flowers in another scene describing the battle (ἐπ᾽ ἄνθεσιν εἰαρινοῖσιν, Il., 2. 89). From these instances of poetry we may suppose that the use of floral images in scenes with an absolutely different thematic context might have been made by the author subconsciously, if these images had extraordinary significance and existed in his mind as archetypes. We can suppose that flowers, as the symbol of spring, which, in turn, was a period of exceptional cosmological importance as a time of regeneration and rebirth, acted as the visualization of this time. The idea of the springtime as the beginning of a new life can be found in Il., 6. 146-148, where Homer compares human life to the vegetative cycle, using the image of the leaves for this purpose: οἵη περ φύλλων γενεὴ τοίη δὲ καὶ ἀνδρῶν.
φύλλα τὰ μέν τ᾽ ἄνεμος χαμάδις χέει, ἄλλα δέ θ᾽ ὕλη τηλεθόωσα φύει, ἔαρος δ᾽ ἐπιγίγνεται ὥρη. "Even as are the generations of leaves, such are those also of men. As for the leaves, the wind scattereth some upon the earth, but the forest, as it burgeons, putteth forth others when the season of spring is come." In another fragment, we again find this association of people's life with the leaves: εἰ δὴ σοί γε βροτῶν ἕνεκα πτολεμίξω δειλῶν, οἳ φύλλοισιν ἐοικότες ἄλλοτε μέν τε ζαφλεγέες τελέθουσιν ἀρούρης καρπὸν ἔδοντες, ἄλλοτε δὲ φθινύθουσιν ἀκήριοι (Il., 21. 463-466), "if I should battle you because of mortals. Mortals resemble a tree's leaves. Sometimes they absorb the earth's bounty, bursting with life. Other times they weaken and die" (transl. by H. Jordan 2008). From the fragments above we can thus conclude that two kinds of images in Greek literature and the Greek mind were correlated with the springtime as the beginning of a new year and new life, and that these symbols were birds and vegetative images, namely flowers and leaves. In the following part of our paper, we will observe these images in detail in their interconnection and their correlation with the symbols of the circular motion of time discussed above.
Spring images: vegetative symbolism
Floral imagery is abundant in Minoan art, where the flowers seem to be connected to the cult, probably the cult of the Great Goddess, which was associated with fertility and regeneration. The flowers which might have been the attributes of the cult in honour of this deity include crocuses (Day 2011: pp. 369-370; Rehak 2004: pp. 86-96), lilies (Lawler 1944: pp. 76-78; Marinatos 1993: p. 95; Watrous 1991: p. 295), and roses (Chapin 1997: p. 20). The floral motif becomes popular again, after a long period of oblivion in the Dark Ages, as an element of pottery decoration in the Late Geometric period. It is difficult to say what kinds of flowers are depicted there because of the schematic and stylized character of these images. We are inclined to suppose that the first spring flowers growing in Greece, such as narcissi and poppies, could have been perceived as the symbols of the spring. Poppies as spring flowers are mentioned in Il., 8. 36-37. Narcissi are mentioned in the Hymn to Demeter, which acts as an etiological myth explaining the processes of the annual renewal of nature (on the cosmological significance of this flower see Brockliss 2019). The correlation of the spring flowers with Persephone's return is seen in the lines 401-402 of the hymn: ὁππότε δ᾽ ἄνθεσι γαῖ᾽ εὐώδεσιν εἰαρινοῖσι παντοδαποῖς θάλλῃ, τόθ᾽ ὑπὸ ζόφου ἠερόεντος αὖτις ἄνει, "When the earth blooms in spring with all kinds of sweet flowers, then from the misty dark you will rise again" (transl. by H. P. Foley 1993). Taking into account the importance of the spring flowers as the symbol of regeneration in poetry, we are inclined to assume that the floral images in iconography might have been associated with the spring season. In the pottery of the Late Geometric style, rosettes are frequently combined with an image such as the swastika, which in this period was the main symbol connected to the solar cycle and could thus have been associated with the idea of the circular motion of time as well (Fig.
10, 11) or with concentric circles (Fig. 12-14); on the connection of the swastika with other symbols associated with the solar cycle, fertility, and regeneration see Baldwin (1915: pp. 118, 128, 131). The analogical interconnection of vegetation and the solar cycle in the earlier period, represented in the art of the Minoan and Mycenaean periods, can be traced in the combination of vegetative elements with the spiral. One of the variations of this motif is represented by petals or leaves curving into spirals, e.g. as in the case of the lilies in Fig. 15. On the jug decorated with ivy leaves curving into spirals, in the middle of the composition there is a round image with spiral curves on its ends, visually similar to the swastika (Fig. 16). Above it is a small schematic image of two wavy lines, which could depict the sea waves, and a circle which, in turn, could be an image of the rising or setting sun. Another Mycenaean jug (Fig. 17) has three decorative bands: concentric semicircles in its upper part, sea-wave-shaped spirals in the middle, and a floral motif of ivy leaves in the lower part. The combination of three motifs is thus present on this vessel: the solar, in the case of the concentric semicircles; the marine, represented by the spirals; and the vegetative, in the case of the ivy leaves. All these motifs are connected with the sun and the natural cycles based on it. Along with the depiction of flowers, another vegetative motif is popular on the pottery of the Late Geometric style. This motif is represented by the depiction of petals or leaves. While the petals may possibly be correlated with the floral images depicted on ceramics, the leaves might have been connected with the idea of the Tree of Life. This motif is widely represented in the iconography of Late Geometric pottery. An example illustrating this idea is the pyxis decorated with the petal motif on its lid (Fig. 18). In the centre of the vessel there is a rosette with a wheel in the middle. The petals may be both parts of this rosette and its detached elements, in parallel with the idea of life and death illustrated in the fragments of Il., 6. 146-148 and Il., 21. 463-466 cited above. The wheel in the centre of the rosette could depict the idea of circular motion, the change of seasons, and rebirth.
Spring images: birds
Along with the vegetative symbolism, another significant image abundantly present on Late Geometric ceramics and connected with the ideas of time, especially with the semantics of spring, as we have seen above from the literature, is the bird. The birds depicted on the vases could have had multiple symbolic meanings. The first symbolic meaning of the birds in iconography is that they could have acted as the symbol of a deity. Another use of the bird images in iconography is their association with death. In our discussion of the symbolism of time and its visual representation in the iconography, one more semantic connotation of the bird image is important for us, namely its connection to spring and regeneration. The iconography of the birds depicted on the ceramic vessels, judging by the images which can be associated with some species of migratory birds, despite their schematic depiction, may be correlated with spring, as we have seen from Homer's similes above. There are plenty of images of aquatic birds, which were popular both in the iconography of the Minoan and Mycenaean periods and in the art of the Archaic age.
Our assumption about the image of the bird as a symbol of rebirth may be supported by the fact that this symbol in the iconography of the Late Geometric period was used in combination with the vegetative symbolism and with the symbols associated with the solar cycle. The interconnection of bird and vegetative images is evident in Minoan and Mycenaean art, and there they are connected with symbols associated with the idea of the circulation of time as well. An interesting example for us is the Mycenaean krater with its depiction of two aquatic birds. Their bodies are decorated with dotted circles, which could be solar symbols (Fig. 19). In the upper part of the vessel and in the middle, between the birds, there are solar symbols as well. Another example of the connection of birds with the solar cycle and the idea of the alternation of seasons is the amphora from Mycenae depicting an aquatic bird surrounded by spirals with rosettes inside (Fig. 20). One more motif in pottery decoration which may be associated with birds and their connection to the idea of the circulation of time is the combination of bird images with concentric circles, as depicted on the Middle Geometric oinochoe with two birds sitting on a concentric circle (Fig. 8). Another, later example, from the 7th century B.C., is a crater with the depiction of two aquatic birds surrounded by spirals (Fig. 21). In the very centre of the composition there is a fragmentary round image resembling a wheel in shape, with poppy buds on its ends. A similar combination of bird, vegetative, and solar images can be found on the pottery of the Late Geometric period, as shown in the following examples: combinations of birds and concentric circles (Fig. 22) and of birds and swastikas (Fig. 23); birds and rosettes, supplemented with the serpentine motif and vegetative symbols (Fig. 24); a combination of swastikas, rosettes, and birds with a bird figurine on top of the amphora (Fig. 25); and two aquatic birds surrounded by astral symbols (Fig. 26).
Conclusion
In conclusion, we would like to emphasize that the images represented in the iconography of the Proto-Geometric and Geometric periods, as well as in the earlier Cycladic, Minoan, and Mycenaean periods, were interconnected and expressed the ideas of time inherent in Greek thought and transferred from one epoch to another. The key idea of the perception of time was correlated with the sense of the circular motion of time, which apparently emerged under the influence of the alternation of the states of nature dependent on the solar cycle. This sense of circularity was formed in the human mind as a consequence of observing various objects in the surrounding space, such as the sun and its rising and setting at definite points in space, the stars and their shifting in the sky, the sea waves, vegetation and its seasonal changes, the migration of birds, etc. This principle is expressed in iconography in such images as the spiral, the concentric circle and semicircle, the wheel, and the swastika. The other principle which we were able to trace in our research is the association of certain images with a definite season of the year as the time of annual renewal, rebirth, and regeneration of nature, namely the springtime. The idea of the circulation of the seasons is present in the literature of the 8th century B.C., as we have pointed out above, and it is also represented in the iconography of this period by bird and vegetative symbolism combined with the symbols of circular motion.
This work can be used in accordance with the Creative Commons BY-SA 4.0 International license terms and conditions (https://creativecommons.org/licenses/by-sa/4.0/legalcode). This does not apply to works or elements (such as images or photographs) that are used in the work under a contractual license or an exception or limitation to the relevant rights.
7,600.6
2023-01-01T00:00:00.000
[ "Physics" ]
Design of Quad-Port MIMO/Diversity Antenna with Triple-Band Elimination Characteristics for Super-Wideband Applications A compact, low-profile, coplanar waveguide (CPW)-fed quad-port multiple-input–multiple-output (MIMO)/diversity antenna with triple band-notched (Wi-MAX, WLAN, and X-band) characteristics is proposed for super-wideband (SWB) applications. The proposed design contains four similar truncated–semi-elliptical–self-complementary (TSESC) radiating patches, which are excited through tapered CPW feed lines. A complementary slot matching the radiating patch is introduced in the ground plane of the truncated semi-elliptical antenna element to obtain SWB. The designed MIMO/diversity antenna displays a bandwidth ratio of 31:1 and an impedance bandwidth (|S11| ≤ −10 dB) of 1.3–40 GHz. In addition, a complementary split-ring resonator (CSRR) is implanted in the resonating patch to eliminate WLAN (5.5 GHz) and X-band (8.5 GHz) signals from the SWB. Further, an L-shaped slit is used to remove Wi-MAX (3.5 GHz) band interferences. The MIMO antenna prototype is fabricated, and a good agreement is achieved between the simulated and experimental outcomes.
Introduction
In contemporary wireless communication, the demand for super-wideband (SWB) and ultra-wideband (UWB) antennas is on the rise [1,2]. The UWB antenna possesses a bandwidth ratio of 3.4:1, and its bandwidth is defined from 3.1 to 10.6 GHz (by the Federal Communications Commission) [3], while the SWB antenna offers a bandwidth ratio of more than 10:1 [4,5]. Compared to UWB systems, the SWB antenna can be used for both short-range and long-range communication. The planar monopole antenna, owing to its small size, light weight, low cost, and ease of fabrication and integration, is a suitable candidate for obtaining UWB/SWB [6,7]. In the literature, several antennas with fractal geometry have been proposed for SWB applications. In [8], a coplanar waveguide (CPW)-fed hexagonal-shaped patch antenna modified by a Sierpinski square fractal form was designed for a bandwidth ratio of 11:1. In [9], an antenna comprising a modified star-triangular fractal (MSTF) geometry fed by a microstrip line and a semi-elliptical ground surface was reported. A CPW-fed octagonal-shaped radiating patch modified using four fractal iterations was proposed in [10]. In [11], an octagonal-shaped radiating patch antenna using the second iteration of the fractal shape was suggested. A monopole antenna comprising an egg-shaped radiating patch and a ground plane loaded with a complementary semi-elliptical fractal slot was developed in [12]. However, SWB antenna configurations using fractal shapes are difficult to manufacture, and practically only a few iterations are possible to design. Recently, the use of self-complementary antenna (SCA) structures has been in focus for SWB and UWB communication systems. In [13], the authors proposed a semi-circular shaped quasi-self-complementary (QSC) monopole antenna for UWB. In [14], a CPW-fed antenna composed of QSC geometry and a tapered radiating slot was designed for UWB. A microstrip line-fed monopole antenna comprising a quarter-circular disc and a ground plane embedded with a quarter-circular slot was suggested for UWB systems [15]. In [16], an antenna with two circular elements arranged in parallel was presented with multiple-input-multiple-output (MIMO) characteristics. In [17], a UWB MIMO antenna containing two QSC radiating patches located opposite to each other to realize high isolation was proposed.
In [18], a dual-port MIMO antenna with a castor leaf-shaped structure possessing WLAN and Wi-MAX band rejection characteristics was reported. In [19], the authors presented a UWB MIMO antenna with two QSC half-circular monopoles, where the notch band and isolation were obtained by introducing Lévy and Hilbert fractal-shaped parasitic strips, respectively. A dual-port SWB MIMO antenna composed of two circular patches, asymmetrical E-shaped stubs, and mushroom-shaped electromagnetic band-gap (EBG) structures was suggested in [20]. In [21], a four-port SWB MIMO antenna with QSC resonating elements exhibiting WLAN and Wi-MAX band elimination characteristics was proposed. However, the SWB antenna designs reported up until now primarily consist of antennas with one radiating element possessing one or two band rejection characteristics. SWB antennas with four radiating elements and triple or multiple band elimination characteristics have seldom been reported. In this article, a quad-port MIMO/diversity antenna consisting of four similar truncated–semi-elliptical–self-complementary (TSESC) radiating elements is presented. The resonating elements are excited through tapered CPW feed lines. A complementary slot matching the radiating patch is introduced in the ground plane of the truncated semi-elliptical antenna element to obtain SWB. The proposed resonating element displays a large bandwidth, which could be helpful for achieving a high data transmission rate, and the MIMO/diversity system offers better signal reception. The SWB antenna is designed to achieve triple elimination characteristics to avoid interference from Wi-MAX, WLAN, and X-band signals. The Wi-MAX band interferences are rejected by introducing an L-shaped slit in the resonating patch. Similarly, a complementary split-ring resonator (CSRR) is introduced in the radiating element of the antenna for eliminating WLAN and X-band signals. The adjacent resonating elements are arranged orthogonally to each other, and the diagonal elements are positioned in an anti-parallel manner to reduce coupling between the four radiators. The ground surfaces of the four monopole antenna unit cells are connected to ensure the same voltage in the proposed MIMO/diversity antenna.
Antenna Design
The input impedance of an SCA is constant, as shown by Mushiake's relationship [22]: Z_in = Z_0/2, where Z_0 is the value of the impedance measured in free space (≈ 376.7 Ω), so that the input impedance is frequency-independent at approximately 188.5 Ω. The equation shows that the antenna dimensions, bandwidth, or wavelength do not affect the input impedance of a well-matched SCA. This method is used to design antennas with large bandwidth requirements [23,24].
TSESC SWB Antenna
The schematic of the TSESC resonating element is illustrated in Figure 1. The design contains a truncated semi-elliptical monopole antenna excited by a tapered CPW feedline. A truncated semi-elliptical slot (corresponding to the radiating patch) is embedded in the ground plane of the antenna element to obtain SWB. The antenna is printed on an FR-4 dielectric substrate with a relative permittivity (ε_r) of 4.4, a loss tangent (tan δ) of 0.02, and a thickness of 1.6 mm. The dimension details of the TSESC resonating antenna element are presented in Table 1. The design and optimization of the TSESC antenna are carried out using the ANSYS HFSS® tool. The design stages of the resonating element are shown in Figure 2. Initially, a truncated semi-elliptical shaped monopole radiator with a modified ground surface (Antenna-A) is designed, as displayed in Figure 2a.
A radiating patch matching slot is etched from the ground surface of the antenna element to attain impedance matching over the SWB. The reflection coefficients of the geometrical design stages are presented in Figure 3. The designed antenna displays an impedance bandwidth (|S11| ≤ −10 dB) of 1.3–40 GHz. In Figure 2b, a split-ring resonator (SRR) is loaded on the resonating patch of the antenna element (Antenna-B) to eliminate the interfering WLAN band (5.5 GHz) from the SWB. Next, as illustrated in Figure 2c, another SRR (complementary to the SRR in stage ii) is loaded on the radiating patch element (Antenna-C) to notch the interfering X-band signals (8.5 GHz). Further, as demonstrated in Figure 2d, an L-shaped slit is introduced in the resonating patch (Antenna-D) to eliminate Wi-MAX band (3.5 GHz) interferences from the SWB. The geometric layout of the proposed TSESC resonating element with the L-shaped slit and CSRR is shown in Figure 4a. The etched CSRR (for eliminating WLAN and X-band signals) is composed of two concentric circular rings of different radii and the same width, as shown in Figure 4b. The effective lengths of the L-shaped slit (S_L) and the SRRs (S_Ri) are 0.29λ_g1 and 0.52λ_gi, respectively, which are calculated as [25]:
ε_r,eff = (ε_r + 1) / 2,  λ_gi = c / (f_ci √ε_r,eff),
where ε_r is the dielectric constant, ε_r,eff is the effective dielectric constant, c is the velocity of light in free space, f_ci is the centre frequency, and λ_gi is the guided wavelength of the notched band. Figure 5a–c shows the surface current distributions at the frequencies 3.5, 5.5, and 8.5 GHz, respectively. It is revealed in Figure 5a that the current is mainly concentrated along the L-shaped slit, which is accountable for the elimination of the Wi-MAX band. In Figure 5b, a stronger current is seen near the outer split-ring, which results in the rejection of the WLAN band. In the same way, the current is stronger close to the inner split-ring (illustrated in Figure 5c), which is accountable for the elimination of X-band signals. Therefore, by etching the L-shaped slit and CSRR from the TSESC antenna element, triple band-notched characteristics are obtained in the SWB.
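To make the notch-design relations above concrete, here is a minimal numerical sketch in Python (a back-of-the-envelope aid, not part of the authors' HFSS workflow; only the substrate permittivity ε_r = 4.4, the notch frequencies, and the fractional lengths 0.29λ_g and 0.52λ_g are taken from the text):

```python
import math

C0 = 299_792_458.0  # speed of light in free space (m/s)

def guided_wavelength(f_c_hz: float, eps_r: float = 4.4) -> float:
    """Guided wavelength lambda_g = c / (f_c * sqrt(eps_eff)),
    with eps_eff = (eps_r + 1) / 2 as in the paper's equations."""
    eps_eff = (eps_r + 1.0) / 2.0
    return C0 / (f_c_hz * math.sqrt(eps_eff))

# Notch centre frequencies and the fractional lengths quoted in the text:
# L-shaped slit ~0.29*lambda_g (Wi-MAX), SRRs ~0.52*lambda_g (WLAN, X-band).
for name, f_ghz, frac in [("Wi-MAX slit", 3.5, 0.29),
                          ("WLAN SRR", 5.5, 0.52),
                          ("X-band SRR", 8.5, 0.52)]:
    lam_g = guided_wavelength(f_ghz * 1e9)
    print(f"{name}: lambda_g = {lam_g * 1e3:.1f} mm, "
          f"effective length ~ {frac * lam_g * 1e3:.1f} mm")
```

Running this gives effective lengths on the order of 15 mm, 17 mm, and 11 mm for the three notches, i.e., slot features that fit comfortably within a compact patch, which is consistent with the design intent described above.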
TSESC SWB MIMO Antenna
The antenna dimensions must be as small as possible due to space constraints in communication devices. Designing the four-port diversity antenna is complex due to the mutual coupling of each radiating element to the other three similar resonating structures. The existence of multiple identical elements in the MIMO antenna leads to a manifold increase in the envelope correlation coefficient (ECC) and mutual interference among the different elements. A four-port TSESC MIMO antenna with a compact size is proposed for SWB applications. The geometric layout of the proposed antenna is presented in Figure 6, and the dimensions of the various design parameters are provided in Table 1.
Results
The reflection coefficients of the proposed SWB MIMO antenna are shown in Figure 7. The impedance bandwidth and bandwidth ratio of the SWB MIMO antenna are 1.3–40 GHz and 31:1, respectively. The rejection of the frequencies 3.5, 5.5, and 8.5 GHz is observed due to the introduction of the L-shaped slit and CSRR in the radiating element of the antenna. In the proposed SWB antenna, the rejection bands can be controlled by changing the sizes of the L-shaped slit and CSRR. The experimental results are shown only up to 18 GHz, owing to the availability of ordinary SMA connectors and limited resources. While measurements are performed at one port of the diversity antenna, the other ports are terminated using 50 Ω matched loads. Figure 8a,b demonstrates the mutual coupling among the different antenna elements of the proposed quad-port MIMO antenna. Isolation greater than 16 dB is obtained at lower frequencies, and it increases significantly on shifting to higher frequencies. Figure 9 demonstrates that a peak gain of 5.5 dBi is realized. The antenna gain shows a sharp dip at the triple-band rejection frequencies; otherwise, it exhibits satisfactory behavior at other frequencies. Figure 10a reveals that the current is mostly concentrated along the L-shaped slit, which is accountable for rejecting the Wi-MAX band. In the same way, the current is strong close to the outer split-ring (Figure 10b) and the inner split-ring (Figure 10c), which are responsible for the WLAN and X-band rejection behavior, respectively. The ECC between port-1 and port-2 of a four-port MIMO system can be computed using the S-parameter-based expression of [26]; in its widely used two-port form this reads ECC_12 = |S11* S12 + S21* S22|² / [(1 − |S11|² − |S21|²)(1 − |S22|² − |S12|²)], where * denotes the complex conjugate (a computational sketch follows at the end of this section). Similarly, the ECC between the other ports of the antenna can also be calculated. Figure 11 illustrates the ECC values between different antenna ports. It is noted that the ECC remains below 0.01 over the complete SWB region. Figure 12 shows the simulated and measured co-polar and cross-polar radiation patterns of the proposed antenna at the frequencies 2.5, 7.5, and 12 GHz. The difference between the levels of the co-polar and cross-polar radiation patterns is greater than 15 dB in both the E-plane and the H-plane, which signifies stability in the radiation performance of the antenna. It can also be noticed from the figure that the H-plane co-polar patterns show omnidirectional characteristics and the E-plane co-polar patterns show bi-directional characteristics. Table 2 gives a comparison of various parameters of the designed antenna and other similar antennas. The comparison shows that the proposed antenna configuration has several advantages over previously reported antennas [8–21] in terms of bandwidth ratio, compact size, number of radiating patches, and isolation among radiating elements. Further, the use of CPW feeding in the proposed antenna provides the advantage of easy integration into portable devices. In the proposed antenna, the signals at the rejection frequencies (3.5, 5.5, and 8.5 GHz) are eliminated using the CSRR and an L-shaped slit, without using any filtering circuitry or active devices. The use of filtering circuitry results in a bulky design and, in turn, creates problems during the integration stage due to the greater space requirement. Moreover, the radiating patches are arranged orthogonally and anti-parallel to provide polarization diversity and better isolation between the antenna ports. A common ground plane is used in the proposed antenna to provide stable operation of the quad-port SWB MIMO antenna.
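The following sketch evaluates the two-port S-parameter ECC formula quoted above (whether reference [26] uses exactly this form or its N-port generalization is an assumption on my part; the function and example values are illustrative):

```python
import numpy as np

def ecc_s_params(s11: complex, s12: complex, s21: complex, s22: complex) -> float:
    """Envelope correlation coefficient for a port pair from S-parameters
    (lossless-antenna approximation)."""
    num = abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = (1 - abs(s11) ** 2 - abs(s21) ** 2) * (1 - abs(s22) ** 2 - abs(s12) ** 2)
    return num / den

# Example with well-matched, well-isolated ports (|S11| = -15 dB, |S21| = -20 dB):
s11 = s22 = 10 ** (-15 / 20)
s12 = s21 = 10 ** (-20 / 20)
print(f"ECC ~ {ecc_s_params(s11, s12, s21, s22):.4f}")
```

With these representative reflection and coupling magnitudes the ECC evaluates to roughly 0.0014, i.e., well below the 0.01 level reported for the proposed antenna over the SWB region.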
Conclusions
In this paper, a compact, tapered CPW-fed quad-port MIMO antenna with triple band-notched features was designed and developed. Self-complementarity was used to achieve the SWB characteristics, and the notch bands were attained by loading an L-shaped slit and a CSRR in the resonating element of the antenna. The coplanar design of the radiators with connected ground planes offers a compact antenna structure that can be easily integrated into portable devices or monolithic microwave integrated circuits. The simulated and measured gain, isolation, S-parameters, and radiation patterns were investigated and verified. The performance of the proposed antenna over various communication bands such as L, S, C, X, Ku, K, and Ka proves that it could be a good choice for wireless access systems, cognitive radio, radio astronomy, wideband high-definition television, and other short-range and long-range wireless, satellite, and defense applications.
Funding: The APC was funded by the Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia, through the Fast-Track Research Funding Program.
3,047.8
2020-01-22T00:00:00.000
[ "Engineering" ]
CHI3L2 Is a Novel Prognostic Biomarker and Correlated With Immune Infiltrates in Gliomas CHI3L2 (Chitinase-3-Like Protein 2) is a member of the chitinase-like proteins (CLPs), which belong to the glycoside hydrolase 18 family. Its homologous gene, CHI3L1, has been extensively studied in various tumors and has been shown to be related to immune infiltration in breast cancer and glioblastoma. High CHI3L2 expression was reported to be associated with poor prognosis in breast cancer and renal cell carcinoma. However, the prognostic significance of CHI3L2 in glioma and its correlation with immune infiltration remain unclear. In this study, we examined 288 glioma samples by immunohistochemistry and found that CHI3L2 is expressed in tumor cells and macrophages in glioma tissues and highly expressed in glioblastoma and IDH wild-type gliomas. Relationships between CHI3L2 expression and clinical features (grade, age, Ki67 index, P53, PHH3 (mitotic figures), ATRX, TERTp, MGMTp, IDH, and 1p/19q codeletion status) were evaluated. Kaplan-Meier survival analysis showed that high CHI3L2 expression in tumor cells (TC) and macrophage cells (MC) indicated a poor prognosis in diffusely infiltrating glioma (DIG), lower-grade glioma (LGG), and IDH wild-type gliomas (IDH-wt). The overall survival time was longer in patients with dual-low CHI3L2 expression in TC and MC compared to patients with non-dual CHI3L2 expression or dual-high expression in DIG and IDH wild-type gliomas. By univariate and multivariate analysis, we found that high CHI3L2 expression in tumor cells was an independent unfavorable prognostic factor in glioma patients. Moreover, we used two datasets (TCGA and CGGA) to verify the results of our study and explored the potential functional role of CHI3L2 in gliomas by GO and KEGG analyses. TIMER platform analysis indicated CHI3L2 expression was closely related to diverse marker genes of tumor immune infiltrating cells, including monocytes, TAMs, M1 macrophages, M2 macrophages, TGF-β1+ Tregs, and T cell exhaustion markers in GBM and LGG. Western blotting validated that CHI3L2 is expressed in glioma cells and microglia cells. The results of flow cytometry showed that CHI3L2 induces the apoptosis of CD8+ T cells. In conclusion, these results demonstrate CHI3L2 is related to poor prognosis and immune infiltrates in gliomas, suggesting it may serve as a promising prognostic biomarker and represent a new target for glioma patients.
INTRODUCTION
Gliomas comprise the bulk of primary brain tumors in adults (1). Diffuse glioma is histopathologically classified into grades II–IV according to morphological criteria, including mitotic count, nuclear atypia, microvascular proliferation, and necrosis. Glioblastoma multiforme (GBM) is categorized as one of the most malignant subtypes (2,3). The 2016 World Health Organization (WHO) classification of adult diffuse glioma combines tumor histological morphology and molecular features, including the isocitrate dehydrogenase (IDH) mutation and the complete deletion of chromosomal arms 1p and 19q (1p/19q codeletion) (4). Even with maximal surgical resection combined with radiotherapy and adjuvant temozolomide, tumor recurrence is inevitable and the prognosis of gliomas remains very poor (5). Consequently, there is an urgent need to discover the potential molecular characteristics of gliomas and to look for more effective treatment strategies. CHI3L2 (Chitinase-3-Like Protein 2), also known as YKL39, is a secretory protein.
It is a member of the chitinase-like proteins (CLPs), which include CHI3L1, CHI3L2, SI-CLP, YM1, and YM2. CHI3L2 was originally isolated from the culture medium of primary human articular cartilage cells (6). It has two physiological activities: one is to induce an autoimmune response (7), and the other is to participate in tissue remodeling; both may lead to disease progression. Previous studies showed CHI3L2 mRNA is significantly up-regulated in osteoarthritis, Alzheimer's disease, multiple sclerosis, and amyotrophic lateral sclerosis patients (8)(9)(10)(11). CHI3L2 is secreted by microglia/astrocytes and could increase monocyte/macrophage infiltration, angiogenesis, and neuronal death in amyotrophic lateral sclerosis (11). It is not yet clear in which cell types CHI3L2 is expressed in gliomas. However, previous studies on CHI3L2 have shown that macrophages are a possible source of CHI3L2 in tumors (12)(13)(14)(15). CHI3L2 has a high degree of sequence identity with CHI3L1, but no cross-reactivity has been observed (16)(17)(18). There have been many studies on the correlations between CHI3L1 and the progression of a number of cancers (19)(20)(21)(22). In recent years, the relationship between the tumor immune microenvironment and immunotherapy has received more and more attention. It was also reported that CHI3L1 is related to immune infiltration in breast cancer and glioblastoma (23,24). However, the data about the role of CHI3L2 in cancers and its association with immune infiltrates are fragmentary. Previous studies reported that CHI3L2 is overexpressed in tumor-associated macrophages and related to poor outcomes in breast cancer and renal cell carcinoma (13)(14)(15). Studies have shown that CHI3L2 mRNA expression is increased in gliomas (18,25,26). However, the prognostic significance of CHI3L2 and its correlation with immune infiltrates in glioma remain unclear. To systematically explore CHI3L2 protein expression in diffusely infiltrating glioma, we first evaluated the CHI3L2 expression levels of 288 glioma tissues by immunohistochemistry (IHC) and analyzed the association between CHI3L2 levels and clinicopathological parameters. Moreover, we took advantage of the CHI3L2 transcriptional data of gliomas in The Cancer Genome Atlas (TCGA) and the Chinese Glioma Genome Atlas (CGGA) datasets to validate our findings. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analyses were used to explore the potential biological processes and pathways of CHI3L2 in glioma. The Tumor IMmune Estimation Resource (TIMER) platform was used to explore the correlations between CHI3L2 and diverse marker genes of tumor immune infiltrates. Finally, we further verified the results through Western blotting and flow cytometry. This is the first comprehensive study in gliomas to elaborate on the clinical significance of CHI3L2, its influence on prognosis, and its correlation with immune infiltrates.
Samples
We enrolled 288 glioma patients (WHO grades II–IV) operated on at the Sun Yat-sen University Cancer Center (Guangzhou, China) from January 2009 to January 2016. The median follow-up time was 54 months; the last follow-up was performed in June 2019. The detailed clinical data are listed in Table S1. There were 167 males and 121 females. The median age of all patients at initial diagnosis was 43 years (range 7–78 years). According to MRI imaging, 278 gliomas were located supratentorially and 10 gliomas were located infratentorially.
264 out of 288 patients received postoperative adjuvant treatment (radiotherapy or chemotherapy). The median overall survival time of all patients was 27 months (range 0–110 months). This cohort included 112 cases of astrocytoma, 45 cases of oligodendroglial glioma, and 131 cases of glioblastoma (WHO IV). All samples were ethically approved for use based on informed consent.
Cell Culture
The human glioma cell lines and a human microglia cell line were purchased from the American Type Culture Collection resource center. The human glioma cells were maintained in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% heat-inactivated fetal bovine serum (FBS), and the human microglia cells were cultured in Minimum Essential Medium (MEM) with 10% FBS at 37°C in a humidified incubator containing 5% CO2. Peripheral blood mononuclear cells (PBMCs) were isolated by Ficoll-Hypaque density gradient centrifugation (Solarbio, Beijing, China). CD8+ T cells were separated from PBMCs by positive selection with CD8 magnetic beads and cultured in RPMI-1640 medium supplemented with 10% human serum, 5% L-glutamine-penicillin-streptomycin solution (Sigma-Aldrich, USA), CD3/CD28 antibody (Biolegend, USA) (25 ul/ml), and IL-2 (100 IU/ml) in 24-well plates. After 24 hours of culture, the corresponding concentration of human CHI3L2 protein (Sino Biological, Beijing, China) was added to the T cells, which were cultured for a further 72 hours at 37°C in a humidified incubator containing 5% CO2.
Immunohistochemistry (IHC), Molecular Genetics and Assessment Standard
Immunohistochemistry was essentially performed as previously reported (27). The tissue specimens were incubated with a CHI3L2 rabbit polyclonal antibody (#22164, SAB, Maryland, USA). Immunohistochemical evaluation was independently conducted by two pathologists blinded to patient characteristics and outcome, and CHI3L2 expression by tumor cells and macrophage cells was scored separately. Discrepancies were resolved by consensus under a multi-viewing microscope. A semi-quantitative IHC scoring criterion was used to determine the CHI3L2 protein expression levels in tumor cells. The percentage of positive cells and the staining intensity were assessed to improve accuracy. The percent positivity of stained cells was scored from 0 to 4: 0, none; 1, 1%–25%; 2, 26%–50%; 3, 51%–75%; 4, 76%–100%. The intensity of staining was graded from 0 to 3 (0, no staining; 1, weak; 2, moderate; and 3, strong). We then obtained the final IHC score by multiplying the proportion score by the intensity score. We chose 4.5, determined by the Youden index, as the optimal cutoff point to separate low CHI3L2 expression (score of 0–4.5) from high CHI3L2 expression (score >4.5) in tumor cells. For the macrophage cells, we only counted the number of CHI3L2-positive staining macrophages, regardless of the staining intensity, and used 7.5, likewise determined by the Youden index, as the optimal cutoff point to differentiate low expression (number ≤7.5) from high expression (number >7.5) in macrophages. The other antibody markers, including PHH3, P53, Ki67, ATRX, CD163, CD4, CD8, and CD20, were also tested by immunohistochemistry. We detected MGMT promoter methylation status, TERT promoter mutations, and IDH mutation status by Sanger sequencing. 1p and 19q deletion status was detected using fluorescence in situ hybridization (FISH). The detailed protocol and assessment standard were described in a previous study (28).
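To make the semi-quantitative scoring scheme above concrete, here is a minimal illustrative sketch in Python (not the authors' actual pipeline; only the scoring bins and the Youden-index cutoffs of 4.5 and 7.5 are taken from the text):

```python
def proportion_score(pct_positive: float) -> int:
    """Map percent positive cells to the 0-4 proportion score used in the paper."""
    if pct_positive <= 0:
        return 0
    for score, upper in ((1, 25), (2, 50), (3, 75), (4, 100)):
        if pct_positive <= upper:
            return score
    raise ValueError("percentage must be <= 100")

def tumor_cell_expression(pct_positive: float, intensity: int) -> str:
    """Composite score = proportion score x intensity (0-3); cutoff 4.5."""
    score = proportion_score(pct_positive) * intensity
    return "high" if score > 4.5 else "low"

def macrophage_expression(n_positive_macrophages: float) -> str:
    """Macrophages are dichotomized by count alone; cutoff 7.5."""
    return "high" if n_positive_macrophages > 7.5 else "low"

print(tumor_cell_expression(60, 2))  # proportion score 3 x intensity 2 = 6 -> "high"
print(macrophage_expression(5))      # 5 <= 7.5 -> "low"
```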
Bioinformatic Analysis in Cancer Datasets
The CHI3L2 RNA-seq data were downloaded from http://www.cgga.org.cn/. In total, we analyzed 601 gliomas from the TCGA RNA-seq cohort and 608 gliomas from the CGGA RNA-seq cohort, ranging from WHO grade II to grade IV. To identify the CHI3L2-related genes, the limma package of R software was used to screen out the differentially expressed genes (DEGs). The top 26 hub genes of the overlapping DEGs were identified via the Molecular Complex Detection and cytoHubba plug-ins of Cytoscape. To explore the functions and pathways of the CHI3L2-related genes, we performed GO and KEGG analyses on the ClueGO and Metascape websites. The TIMER platform (https://cistrome.shinyapps.io/timer/) (29,30) was used to explore the association between CHI3L2 and marker sets of tumor immune infiltrates in GBM and LGG (31).
Western Blot
Total protein was extracted from seven human glioma cell lines and one human microglia cell line (HMC3). 30 ug of protein was loaded onto a 10% SDS-PAGE gel and electrophoretically transferred to PVDF membranes. After blocking, the membranes were incubated with a primary antibody against CHI3L2 (1:1000 dilution, rabbit polyclonal anti-CHI3L2, #22164, SAB, Maryland, USA). The membranes were then incubated with a horseradish peroxidase-linked anti-rabbit antibody (1:3000 dilution, Santa Cruz Biotechnology, Inc., Santa Cruz, Calif., USA). β-Actin served as a loading control.
Statistical Analysis
GraphPad Prism 8 and SPSS 22 software were used for the statistical analyses. The measurement data are represented as mean ± SD. The Chi-square test was conducted to explore the correlations between CHI3L2 levels and clinicopathological features. Kaplan-Meier analysis with the log-rank test was conducted for the overall survival of glioma patients. The Cox proportional hazards regression model was used for univariate and multivariate analyses to evaluate the independence of CHI3L2 in predicting prognosis. The association between CHI3L2 and marker genes of immune infiltrating cells was assessed by Spearman's correlation coefficients. P < 0.05 was regarded as statistically significant.
The Expression Levels of CHI3L2 in Glioma Samples and Its Correlation With Clinicopathological Parameters
We detected CHI3L2 protein expression levels in histological sections from patients with different glioma grades by immunohistochemistry (IHC). Among the 288 glioma specimens inspected, we found CHI3L2 was mainly stained in tumor cells, as well as in macrophage cells. CHI3L2 expression levels in tumor cells were upregulated with increasing WHO grade of gliomas (Figure 2A), but there was no significant difference in CHI3L2+ macrophage density between WHO II and WHO III gliomas (Figure 2D). The CHI3L2 expression levels of GBM were significantly increased compared with LGG (WHO II–III) (P<0.001) in both tumor cells (Figure 2B) and macrophages (Figure 2E). A significant increase of CHI3L2 expression levels was found in IDH wild-type gliomas compared with IDH-mutant gliomas (P<0.001) in both tumor cells (Figure 2C) and macrophages (Figure 2F). We further analyzed the CHI3L2 IHC score and the density of CHI3L2+ macrophages in diffusely infiltrating gliomas of the new molecular classification, including IDH-mutant gliomas without 1p/19q codeletion, IDH-mutant gliomas with 1p/19q codeletion, and IDH wild-type gliomas (Figures S1A, B). We found the expression of CHI3L2 is not related to 1p/19q codeletion status in IDH-mutant gliomas. Based on the expression levels of CHI3L2 in tumor cells and macrophage cells, we evaluated the association between CHI3L2 staining and clinicopathological factors, as listed in Table 1.
In tumor cells, we found significant correlations between CHI3L2 expression and WHO grade (P<0.001), age (P=0.001), Ki67 (P<0.001), P53 (P=0.034), PHH3 (mitotic figures) (P<0.001), ATRX protein expression (P=0.026), IDH (P<0.001), and 1p/19q codeletion (P=0.002). In macrophage cells, CHI3L2 is strongly correlated with WHO grade (P<0.001), gender (P=0.008), age (P=0.006), Ki67 (P<0.001), PHH3 (mitotic figures) (P<0.001), and IDH status (P<0.001). However, the CHI3L2 expression levels of glioma cells were not significantly related to gender, location, TERT promoter mutation status, or MGMT promoter methylation status. In macrophage cells, CHI3L2 expression had no correlation with location, P53, ATRX protein expression, MGMT promoter methylation status, TERT promoter mutation, or 1p/19q codeletion status.
Impact of CHI3L2 Expression on the Prognosis of Gliomas
To explore the prognostic significance of CHI3L2 in gliomas, we performed Kaplan-Meier analysis with the log-rank test. We found that high CHI3L2 expression levels in tumor cells and macrophages significantly predicted worse overall survival in diffusely infiltrating glioma (DIG) (Figures 3A, D) and lower-grade glioma (LGG) patients (Figures 3B, E). However, there was no statistically significant difference in GBM in our cohort (Figures 3C, F). When considering the CHI3L2 expression of tumor cells and macrophages together, we found the overall survival time was longer in patients with dual-low CHI3L2 expression in TC and MC than in patients with non-dual CHI3L2 expression or dual-high expression in DIG (Figure 3G), but this difference was not statistically significant in LGG and GBM (Figures 3H, I). Similarly, we also analyzed the effect of CHI3L2 on prognosis in the new molecular classification of glioma. We found CHI3L2 expression in tumor cells is closely related to the prognosis in all new molecular classifications of glioma, and high CHI3L2 expression in tumor cells, macrophages, and TC + MC predicted a poor outcome for IDH wild-type gliomas (Figure S2). Furthermore, we found that high CHI3L2 expression indicates a poor prognosis for glioma regardless of whether patients have MGMT promoter methylation or have received adjuvant therapy (Figure S3). Additionally, to evaluate the independent risk factors for the prognosis of glioma, we conducted univariate (Table 2) and multivariate (Table 3) analyses. In the univariate analysis, CHI3L2 expression in tumor cells, CHI3L2+ macrophage cell density, CHI3L2 expression in both tumor cells and macrophage cells (TC + MC), grade, age, location, adjuvant therapy, Ki67 index, PHH3 (mitotic figures), IDH, and 1p/19q codeletion status were shown to be prognostic variables for overall survival in glioma patients (Table 2). We then included the variables that were prognostic in the univariate analysis (P<0.05) in the multivariate analysis. We found that CHI3L2 expression in tumor cells, location, Ki67, IDH, and 1p/19q codeletion were independent prognostic factors in gliomas (Table 3).
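The survival workflow described above can be sketched as follows. This is an assumed re-implementation in Python with the lifelines library (the authors used GraphPad Prism and SPSS); the table is toy data with hypothetical column names:

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Toy per-patient table: months of follow-up, death event (1/0),
# dichotomized CHI3L2 status in tumor cells, and one covariate.
df = pd.DataFrame({
    "os_months":   [27, 54, 12, 80, 33, 9, 61, 24],
    "death":       [1, 0, 1, 1, 0, 1, 0, 1],
    "chi3l2_high": [1, 0, 1, 0, 1, 1, 0, 1],
    "age":         [43, 35, 60, 29, 55, 66, 31, 48],
})

# Kaplan-Meier curves and log-rank test by CHI3L2 status.
high, low = df[df.chi3l2_high == 1], df[df.chi3l2_high == 0]
km = KaplanMeierFitter()
km.fit(high.os_months, high.death, label="CHI3L2 high")
res = logrank_test(high.os_months, low.os_months,
                   event_observed_A=high.death, event_observed_B=low.death)
print(f"log-rank P = {res.p_value:.3f}")

# Multivariate Cox proportional hazards model (all non-duration columns as covariates).
cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death")
print(cph.summary[["exp(coef)", "p"]])  # hazard ratios and P values
```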
Validation of CHI3L2 mRNA Expression Levels and Prognostic Effect in TCGA and CGGA Datasets
To further verify the results of our study, we collected a total of 601 glioma samples from the TCGA dataset and 608 glioma samples from the CGGA dataset to analyze CHI3L2 mRNA expression. In the TCGA dataset, CHI3L2 mRNA levels were significantly increased in GBM (WHO IV) compared with WHO II, WHO III, and LGG patients (Figures 4A, B). CHI3L2 mRNA expression levels were significantly higher in IDH wild-type gliomas compared with IDH-mutant gliomas (Figure 4C). Similar results were also obtained in the CGGA dataset (Figures 4D–F). We further analyzed the CHI3L2 mRNA expression levels in the new molecular classification of diffusely infiltrating glioma, including IDH-mutant gliomas without 1p/19q codeletion, IDH-mutant gliomas with 1p/19q codeletion, and IDH wild-type gliomas, in the TCGA and CGGA databases (Figures S4A, B). We found that CHI3L2 mRNA expression levels in IDH wild-type gliomas are higher than in IDH-mutant gliomas, and that gliomas with IDH mutation and 1p/19q codeletion have higher CHI3L2 mRNA levels than gliomas with IDH mutation without 1p/19q codeletion. Moreover, we performed Kaplan-Meier analysis to confirm whether CHI3L2 mRNA levels could predict a poor prognosis of gliomas in the datasets. As shown in Figure 5, patients with high CHI3L2 mRNA levels had shorter survival times in all glioma subgroups, both in the TCGA (Figures 5A–C) and CGGA (Figures 5D–F) datasets. Similarly, we also verified the effect of CHI3L2 on prognosis in the new molecular classification of glioma in the databases (Figure S5). Except for gliomas with IDH mutation without 1p/19q codeletion in the TCGA dataset, high levels of CHI3L2 mRNA indicated a poor prognosis in every other subgroup, whether in the TCGA or the CGGA dataset.
Predicted Functions and Pathways of CHI3L2 in Gliomas
The GBM and LGG RNA-seq data were from the TCGA and CGGA datasets. The limma package in R was used to screen out the differentially expressed genes (DEGs) with the cut-off criteria of adjusted P < 0.05 and |log2FC| > 1. We identified 1356 overlapping DEGs which were aberrantly expressed in both the TCGA and CGGA datasets (Figure 6A). The top 26 hub genes were screened via the Molecular Complex Detection and cytoHubba plug-ins of Cytoscape (Figure 6B). GO analysis showed that the overlapping DEGs were involved in several biological processes, including angiogenesis and the immune and inflammatory responses (Figure 6C). The KEGG analysis showed enrichment in several classic signaling pathways, such as cell adhesion molecules (CAMs) and the PI3K-Akt signaling pathway (Figure 6D).
The Correlation Between CHI3L2 and Markers of Immune Infiltrates in Gliomas
Infiltrating immune cells are important components of the tumor microenvironment and are frequently associated with tumor behavior and patient outcomes. Since the GO analysis revealed that CHI3L2 was related to the immune response, we further explored the infiltration of immune cells in gliomas. To estimate the relevance of CHI3L2 to diverse immune cell markers, we used the TIMER platform to investigate the correlations between CHI3L2 levels and the markers of diverse immune cells, including monocytes, TAMs (tumor-associated macrophages), M1 and M2 macrophages, Tregs (regulatory T cells), exhausted T cells, CD8+ T cells, T cells (general), B cells, and neutrophils in GBM and LGG (Table 4). We found CHI3L2 was significantly associated with the marker sets of monocytes, TAMs, and M2 macrophages in GBM and LGG. In particular, we show the scatter plots of the association between CHI3L2 and the marker sets of monocytes, TAMs, the M1 phenotype, and the M2 phenotype in GBM and LGG (Figures 7A–H). We also found significant correlations between CHI3L2 and some markers of Tregs and T cell exhaustion, such as TGF-β1, CTLA4, TIM-3, and GZMB.
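As an illustration of the correlation analysis just described, the following minimal Python sketch computes Spearman correlations between CHI3L2 and a few of the marker genes named above; the expression vectors here are synthetic placeholders, not TCGA/CGGA data:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Placeholder log2 expression vectors across n tumors; in the real analysis these
# would come from the TCGA/CGGA expression matrices (CHI3L2 vs. each marker gene).
n = 200
chi3l2 = rng.normal(size=n)
markers = {
    "CD68":  chi3l2 * 0.6 + rng.normal(scale=0.8, size=n),  # TAM marker
    "CD163": chi3l2 * 0.5 + rng.normal(scale=0.9, size=n),  # M2 macrophage marker
    "CD8A":  rng.normal(size=n),                            # CD8+ T cell marker
}

for gene, expr in markers.items():
    rho, p = spearmanr(chi3l2, expr)
    print(f"CHI3L2 vs {gene}: Spearman rho = {rho:+.2f}, P = {p:.2e}")
```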
Since Tregs and T cell exhaustion play an important role in tumor immune escape, we believe CHI3L2 may also play an immunomodulatory role in gliomas. In addition, we used several commonly used clinical immune cell markers, including CD163, CD4, CD8, and CD20, to perform immunohistochemical tests on glioma samples, and found that CHI3L2+ macrophages have a certain correlation with CD163+ M2 macrophages (r=0.547, p<0.001), CD4+ T cells (r=0.330, p<0.001), CD8+ T cells (r=0.389, p<0.001), and CD20+ B cells (r=0.237, p<0.001) in gliomas (Figure S6).
The Expression of CHI3L2 in Glioma Cell Lines and Its Effect on CD8+ T Cells
The expression of CHI3L2 in glioblastoma cells (U251, U87, T98G, DBTRG, A172, LN229) and microglia cells (HMC3) was verified by Western blot (Figure 8A). Figure 8A shows that CHI3L2 is expressed in the glioblastoma cell lines and in a microglia cell line: it is strongly expressed in the glioblastoma cell lines U87, U251, LN229, and A172 and the microglia cell line HMC3, while its expression in the glioblastoma cell lines T98G and DBTRG is weak. Figure 8B shows the results of the flow cytometry analysis. The left image is a representative sorting that lists the percentage of cells in each quadrant: bottom left, live cells; top left, mechanically damaged cells; bottom right, early apoptosis; top right, late apoptosis. The cellular apoptotic rate was the sum of the early and late apoptotic rates. The proportion of apoptotic cells in the control group was 28.1%, the proportion of apoptotic cells in the 0.5 ug/ml CHI3L2 group was 33.6%, and the proportion of apoptotic cells was higher still in the 2.5 ug/ml CHI3L2 group.
DISCUSSION
At present, the outcome of most gliomas is very poor, even with the use of comprehensive treatment strategies. It has been widely reported that the therapeutic resistance of glioma is closely related to its unique metabolic mechanisms and the surrounding complex immunosuppressive microenvironment (32)(33)(34). Therefore, reliable prognostic biomarkers and personalized treatment strategies for this disease are urgently needed. In the present study, CHI3L2 has been identified as a novel prognostic biomarker associated with tumor immune infiltration markers in gliomas, indicating that CHI3L2 may serve as a target for glioma treatment in the future. CHI3L2, as a member of the glycoside hydrolase 18 family, can act as a cytokine and growth factor but lacks chitinase activity (17). It was found to be produced by tumor-associated macrophages in breast cancer (13,14) and renal cell carcinoma (15), and has been reported to activate the MAP kinase signaling cascade in 293 and U87 MG cells (18,26). However, the correlations between CHI3L2 expression and clinicopathological features, its association with tumor-infiltrating immune cells, the prognostic value of CHI3L2, and its other functions in gliomas were still unknown. Our study showed that CHI3L2 is expressed in tumor cells and macrophage cells in glioma tissues and is particularly up-regulated in GBM and IDH wild-type gliomas. The Kaplan-Meier curves reveal that higher CHI3L2 expression levels correlated with shorter overall survival in diffusely infiltrating glioma, lower-grade glioma, and IDH wild-type gliomas. High CHI3L2 expression indicates a poor prognosis for glioma patients, regardless of whether the MGMT promoter is methylated or adjuvant therapy has been received. The Cox proportional hazards regression model indicates that CHI3L2 expression in tumor cells is an independent prognostic indicator of glioma. The TCGA and CGGA datasets further confirmed our findings.
However, it should be pointed out that no significant association between high CHI3L2 expression and poor prognosis in GBM was found in our cohort, which differs from the results in the TCGA and CGGA datasets, where the relationship between high CHI3L2 mRNA expression and short overall survival was statistically significant in all subgroups. We believe there are two reasons for the inconsistent results. One probable reason is the difference in detection level: the relative expression of CHI3L2 mRNA was measured by high-throughput sequencing in the TCGA and CGGA datasets, whereas CHI3L2 expression in our samples was assessed at the protein level by immunohistochemistry. Another possible reason is the difference in sample size. Accordingly, we intend to enlarge our sample size in a follow-up study. Based on these results, we further performed GO and KEGG pathway analyses, which showed that the CHI3L2-related genes are involved in several biological processes, including angiogenesis and the immune and inflammatory responses, and are enriched in several classic signaling pathways, including the cell adhesion molecule and PI3K-Akt signaling pathways. It has been reported that CHI3L2 acts as a powerful monocyte chemotactic factor and angiogenesis-stimulating factor in breast cancer (14). A recent review also reported that CHI3L2 may act as a new target for anti-angiogenic therapy in breast cancer patients (35). The angiogenic function of CHI3L2 may be responsible for the poor prognosis of glioma, which needs further confirmation in follow-up studies. A recent review described the role of cell adhesion molecules (CAMs) in immune responses and the tumor microenvironment: CAMs affect antigen-presenting function and the development and trafficking of regulatory cells into tumors, thus influencing tumor immune escape (36). Our study showed that CHI3L2 is expressed in tumor cells and macrophages in glioma tissues. A previous study suggests that human glioma-infiltrating macrophages have similar functions to CAMs in mediating immune responses (37). It was also reported that CAMs are potential prognostic biomarkers and attractive therapeutic targets for glioblastoma (38). A previous study suggested that PI3K-Akt signaling pathway activation affects, to some extent, the activity of most immune cell types, and that the PI3K-AKT-mTOR pathway plays a role in regulating immunosuppression in the tumor microenvironment (39). We speculate that CHI3L2 may exert immunomodulatory effects through this pathway. Additionally, previous studies have shown that CHI3L1 (the homologous gene of CHI3L2) may act as an immunomodulatory factor affecting the therapeutic efficacy of PI3K/AKT pathway inhibitors in glioblastoma (40). CHI3L1 also plays a key role in inducing immunosuppression and metastasis in breast cancer: CHI3L1 up-regulates the pro-inflammatory mediators CCL2, CXCL2 and MMP-9, all of which contribute to tumor growth and metastasis, and treatment with chitin can significantly reduce these effects (23). These studies provide a reference for further exploring the underlying mechanism of CHI3L2 in immune infiltration. Based on the analysis of the TIMER platform, the correlations between CHI3L2 and immune cell markers imply that CHI3L2 may play a part in immunomodulation in GBM and LGG. Our results suggest that CHI3L2 expression correlates strongly with marker sets including CD86 and CSF1R for monocytes, CCL2, CD68, and IL-10 for TAMs, and CD163, VSIG4, and MS4A4A for M2 macrophages in GBM and LGG.
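The marker-gene correlation analysis summarized above can be approximated outside the TIMER platform. The sketch below computes plain Spearman correlations between CHI3L2 and the listed marker genes on log2-transformed expression values; note that TIMER additionally adjusts for tumor purity, which this simplified version omits. The expression file and its genes-by-samples layout are hypothetical placeholders.

import numpy as np
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical genes-by-samples expression matrix (rows = gene symbols).
expr = pd.read_csv("expression_matrix.csv", index_col=0)
log_expr = np.log2(expr + 1)

markers = {
    "monocyte": ["CD86", "CSF1R"],
    "TAM": ["CCL2", "CD68", "IL10"],
    "M2 macrophage": ["CD163", "VSIG4", "MS4A4A"],
}

for cell_type, genes in markers.items():
    for gene in genes:
        rho, p = spearmanr(log_expr.loc["CHI3L2"], log_expr.loc[gene])
        print(f"CHI3L2 vs {gene} ({cell_type}): rho = {rho:.3f}, p = {p:.2e}")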
A study has shown that purified CHI3L2 strongly induces the migration of freshly isolated human CD14+ monocytes (14). It was reported that CD163 could act as a regulator of the immune response and a potential target to suppress immune escape and restore the function of T-cell populations in gliomas (41). In our experiments, we also found a strong correlation between CHI3L2 and CD163 (Figure S6). Additionally, there were significant correlations between CHI3L2 and markers of Tregs and T cell exhaustion, including TGF-β1, CTLA4, TIM-3, and GZMB. In the tumor microenvironment, TGF-β can act as an immunosuppressive factor and plays an essential part in Treg cell function (42). It was reported that only TGF-β, a key regulatory factor of tumor progression, was able to stimulate CHI3L2 mRNA levels in human macrophages in vitro (43). TGF-β, which can be secreted by both microglial cells and glioma cells, participates in the functional transformation of macrophages into immunosuppressive and pro-invasive phenotypes, which supports tumor growth (44, 45). Macrophages are believed to be activated microglia within the central nervous system. Our data show that CHI3L2 is expressed in tumor cells and macrophages in glioma tissues and has a certain correlation with TGF-β, further suggesting that CHI3L2 may also function to suppress antitumor immune regulation and promote tumor growth. Similarly, CTLA4 and TIM-3 can induce T cell exhaustion through direct interactions with their ligands, leading to impaired T cell activation, inhibition of T cell proliferation, and impaired cytokine release (46). The correlation between CHI3L2 and T cell exhaustion markers indicates that CHI3L2 may play a part in mediating T cell exhaustion. Our flow cytometry results further confirmed that CHI3L2 can induce CD8+ T cell apoptosis, indicating that CHI3L2 has the potential to promote tumor immune escape. However, how the CHI3L2 protein promotes the apoptosis of CD8+ T cells needs to be explored in future research. The limitations of this study are as follows: first, the sample size of gliomas for IHC is limited; second, we did not precisely define the type of CHI3L2+ macrophages; in addition, further experimental investigation and analysis are needed to gain insights into the underlying mechanisms. In conclusion, our study suggests that CHI3L2 may be a promising prognostic biomarker that contributes to poor prognosis in gliomas. CHI3L2 may also play an important role in immunomodulation, suggesting that it may serve as a novel therapeutic target for glioma patients.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by Sun Yat-Sen University Cancer Center. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.

AUTHOR CONTRIBUTIONS

LL and WH designed the study and wrote the manuscript. LL, WH, JH, HD, and LS acquired the data. YY and HD provided help with the IHC tests and flow cytometry analysis. LL, YY, and WH performed the data analysis. WH and JZ reviewed the manuscript. All authors contributed to the article and approved the submitted version.
A novel autism-associated KCNB1 mutation dramatically slows Kv2.1 potassium channel activation, deactivation and inactivation

KCNB1, on human chromosome 20q13.3, encodes the alpha subunit of the Kv2.1 voltage-gated potassium channel. Kv2.1 is ubiquitously expressed throughout the brain, including in hippocampal and pyramidal neurons, and is critical in controlling neuronal excitability. Human KCNB1 mutations are known to cause global developmental delay or plateauing, epilepsy, and behavioral disorders. Here, we report a sibling pair with developmental delay, absence seizures, autism spectrum disorder, hypotonia, and dysmorphic features. Whole exome sequencing revealed a heterozygous variant of uncertain significance (c.342 C>A), p.(S114R) in KCNB1, encoding a serine-to-arginine substitution (S114R) in the N-terminal cytoplasmic region of Kv2.1. The siblings' father demonstrated autistic features and was determined to be an obligate KCNB1 c.342 C>A carrier based on familial genetic testing results. Functional investigation of Kv2.1-S114R using cellular electrophysiology revealed slowing of channel activation, deactivation, and inactivation, resulting in increased net current after longer membrane depolarizations. To our knowledge, this is the first study of its kind that compares the presentation of siblings each with a KCNB1 disorder. Our study demonstrates that Kv2.1-S114R has profound cellular and phenotypic consequences. Understanding the mechanisms underlying KCNB1-linked disorders aids clinicians in diagnosis and treatment and provides potential therapeutic avenues to pursue.

Introduction

Developmental encephalopathies constitute a heterogeneous group of neurodevelopmental disorders (Bar et al., 2020a,b). Most patients with developmental encephalopathies are diagnosed during early childhood. Symptoms persist throughout life and can include social, cognitive, motor, language, and behavioral impairments. The concept of "developmental and epileptic encephalopathy" (DEE) refers to the frequently associated epileptiform activity that can contribute to developmental plateauing or regression (Scheffer et al., 2017). Recent studies have highlighted the role of ion channels in the pathogenesis of DEEs (Wang et al., 2017; Raga et al., 2021). The KCNB1 gene, on chromosome 20q13.3, encodes the alpha subunit of the Kv2.1 voltage-gated potassium channel. Voltage-gated potassium (Kv) channels form the largest family of ion channels in the human genome and are ubiquitously expressed throughout the human body. Kv channels are critical for regulating various excitable and non-excitable physiological processes, including skeletal and cardiac muscle contraction, nervous signaling, neurotransmitter and hormone release, and cell proliferation (Abbott, 2020).

Heterozygous, pathogenic variants in the KCNB1 gene can contribute to a diverse phenotype of neurodevelopmental disorders, ranging from DEEs to global developmental delay with or without epileptic activity (Bar et al., 2020a,b). If present, epileptiform activity typically occurs during infancy or childhood and is often unresponsive to antiepileptic treatment. Features of DEEs can include photosensitivity, sleep activation abnormalities on EEG, language and speech difficulties, behavioral problems, hypotonia, spasticity, and ataxia. Abnormalities in magnetic resonance imaging (MRI), including atrophy and nonspecific periventricular white matter abnormalities, have been described in some individuals (de Kovel et al., 2017; Bar et al., 2020a,b).
To date, 55 KCNB1 mutations have been reported in patients with encephalopathic epilepsy, infantile epilepsy, autism, and neurodevelopmental disorders, and these are located throughout Kv2.1 (Figure 1B) (de Kovel et al., 2017; Bar et al., 2020a; Xiong et al., 2022). However, only two mutations have previously been discovered in the N-terminal cytoplasmic region, P17T and E43G (Bar et al., 2020a; Veale et al., 2022). Here, we report a KCNB1 sequence variant encoding a substitution in the N-terminus of Kv2.1, which we discovered in a male/female sibling pair with neurological disorders including autism, absence seizures and developmental delay. Functional characterization revealed an unexpected and complex perturbation of function in the mutant channel.

Human genome sequencing

We received a signed case report consent form from the legal guardian of the children. Both siblings had whole exome sequencing performed at GeneDx (Gaithersburg, MD, United States) using paired-end reads on an Illumina platform. Sequence reads were aligned to human genome build GRCh37/UCSC hg19. Data were filtered using GeneDx's custom analysis tool (XomeAnalyzer). The variant was reported as a variant of uncertain significance in accordance with the American College of Medical Genetics and Genomics (ACMG) criteria, based on transcript NM_004975.2 (Richards et al., 2015).

Preparation of channel subunit cRNA and Xenopus laevis oocyte injection

cDNA encoding human KCNB1 was sub-cloned by Genscript (Piscataway, NJ, United States) into a Xenopus expression vector (pMAX) incorporating Xenopus laevis β-globin 5' and 3' UTRs flanking the coding region to enhance translation and cRNA stability. The mutant KCNB1 construct was generated by Genscript and subcloned into pMAX as above. cRNA transcripts were generated by in vitro transcription using the T7 mMessage mMachine kit (Thermo Fisher Scientific, Waltham, MA, United States) according to the manufacturer's instructions.

Two-electrode voltage clamp

TEVC was conducted at room temperature with an OC-725C amplifier (Warner Instruments, Hamden, CT, United States) and pClamp10 software (Molecular Devices, Sunnyvale, CA, United States) 24 h after cRNA injection. Oocytes, in a small-volume oocyte bath (Warner), were viewed with a dissection microscope for cellular electrophysiology. Extracellular bath solution (in mM): 96 NaCl, 4 KCl, 1 MgCl2, 0.3 CaCl2, and 10 HEPES, adjusted to pH 7.6 with Tris base. Solutions were introduced into the oocyte recording bath by gravity perfusion at a constant flow of 1 mL per minute. Pipettes (1-2 MΩ resistance) were filled with 3 M KCl. Current-voltage relationships were measured in response to voltage pulses between −80 mV and +40 mV at 10 mV intervals from a holding potential of −80 mV. Conductance was measured from tail currents generated at −40 mV immediately following the prepulse, normalized to the maximal current, plotted as a function of voltage, and fitted with a single Boltzmann function:

g = A2 + (A1 − A2) / (1 + exp((V − V1/2) / Vs))   (Eq. 1)

where g is the normalized tail conductance, A1 is the initial value at −∞, A2 is the final value at +∞, V1/2 is the half-maximal voltage of activation and Vs is the slope factor. We fitted activation and deactivation kinetics with single exponential functions.
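For illustration, the Boltzmann fit of Eq. 1 can be reproduced with standard curve-fitting tools. The following is a minimal Python sketch using scipy; the voltage and conductance arrays are illustrative placeholder values, not recorded data.

import numpy as np
from scipy.optimize import curve_fit

def boltzmann(V, A1, A2, V_half, Vs):
    # Eq. 1: normalized tail conductance as a function of prepulse voltage.
    return A2 + (A1 - A2) / (1.0 + np.exp((V - V_half) / Vs))

# Illustrative prepulse voltages (mV) and normalized tail conductances.
V = np.arange(-80, 50, 10, dtype=float)
g = np.array([0.00, 0.00, 0.01, 0.03, 0.08, 0.20,
              0.42, 0.65, 0.82, 0.92, 0.97, 0.99, 1.00])

popt, _ = curve_fit(boltzmann, V, g, p0=[0.0, 1.0, -10.0, 10.0])
A1, A2, V_half, Vs = popt
print(f"V1/2 = {V_half:.1f} mV, slope factor = {Vs:.1f} mV")

Single-exponential fits of the activation and deactivation traces follow the same pattern, with f(t) = A·exp(−t/τ) + C fitted to each trace to extract the time constant τ.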
Activation and deactivation kinetics

Activation kinetics were measured in response to voltage pulses between −10 mV and +40 mV at 10 mV intervals from a holding potential of −80 mV. Deactivation kinetics were measured between −120 mV and −60 mV in 10 mV intervals immediately following a +40 mV prepulse from a holding potential of −80 mV. Activation and deactivation traces were each fitted with a single exponential function.

Inactivation and recovery from inactivation

Inactivation was measured in response to a single voltage pulse at +40 mV for 10 s and 20 s from a holding potential of −80 mV. The percentage of inactivation was derived from the difference between the peak and plateau of the current at +40 mV. The fraction of non-inactivated channels was measured in response to 10 s voltage pulses between −80 mV and +10 mV immediately prior to a +40 mV voltage pulse from a holding potential of −80 mV. The fraction of non-inactivated channels was measured from the +40 mV voltage pulse, normalized to the maximal current, and fitted with a single Boltzmann function (Eq. 1). Recovery from inactivation was measured in response to consecutive 5 s +40 mV pulses at interpulse intervals of increasing duration from 0.01 to 30 s. The peaks of the subsequent pulses were then divided by that of the initial pulse and plotted as a function of the interpulse interval. All data were analyzed using Clampfit (Molecular Devices) and GraphPad Prism software (GraphPad, San Diego, CA, United States).

Statistics and reproducibility

All values are expressed as mean ± SEM. At least 2 batches of oocytes were used per experiment. Multiple-comparison statistics were conducted using a one-way ANOVA with Dunnett's test for multiple comparisons. Comparisons of two groups were conducted using a t-test; all p-values were two-sided. All electrophysiological data and statistics are summarized in the Supplementary material.

Clinical phenotype

The sibling pair consisted of a 10-year-old boy (patient A) and a 6-year-old girl (patient B). Family history was significant for learning disability, dyslexia, autistic features and red-green color blindness in their father, and developmental delays and major depressive disorder in their mother. Their maternal grandmother had learning disability, syncopal episodes and severe muscle weakness. The parents are non-consanguineous.

Patient A was born after an uncomplicated pregnancy at 41 weeks gestation to a 23-year-old G1P1 mother via C-section for prolonged and worsening decelerations. His mother denied any drug, cigarette, or alcohol use during pregnancy, and no prenatal genetic testing was performed. His birthweight was 3.015 kg (19th percentile) and head circumference was 35.50 cm (43rd percentile). His Apgar scores were 2, 5, 6 and 8 at 1, 5, 10 and 15 min, respectively; he was initially limp, apneic, and bradycardic, requiring a fluid bolus, positive pressure ventilation and CPAP for 30 min immediately postpartum. He was transferred to the neonatal intensive care unit for concerns of hypoxic ischemic encephalopathy and respiratory distress. At 4 days of age there was concern about the infant's swallowing capabilities, as he had difficulty breathing and feeding via nasogastric tube simultaneously. No abnormalities were noted on a swallowing study, and brain MRI at 5 days of life was also unremarkable. He was discharged from the NICU on normal feeds at 14 days of life, having passed newborn hearing screening; subsequent newborn metabolic screening was also normal.
From age 1-2 years, he demonstrated delayed achievement of developmental milestones, walking at 24 months and speaking at 36 months. Physical, occupational, and speech therapy were initiated. Multiple physical examinations by various providers over the next few years revealed jerky, poorly controlled upper extremity movements and an ataxic gait, leading to a diagnosis of cerebral palsy.

Brain magnetic resonance imaging (MRI) at 7 years of age again revealed no abnormalities, and an EEG performed at the same time showed mild encephalopathy. At 3 years of age, the patient had begun demonstrating repetitive speech and behaviors along with limited social skills. He was diagnosed with autism spectrum disorder. A hearing test along with urine organic acids and plasma amino acids were all normal. A Comprehensive Neuromuscular Disorders Panel (Invitae) covering 211 genes was ordered. This panel revealed a heterozygous VUS in HNRNPDL (c.259 C>T/p.Arg87Cys), associated with autosomal dominant limb-girdle muscular dystrophy type 1G (LGMD1G), a phenotype inconsistent with patient A's presentation; the variant has a gnomAD frequency of 0.007%, in silico predictions of its effect on protein structure/function are ambiguous, and there are no reports of this variant being causative of LGMD1G in the literature.

At 10 years of age a genetic evaluation was performed for the combined complaints of DD/ID, ASD, absence seizures, hypotonia, muscle cramping (upper extremity > lower extremity), anxiety, sleep disturbances and visual hallucinations. Ophthalmologic examination revealed myopia, astigmatism and red-color deficiency in both eyes. There was no history of developmental regression, persistent vomiting, rhabdomyolysis, or unexplained coma, although possible lethargy after carbohydrate or protein intake was reported. Morphometrics were normal, including head circumference. Physical examination revealed an oblong-shaped face, wide eyebrows with lateral thinning, moderately thin and downsloping palpebral fissures with mild hypertelorism, a wide nasal tip, a low and wide columella, slight micrognathia, wide misaligned teeth (Figure 1C), and a solitary café-au-lait macule on the lower right leg. Neurologic examination was normal except that he was unable to walk on his heels or toes. Whole exome sequence analysis identified a heterozygous nonmaternal variant of uncertain significance (c.342 C>A, p.(S114R)) in the KCNB1 gene; mitochondrial DNA analysis was normal. Current medications are hydroxyzine 10 mg at bedtime and trazodone 50 mg once daily. He is sociable and remains in a regular classroom 80% of the time.

The proband's 6-year-old sister (patient B) was born at 36.5 weeks via C-section. Her Apgar scores were 9 at 1 min and 9 at 5 min. Her birth weight was 2.710 kg (10th percentile). The child was admitted to the neonatal intensive care unit due to respiratory distress of the newborn, requiring one dose of surfactant with subsequent weaning off CPAP and supplemental O2 after 10 days. She experienced anemia of prematurity and feeding difficulties, along with bilateral hyperflexion of the hips. She passed newborn hearing screening, subsequent newborn metabolic screening was also normal, and she was discharged from the NICU at 18 days of life.
Patient B demonstrated delayed achievement of developmental milestones: at 12 months, she refused to walk or crawl, was unable to pull to sit/stand, and had persistent head lag. She rolled over at 13 months of age, sat unassisted at 15 months of age, and started crawling at 16-17 months of age. At 14 months, she demonstrated diffuse muscle hypotonia, lack of coordination, limited food acceptance, and slowed speech and language development, having only three words at 20 months and being unable to follow simple commands. She passed an audiology exam at 12 months. At 17 months, she was diagnosed with global developmental delay with no signs of regression.

At 17 months she was formally evaluated for hip dysplasia and found to have no abnormalities. The patient continued to have severe hypotonia; at 20 months, she was noted to have muscular tone at the level of a 10-month-old. A swallowing study performed at this time for evaluation of dysphagia revealed mild deficits in oropharyngeal swallowing due to reduced motor skills. A brain MRI performed at 19 months of age was normal.

At 23 months, patient B presented with behavioral challenges. Her mother described spells characterized by behavioral pause followed by screaming and shaking of the hands, frequent tantrums, staring spells accompanied by eye flutter, loss of head tone, and infrequent myoclonus without clonic, tonic, or clonic/tonic activity, all suggestive of absence seizures. Subsequently, at 2 years of age, she was diagnosed with a gait abnormality and was formally diagnosed with mild cerebral palsy in addition to autism spectrum disorder. At 34 months of age an EEG was performed, which failed to demonstrate ictal activity but showed abnormal mild, diffuse background slowing.

Patient B also demonstrated sleep difficulties, sleeping only for short periods throughout the night. Further, at 4 years of age patient B presented with complaints of intermittent severe muscle cramps in her legs. Muscle biopsy revealed no abnormalities. It was concluded that the muscle cramping was not a myopathy and likely had a central nervous system origin. The patient's guardian also reported drop attacks in which the patient would suddenly fall without warning. At 4 years and 7 months of age a Comprehensive Neuromuscular Disorders Panel (Invitae) covering 211 genes was ordered. A heterozygous pathogenic variant in ACADM was detected, which when biallelically inherited causes medium-chain acyl-CoA dehydrogenase deficiency. Repeat EEG revealed abnormal bifrontal spike waves, consistent with stereotypies in the setting of ASD.

Genetic evaluation at 5 years and 7 months of age was performed for the indications of DD/ID, ASD, and seizure disorder. No history of regression or symptoms consistent with metabolic crises was noted. Morphometrics were normal, with head circumference at the 95th percentile. Dysmorphic features on physical examination included a prominent forehead with mild bossing, mild hypertelorism, a wide and fleshy nose, a low and wide columella, widely spaced large teeth, and difficulty walking with low tone in the upper and lower extremities (Figure 1D).

Whole exome sequence analysis revealed the same heterozygous nonmaternal variant of uncertain significance (c.342 C>A), p.(S114R), in the KCNB1 gene found in her older sibling, patient A. Patient B is currently taking clonidine 0.1 mg daily, hydroxyzine 10 mg at bedtime, lamotrigine 25 mg two tablets every 12 h, levetiracetam 100 mg/mL, 3 mL every 12 h, and tizanidine 2 mg half a tablet every 12 h.
In summary, patient A had a variant of unknown significance in a gene causing limb-girdle muscular dystrophy identified on a neuromuscular panel done prior to exome sequencing. This variant was not verified on exome sequencing, and the patient does not manifest symptoms consistent with this disease. Patient B had a heterozygous variant of unknown significance in a gene causing medium-chain acyl-CoA dehydrogenase deficiency (MCAD) identified on a neuromuscular panel performed prior to exome sequencing. This disease manifests when homozygous or compound heterozygous variants are identified in a patient; her newborn screen was negative for increases in C8, C6, or C10, and this variant was not verified on exome sequencing. Thus, the variant in KCNB1 (Kv2.1) identified in these siblings via exome sequencing was the best explanation for their overlapping neurologic phenotype, and we pursued functional characterization of the KCNB1 variant.

Functional characterization of Kv2.1-S114R reveals slowing of activation and deactivation compared to wild-type Kv2.1

Given the extensive neurological disruption associated with Kv2.1-S114R, we conducted cellular electrophysiological analysis to determine potential effects on channel function. S114R altered neither the peak current magnitude nor the voltage dependence of activation (Figures 2E,F), nor the mean resting membrane potential (EM) of unclamped oocytes (Figure 2G). Next, we investigated whether S114R altered the potassium selectivity of Kv2.1, thereby shifting the reversal potential (EREV). Some previously characterized Kv2.1 mutants have been shown to alter ion selectivity and EREV (Torkamani et al., 2014; Thiffault et al., 2015), while the N-terminal domain of, e.g., TREK-1 has been shown to play a pivotal role in ion selectivity (Veale et al., 2014). However, the S114R mutation had no effect on EREV, which was comparable to that of wild type (Figure 2H). Interestingly, S114R did alter Kv2.1 activation kinetics, slowing activation more than threefold at some voltages (Figure 2I), with the greatest effects observed between −10 mV and 20 mV (Figure 2J). Similarly, S114R slowed deactivation between −110 and −60 mV, by greater than twofold at some voltages (Figures 2K,L).
S114R slows Kv2.1 gating processes

Compared to wild-type, Kv2.1-S114R exhibited negligible steady-state inactivation across all voltages recorded from the I/V family (Figure 2B). Thus, we pursued this effect further by employing single-pulse protocols at +40 mV of increasing durations. At 1-, 10-, and 20-s pulse durations, wild-type Kv2.1 exhibited characteristic slow steady-state inactivation. However, S114R showed no inactivation at 1 and 10 s at +40 mV (Figure 3A; left and middle), and only modest inactivation at 20 s (Figure 3A; right). To probe this further, we investigated whether S114R could alter the voltage dependence of steady-state inactivation by using a protocol with 10 s depolarizing prepulses from −80 mV to +10 mV immediately followed by a +40 mV pulse (Figure 3B). The V0.5 of steady-state inactivation for wild-type Kv2.1 was −24 mV, which meant that at a holding potential of, e.g., −20 mV, ~40% of channels were still available for activation. Strikingly, S114R exhibited no discernible inactivation across all holding potentials, meaning no channels entered the inactivated state and essentially all were available for activation (Figure 3C). Next, we measured the recovery from inactivation, which is an indicator of the channel refractory period, by using a double-pulse protocol whereby consecutive +40 mV voltage pulses of 5 s were separated by increasing intervals from 0.01 to 30 s (Figure 3D). S114R had an increased rate of recovery from inactivation between 0.01 and 3 s compared to wild-type Kv2.1, but their recovery rates were similar between 10 and 30 s (Figures 3E,F). The multiplex effects of the S114R variant on Kv2.1 channel function result in context-dependent effects on overall function, i.e., under some circumstances gain of function would be observed; under other circumstances, loss of function.

Heterozygous mimic channel function

The above studies were conducted on homozygous-mimicking all-wild-type or all-S114R Kv2.1 channels to determine the mechanistic basis for altered channel activity in the mutant. However, patients A and B each had a single wild-type KCNB1 allele and a single mutant KCNB1 allele. Therefore, we compared the function of heterozygous-mimicking Kv2.1/Kv2.1-S114R channels by injecting oocytes 50/50 with wild-type and S114R Kv2.1 cRNA (Figure 4A). Compared to wild-type, heterozygous mutant channels had similar peak current magnitude and activation voltage dependence, although they appeared to result in a moderately depolarized EM (Figures 4B-D). The heterozygous channel activation rate was intermediate between those of homozygous mutant and wild-type channels, while the heterozygous channel deactivation rate was closer to that of homozygous wild-type (Figures 4E-G). The heterozygous channel inactivation rate was intermediate between those of homozygous wild-type and mutant (Figures 4H-J), as was channel availability following inactivating depolarizing pulses (Figures 4K,L).
Discussion

We report the discovery, clinical significance, and functional characterization of a novel pathogenic KCNB1 mutation (p.S114R) in the cytoplasmic N-terminal region of Kv2.1. Prior to this study, 55 patients with KCNB1 mutations had been studied. Clinical workups found that 85% of the patients examined had developed epilepsy and all had developmental delays, of varying degrees of severity (de Kovel et al., 2017; Bar et al., 2020a; Xiong et al., 2022). Here, the siblings' presentations, including dysmorphic features and developmental problems, were highly suggestive of an underlying genetic disorder. Genetic testing revealed that both patients possessed the same mutation, a heterozygous variant of previously uncertain significance (c.342 C>A), p.(S114R), in the KCNB1 gene. Thus, the siblings were diagnosed with a KCNB1-related disorder. As indicated by Bar et al. (2020a), the siblings' presentations are consistent with those of other known KCNB1 patients. Aside from the nervous system, Kv2.1 is expressed in the GI tract, including pancreatic β-cells, where it regulates changes in cellular excitability, and insulin secretion, in response to glucose (MacDonald et al., 2002). One study reported that a KCNB1 SNP in the 3' untranslated region (rs1051295) is associated with decreased insulin sensitivity, increased triglycerides and increased waist/hip ratio in the Chinese Han population, which can increase the risk for type 2 diabetes; this was not observed in the individuals in the current study (Zhang et al., 2013). KCNB1 rs1051295 is also associated with risk of colon and rectal cancer, with an unknown mechanistic basis (Barbirou et al., 2020). We are not aware of KCNB1 coding region variants associated with gastrointestinal tract disorders, and none were noted in the clinical workups in the present study.

The absence of the variant in the patients' mother and its presence in both siblings suggest that the biological father was an obligate KCNB1 mutant carrier, as the gene is inherited in a dominant manner, though gonadal mosaicism cannot be ruled out. The patients' biological father was reported to have no health issues aside from learning disability, dyslexia and "autistic features." As reported by Uctepe et al. (2022), parents carrying a relatively mild KCNB1-related disease can pass on a more severe phenotype to their children. Interestingly, in the current case, both siblings presented a more severe phenotype than their biological father. Patient B's presentation was overall more severe than her brother's. In addition to frequent tantrums, the child reports severe, persistent muscle cramps. It was concluded that these cramps are of central nervous system origin. The child is currently prescribed baclofen (5 mg) to reduce muscle cramping. Both patients are prescribed antiepileptics regularly. Patient A suffers from anxiety and depression and is treated accordingly. As mentioned, patient A has friends in school, performs well, and is in a regular classroom 80% of the time. Patient B is only 6 years old and not yet in a regular classroom, so it is not yet possible to evaluate her performance and behavior in an academic setting.
There are limitations to this study. It is technically possible that the male patient's symptoms are a consequence of hypoxia during the newborn period, despite a normal MRI. The female patient also carries a variant of undetermined significance in acyl-Coenzyme A dehydrogenase (ACADM). It is plausible that some of her symptoms are due to this variant existing in tandem with another, undetected variant in trans in the same gene, rather than the KCNB1 mutation. Further, the mother also reported developmental delays and mental health concerns despite not carrying the KCNB1 gene variant. Additionally, there is a maternal uncle with seizures, indicating a possible confounding disorder in the children.

Given the complex genetic background, we functionally characterized the effects of the S114R substitution on Kv2.1 function. S114R markedly slowed activation and deactivation and all but abolished steady-state inactivation at physiologically relevant durations. In both the homozygous and heterozygous channel-mimicking conditions, S114R altered neither the peak Kv2.1 current nor the voltage dependence of activation, but greatly slowed all channel gating processes. We utilized the reductionist oocyte expression system for this initial study to evaluate the effects of the S114R variant on Kv2.1 ion-conducting properties and gating kinetics. Oocytes are particularly well suited for understanding the biophysical effects of ion channel variants in both the homozygous and heterozygous conditions, because each oocyte is injected with a precise amount of channel cRNA, the heterologously expressed currents are much larger than endogenous currents, and two-electrode voltage clamp recordings facilitate long recordings with challenging voltage protocols. Nevertheless, oocytes do not recapitulate the neuronal environment that is important for fully shaping Kv2.1 function in the brain. Future work could explore, for example, potential effects on the non-conducting role of Kv2.1 in integrin-K+ channel complexes, considered important for normal neuronal migration, proliferation, survival and death (Forzisi and Sesti, 2022) and implicated in the abnormal neocortical development observed in KCNB1 developmental epileptic encephalopathy (Bortolami et al., 2023).

Recently, another N-terminal KCNB1 gain-of-function mutation, P17T, was shown to enhance currents as well as right-shift steady-state inactivation (Veale et al., 2022). The authors proposed that this right-shift in steady-state inactivation is in part the mechanism underlying the increase in current density, with more channels available for activation at depolarized voltages. S114R essentially abolishes steady-state inactivation in the homozygous condition, suggesting all channels should be available for activation; we observed no increase in current magnitude during a standard voltage family protocol, but we did observe higher sustained current during repetitive pulses in a voltage protocol designed to quantify the number of channels available after inactivating pulses (Figure 3D).
Previously, it was shown that deleting the first 139 amino acids of the N-terminus slowed the activation and deactivation kinetics of Kv2.1 and abolished inactivation (VanDongen et al., 1990), highly consistent with what we observed for S114R and suggesting this residue is pivotal for all three gating types. Interestingly, the functional changes brought about by the above-referenced N-terminal truncation could be reversed by deleting an additional 318 residues from the C-terminal end, suggesting both domains are important for modulating Kv2.1 inactivation (VanDongen et al., 1990). Additionally, a regulatory domain (NRD) consisting of 59 amino acids in the N-terminus of Kv2.1 was previously found to be important in gating. Replacement of the NRD in Kv2.1 with the same region of Kv2.3 slowed activation and deactivation and markedly slowed inactivation (Chiara et al., 1999). We did not see evidence for effects on Kv2.1 trafficking (there was no change in peak current magnitude), and accordingly, previous studies found a role for C-terminal, not N-terminal, motifs in Kv2.1 localization and trafficking (Lim et al., 2000; Jensen et al., 2017). Kv2.1 is a major molecular correlate of the delayed rectifier potassium channels in cortical and hippocampal pyramidal neurons (Guan et al., 2007). The characteristic U-type inactivation of Kv2.1 is thought to be important during repetitive stimulation of neurons, where it can dictate channel availability more so than P/C-type inactivation (Cheng et al., 2011). The effects of the S114R variant are complex. By slowing activation, it decreases current at the earliest time points following a depolarization but does not affect peak current across a longer depolarization (Figure 2I versus Figure 2C). By slowing deactivation, the variant increases tail currents observable at hyperpolarized potentials following a depolarization (Figure 2K). Finally, by dramatically slowing inactivation, the S114R variant increases the peak current sustainable across a train of pulses because it reduces accumulation of channels in the inactivated state (Figures 3D-F). It is therefore difficult to categorize S114R as either a gain-of-function or a loss-of-function variant; the effect is highly context dependent. Because of its multiplex effects on various components of Kv2.1 gating, it is difficult to predict whether inhibitors or openers would best treat the Kv2.1-S114R-associated condition. Other than genome editing or similar approaches to correct the actual mutation, small molecules that promote channel inactivation and/or activation would be desirable, as the heterozygous Kv2.1/Kv2.1-S114R channels show deactivation kinetics similar to wild-type Kv2.1, whereas activation and inactivation are much slower.

To our knowledge, this is the first study of its kind that compares the presentation of a KCNB1 disorder in a sibling pair. The information gathered from this study could help to elucidate the symptoms of KCNB1-related disorder, aiding clinicians in diagnosing and treating KCNB1 encephalopathy patients.
FIGURE 1 Kv2.1-S114R is associated with a multifaceted KCNB1 encephalopathy. (A) Cartoon depicting the expression of Kv2.1 channels in the soma and dendrites of neurons. (B) Cartoon depicting the topology of Kv2.1 channels, the location of previously characterized mutations discovered in epilepsy and neurodevelopmental disorder patients, and the location of S114 in the N-terminal domain. (C) Frontal and sagittal photos of patient A demonstrate an oblong-shaped face, wide eyebrows with lateral thinning, moderately thin and downsloping palpebral fissures with mild hypertelorism, wide nasal tip, low and wide columella, slight micrognathia, and wide malaligned teeth. (D) Frontal and sagittal photos of patient B demonstrate a prominent forehead with mild bossing, mild hypertelorism, wide and fleshy nose, low and wide columella, and widely spaced large teeth.

FIGURE 3 S114R greatly diminishes Kv2.1 inactivation. (A) Mean traces for wild-type (black) and S114R (blue) Kv2.1 channels, pulsed to +40 mV for 1 s, 10 s, and 20 s; voltage protocol, upper inset. Scale bars, lower left inset (n = 10-31). (B) Mean traces for wild-type (black) and S114R (blue) Kv2.1 channels, expressed in oocytes, measuring the fraction of non-inactivated channels in response to depolarizing voltage pulses. Scale bars, lower left inset (n = 10). (C) Mean proportion of remaining non-inactivated current calculated from the circled portion of the traces as in B (n = 10). (D) Mean traces for wild-type (black) and S114R (blue) Kv2.1 channels, expressed in oocytes, measuring residual current recovery. Arrows indicate peak current. Scale bars, lower left inset (n = 12). (E) Residual current and recovery for wild-type (black) and S114R (blue) Kv2.1 channels for time points 0.01 s to 30 s versus peak current at +40 mV (n = 12). (F) Residual current and recovery for time points 0.01 s to 1 s, as in E (n = 12).
Muraymycin nucleoside-peptide antibiotics: uridine-derived natural products as lead structures for the development of novel antibacterial agents

Muraymycins are a promising class of antimicrobial natural products. These uridine-derived nucleoside-peptide antibiotics inhibit the bacterial membrane protein translocase I (MraY), a key enzyme in the intracellular part of peptidoglycan biosynthesis. This review describes the structures of naturally occurring muraymycins, their mode of action, synthetic access to muraymycins and their analogues, some structure-activity relationship (SAR) studies and first insights into muraymycin biosynthesis. It therefore provides an overview on the current state of research, as well as an outlook on possible future developments in this field.

Introduction

The treatment of infectious diseases caused by bacteria is a severe issue. With multiresistant bacterial strains rendering well-established therapeutic procedures ineffective, the exploration of novel antimicrobial agents is of growing significance. The discovery of penicillin [1] and the proof of its in vivo efficacy [2] marked the starting point for the research on antibacterial drugs during the so-called "golden age" of antibiotics. Despite the early occurrence of first resistances [3][4][5], an innovation gap followed from the 1960s onwards, during which only few antibiotics were introduced into the market. Most of them were modifications of established substances already in clinical use. Current and future developments will have to consider these improved 2nd and 3rd generation antibiotics [6] alongside the search for completely unknown structures. For such novel agents, natural products appear to be a promising source [7][8][9].

Bacteria deploy different mechanisms to escape the toxic effect of an antibacterial drug [10][11][12]. These include the structural modification and degradation of a drug, as reported for aminoglycoside-modifying proteins [13], and alteration of the drug target, as can be found in macrolide-resistant bacteria that contain mutations in the bacterial ribosome [14]. Further mechanisms are an increased efflux [15] and a change in permeability of the cell wall [16,17]. Due to the evolutionary pressure exerted by antibiotics, bacteria featuring the aforementioned mutations survive, proliferate and may even develop resistances against multiple drug classes. Excessive application of antibiotics fuels the emergence of multiresistant strains such as hospital- and community-associated methicillin-resistant Staphylococcus aureus (MRSA) [18,19] and vancomycin-resistant Enterococcus (VRE) [20]. This development raises the demand for antibiotics exploiting yet unused modes of action. Potential targets within bacteria include peptidoglycan biosynthesis, protein biosynthesis, DNA and RNA replication and folate metabolism [21]. Promising candidates meeting the requirements for new drugs are nucleoside antibiotics, i.e., uridine-derived compounds that address the enzyme translocase I (MraY) as a novel target, thereby interfering with a membrane-associated intracellular step of peptidoglycan biosynthesis. This review will focus on muraymycins as a subclass of nucleoside antibiotics, covering their mode of action, synthetic approaches as well as SAR studies on several derivatives. Furthermore, first insights into the biosynthesis of these Streptomyces-produced secondary metabolites will be discussed.
Structures of naturally occurring muraymycins

The muraymycins were first isolated in 2002 from a broth of a Streptomyces sp. [22]. McDonald et al. discovered and characterised 19 naturally occurring muraymycins (Figure 1). These compounds belong to the family of nucleoside antibiotics, which have a uridine-derived core structure in common. Their antibiotic potency is based on the inhibition of MraY, thereby blocking a membrane-associated intracellular step of bacterial cell-wall biosynthesis. The structure elucidation was carried out using one- and two-dimensional NMR experiments as well as FT mass spectrometry [22]. Muraymycins have a glycyl-uridine motif, which is connected via an aminopropyl linker to a urea peptide moiety consisting of L-leucine or L-hydroxyleucine, L-epicapreomycidine (a non-proteinogenic cyclic arginine derivative) and L-valine. The uridine structure is glycosylated in its 5'-position with an aminoribose unit, and in some cases a lipophilic side chain is attached to the hydroxyleucine residue. The 19 compounds are divided into four different series (A-D), which mainly vary in the leucine residue and the lipophilic side chain or the amino sugar (Figure 1). The aminoribose is missing in muraymycins A5 and C4, which may possibly be hydrolysis products. Series A and B have lipophilic side chains of varying chain lengths, which are either ω-functionalised with a guanidino or hydroxyguanidino function in the case of series A, or unfunctionalised but terminally branched in the case of series B. Muraymycins of series C contain unfunctionalised L-hydroxyleucine, while in series D proteinogenic L-leucine occurs instead. Muraymycin A1 is one of the most active members of this family and shows good activity mainly against Gram-positive (Staphylococcus MIC: 2-16 µg/mL, Enterococcus MIC: 16-64 µg/mL) but also a few Gram-negative bacteria (E. coli MIC: down to 0.03 µg/mL). Since the activity against wild-type E. coli is clearly lower (MIC > 128 µg/mL) [22], it is assumed that this might be an effect resulting from low membrane permeability.

There are other naturally occurring nucleoside antibiotics which address the same biological target, thereby inhibiting peptidoglycan biosynthesis. Figure 2 shows the structures of selected other classes of nucleoside antibiotics, with structural similarities highlighted. A broad overview of antimicrobial nucleoside antibiotics blocking peptidoglycan biosynthesis is given by Bugg et al. in two review articles [23,24] and by Ichikawa et al. in a recent review [25]. Representing the first discovered nucleoside antibiotics, the tunicamycins were isolated in 1971 from Streptomyces lysosuperficus nov. sp. by Takatsuki and Tamura et al. [26][27][28]. They contain a uridine moiety, two O-glycosidically linked sugars, the so-called tunicamine, and a fatty acid moiety, which typically is terminally branched and unsaturated. Two closely related classes of nucleoside antibiotics were isolated later and named streptovirudins (isolated in 1975 from Streptomyces griseoflavus subsp. thuringiensis [29][30][31]) and corynetoxins (isolated in 1981 from Corynebacterium rathayi [32]). These classes have merely the uracil nucleoside core structure in common with the muraymycins, and their terminally branched lipophilic side chains resemble the acyl moiety in muraymycins of series B. Capuramycin, a nucleoside antibiotic isolated in 1986 from Streptomyces griseus, shares the uracil-derived nucleoside moiety with the muraymycins [33,34].
The antibiotic FR-900493, which is structurally closely related to the muraymycins, was isolated from Bacillus cereus and characterised in 1990 [35]. In comparison to the muraymycins, only the urea peptide moiety and the lipopeptidyl motif are absent. The mureidomycins [36][37][38] and pacidamycins [39][40][41], both reported in 1989, the napsamycins (1994) [42] and the sansanmycins (2007) [43,44] are structurally closely related. They consist of a 3'-deoxyuridine unit with a unique enamide linkage and the non-proteinogenic N-methyl-2,3-diaminobutyric acid, which branches into two peptide moieties. They differ in the amino acid residues AA2, AA4 and AA5, with AA2 and AA5 being aromatic in all four classes. The amino acid residue AA4 is either methionine for mureidomycins, napsamycins and sansanmycins, or alanine in the case of pacidamycins. Remarkably, these natural products share a urea peptide motif with the muraymycins. They are mainly active against Gram-negative bacteria, which is a noteworthy difference to the muraymycins and other related nucleoside antibiotics. The liposidomycins (isolated in 1985) [45] and the related caprazamycins (isolated in 2003) [46,47] have a unique diazepanone ring and, in the case of the caprazamycins, a permethylated rhamnose residue. They resemble the muraymycins in their uridine-derived core structure, which is also glycosylated in the 5'-position with an aminoribose unit, and they contain a fatty acid moiety as well. Caprazamycins also display noteworthy antimicrobial activity against M. tuberculosis as well as most Gram-positive bacteria (Table 1) [46,48].

All the aforementioned nucleoside antibiotics address the same biological target and most likely have the same mode of action by inhibiting MraY (see below), but their in vitro activity differs significantly. It is important to note that a comprehensive comparison of minimum inhibitory concentrations (MIC values) is difficult because naturally occurring nucleoside antibiotics have been tested against different bacterial strains. However, synthetic analogues of the nucleoside antibiotics listed in Table 1 have been tested against some of the listed bacterial species. It can therefore be assumed that the parent natural products display similar activities even though there are no data available. Furthermore, the activity of a compound against different strains of a bacterial species can vary. Nonetheless, there are certain trends and differences that can be observed. Muraymycin A1 is mainly active against Gram-positive bacteria such as S. aureus or E. faecalis, but also against some Gram-negative E. coli strains [49]. Tunicamycin, capuramycin and FR-900493 only show antimicrobial activity against Gram-positive strains. For mureidomycin C (R5 = Gly, AA2 = AA5 = m-Tyr, AA4 = Met, B = uracil, see Figure 2) as a representative compound, no activity against Gram-positive bacteria was observed, but it displayed pronounced antibacterial activity against P. aeruginosa. This remarkable finding distinguishes the mureidomycins, pacidamycins, sansanmycins and napsamycins from other nucleoside antibiotics. On the other hand, caprazamycin B shows good activity against Gram-positive bacteria, Pseudomonas and M. tuberculosis [48]. The related liposidomycins display good activity against M. phlei, while they are not active against a range of other bacteria [45].
Mode of action

To develop an effective antibiotic, one needs to choose a target that is essential for bacterial survival or growth and offers selectivity, affecting only bacterial cells (without cytotoxicity to human cells). There are mainly four classical target processes for antibiotics: bacterial cell wall biosynthesis, bacterial protein biosynthesis, DNA replication and folate metabolism [21]. Novel approaches that differ from these established modes of action are under investigation, but many new compounds in development still address bacterial cell wall biosynthesis. They are accompanied by a rich variety of prominent antibiotics in clinical use, such as the penicillins [23,50,51].

All bacteria, i.e., Gram-positive and Gram-negative congeners, have a cell wall as part of their cell envelope. While its thickness differs among bacteria (Gram-positive strains usually have a thicker cell wall than Gram-negative ones), the principal molecular structure remains identical: bacterial cell walls consist of peptidoglycan, a heteropolymer with long chains of alternating units of N-acetylmuramic acid (MurNAc) and N-acetylglucosamine (GlcNAc) that are cross-linked through peptide chains attached to the muramic acid sugar (Figure 3) [52]. The biosynthesis of peptidoglycan is illustrated in Figure 4 and has been described in detail in several reviews (e.g., [51,53-57]). It can be divided into three parts: first, the formation of the monomeric building blocks in the cytosol (Figure 4, step A); second, the membrane-bound steps with the attachment to the lipid linker, transformation into a disaccharide and transport to the extracellular side of the membrane (Figure 4, steps B, C); finally, polymerisation to long oligosaccharide chains and cross-linking occur (Figure 4, steps D, E).

In the cytosol, uridine diphosphate-N-acetylglucosamine (UDP-GlcNAc), which is formed from fructose-6-phosphate in four steps, is transformed into UDP-MurNAc-pentapeptide in a number of enzyme-catalysed reactions (Figure 4, step A). The exact composition of the peptide chain varies between organisms. The examples given in Figure 3 are frequently occurring ones, and a more comprehensive list has been reported elsewhere [52]. The membrane-associated steps commence with the transfer of UDP-MurNAc-pentapeptide to the lipid carrier undecaprenyl phosphate, catalysed by translocase I (MraY), to give lipid I (Figure 4, product of step B). The glycosyltransferase MurG attaches a GlcNAc sugar to furnish lipid II (Figure 4, product of step C). This building block is then transported to the extracellular side of the membrane. It is speculated that there might be some kind of 'flippase' involved, but this particular step is still unclear and requires further investigation [55]. On the extracellular side of the membrane, the building blocks are connected by transglycosylases to form long chains (Figure 4, step D) and are then cross-linked by transpeptidases (Figure 4, step E). Both enzymes are members of the family of penicillin-binding proteins [23].

As mentioned above, there are many antibiotics in clinical use that target at least one step of bacterial cell wall biosynthesis. Prominent examples besides the penicillins are cephalosporins, cycloserine, vancomycin, fosfomycin and daptomycin [9]. All of them (except fosfomycin and cycloserine) inhibit late, extracellular steps of cell wall formation.
Thus, there are still many steps not addressed by clinically used drugs, which implies that cell wall biosynthesis still offers promising novel targets for the development of antibiotics with new modes of action. Muraymycins and other nucleoside antibiotics target translocase I (MraY), which represents such a potential novel molecular target [22]. The chemical transformation catalysed by MraY is shown in Figure 5. The cytosolic precursor UDP-MurNAc-pentapeptide is linked to undecaprenyl phosphate, a C55-isoprenoid lipid carrier located in the cellular membrane. With concomitant release of uridine monophosphate (UMP), this furnishes a diphosphate linkage between the two substrates. The reaction is reversible, and MraY accelerates the adjustment of the equilibrium state. Whereas this reaction has been known for a long time [64,65], the structure of the MraY protein remained unclear.

The mechanism of the MraY-catalysed reaction was investigated in kinetic studies by Heydanek, Neuhaus et al. in the 1960s. They proposed a two-step mechanism for lipid I formation that was later revised (Figure 6A) [55,66-69]. Later mutational studies argued against this two-step mechanism, as a nucleophilic residue would be essential for the previously proposed mechanism [69]. These studies found D98 to be crucial for activity and proposed that its role is to deprotonate undecaprenyl phosphate. This was speculated to be followed by a one-step nucleophilic attack of the C55-alkyl phosphate on the UDP-MurNAc-pentapeptide (Figure 6B) [69].

In 2013, Lee et al. reported an X-ray crystal structure (3.3 Å resolution) of MraY from Aquifex aeolicus (MraY-AA) as the first structure of a member of the PNPT superfamily. MraY-AA crystallised as a dimer, and additional experiments showed that it also exists as a dimer in detergent micelles and membranes [71]. The previously proposed models are in agreement with the solved structure, showing ten transmembrane helices and five cytoplasmic loops. The authors identified a cleft at the cytoplasmic side of the membrane that showed the highest conservation in sequence mapping. Furthermore, it is also the region where most of the previously identified, functionally important residues [69] are located [71]. The location and binding mode of the Mg2+ ion in the crystal do not support the proposed model for a two-step mechanism [68]. In experiments with Mn2+ exchange, no interaction of the metal with D117 and D118 could be detected. Surface calculation of MraY-AA showed an inverted U-shaped groove that could harbour the undecaprenyl phosphate co-substrate. The locations of this groove, the Mg2+ ion and D265 do at least not contradict the proposed one-step mechanism. Nevertheless, there is still a need for further studies to fully understand the MraY-catalysed reaction at the molecular level [71].

In the context of a different MraY inhibitor, i.e., lysis protein E from bacteriophage ΦX174, Bugg et al. reported a different site of inhibition at a pronounced distance from the proposed active site. It had been demonstrated before that mutation of phenylalanine 288 (F288L) in helix 9 of MraY causes resistance against lysis protein E [72,73]. An interaction of F288 and glutamic acid 287 (E287) with the peptide motif arginine-tryptophan-x-x-tryptophan (RWxxW, where x represents an arbitrary amino acid) was found. The mutants F288L and E287A showed reduced or no detectable enzyme inhibition, thus indicating a secondary binding site for potential MraY inhibitors.
Nevertheless, it remains unclear how binding at helix 9 can inhibit MraY function, and further studies are probably inevitable [74].

In order to investigate the biological potencies of MraY inhibitors such as the muraymycins, in vitro assay systems are needed. A widely used and universal method to evaluate the in vitro activity of potential agents against certain bacteria is the determination of minimum inhibitory concentrations (MIC). The MIC is defined as the lowest concentration at which a potential antimicrobial agent inhibits the visible growth of a microorganism [75]. MIC values are easily determined and reflect several effects such as target interaction, cellular uptake and potential resistance mechanisms of the microorganism. They are therefore widely used, also in studies on muraymycin analogues (e.g., [22,76-78]), and have been the basis of many structure-activity relationship studies (see below). This bacterial growth assay, however, does not elucidate the inhibitory potency of a potential antimicrobial solely against the target protein MraY. Thus, another assay system is required that is not based on the interaction with whole cells but only with the target protein. For MraY, there are three different assays available that provide such inhibition data: i) a fluorescence-based and ii) a radioactivity-based assay, as well as iii) a relatively new Förster resonance energy transfer (FRET)-based method. The fluorescence-based assay was developed by Bugg et al. [79,80] and uses a fluorescently labelled (dansylated) analogue of the MraY substrate UDP-MurNAc-pentapeptide. The reaction of this substrate analogue with undecaprenyl phosphate leads to an increase in fluorescence intensity that can be used as a measure of enzymatic activity (e.g., [74,78]). The assay reported by Bouhss et al. [81] uses a radioactively labelled UDP-MurNAc-pentapeptide and thin layer chromatography (TLC) separation of undecaprenyl-linked MurNAc-pentapeptide from unreacted substrate (e.g., [77,82]).

Synthetic access

Scheme 1: First synthetic access towards simplified muraymycin analogues as reported by Yamashita et al. [76].

In the first reported synthetic access towards simplified muraymycin analogues by Yamashita et al. (Scheme 1), the preparation of the protected uridine-derived aldehyde 2 was followed by an aldol reaction of aldehyde 2 with N,N-dibenzylglycine tert-butyl ester (3) [89] and LDA as a key step of the synthesis. The resultant products were the two 5'-epimers 4 (5'R,6'S) in 31% yield and 5 (5'S,6'S) in 14% yield, which could be separated by column chromatography. After debenzylation, the resultant primary amines were connected with amido aldehydes 6 substituted with different moieties R and R' by reductive amination, with R being either a hydroxy group or a hydrogen and R' representing an alkyl, allyl, ester or a protected amino moiety. This led to many truncated muraymycin analogues based on the structures 7 and 8 [76]. Cbz deprotection and subsequent peptide coupling with the L-arginine-L-valine-derived urea dipeptide 9 gave various full-length muraymycin analogues 10 and 11 [76]. Some of the truncated and the full-length compounds were able to inhibit lipid II formation. These active compounds are discussed in the section on structure-activity relationship (SAR) studies. Starting from the uridine derivative 28 used in the synthesis of (+)-caprazol, Ichikawa and Matsuda built up muraymycin D2 and its epimer (Scheme 4). They used an Ugi four-component reaction with an isonitrile derivative 29 obtained from the uridine-derived core structure 28, aldehyde 30, amine 31 and the urea dipeptide building block 27.
A two-step global deprotection then gave the desired muraymycin D2 and its epimer, which could be separated by HPLC [96,97].

Scheme 3: Synthesis of the epicapreomycidine-containing urea dipeptide via C-H activation [96,97].

In 2012, Kurosu et al. also reported the synthesis of potential key intermediates for the total synthesis of muraymycins (Scheme 5) [98]. A fully protected ureidomuraymycidine tripeptide was prepared through lactone opening followed by urea formation and a final Mitsunobu ring closure as key steps. A Strecker reaction of the benzylimine 34 followed by several steps afforded the alcohol 35. A thermal lactonisation as a first key step of the synthesis led to a 1:1 mixture of the two epimers 36 and 37, and the undesired lactone 37 could be epimerised and converted into 36 by treatment with DBU [98]. Epimerisation and simultaneous lactone opening could be achieved in another key step using L-valine tert-butyl ester. Acetylation of the thus formed primary alcohol resulted in compound 38. This was followed by benzyl and Cbz deprotection and the subsequent urea formation with the imidazolium salt 39 to furnish the tripeptide 40. After Boc deprotection, the resultant amine was guanidinylated using the isothiourea 41. The thus obtained precursor 42 was treated with DIAD and PPh3 in a final step for an intramolecular Mitsunobu ring closure to finish the synthesis of the fully protected ureidomuraymycidine 43 (Scheme 5) [98].

In 2010, Ducho et al. reported an alternative synthesis of the naturally occurring uridine-derived muraymycin core structure (Scheme 6) [78,99]. The key step of their route was a sulfur-ylide reaction with high substrate-controlled diastereoselectivity [100-102]. This epoxide-forming sulfur-ylide reaction had been established before by Sarabia et al. [103,104]. After some initial confusion, the stereochemical configuration of the epoxide product could eventually be proven unambiguously. Ducho's synthesis of epicapreomycidine (Scheme 7) started from the (R)-configured Boc-protected Garner aldehyde 51 [106], which was transformed into the N-benzylimine 52. The latter was then diastereoselectively converted with a Grignard reagent.

This novel tripartite approach, i.e., the assembly of a nucleosyl amino acid, an aldehyde-derived linker unit and the urea dipeptide, was then used by Ducho et al. to synthesise the structurally simplified natural product analogue 5'-deoxy muraymycin C4 (65), which formally differs from the parent natural product only by the absence of one oxygen atom (Scheme 9) [78,109,110]. Starting from the protected uridine-5'-aldehyde 44, the first key step of the synthesis was a (Z)-selective Wittig-Horner reaction with the phosphonate 66 [111] in order to obtain the didehydro amino acid 67. The next important step of this route was an asymmetric catalytic hydrogenation [112,113] with the chiral Rh(I)-DuPHOS catalyst 68 to prepare the (6'S)-configured product 69 [109,110]. Subsequent hydrogenolytic cleavage of the Cbz group gave the nucleosyl amino acid 70. To complete the tripartite approach, reductive amination with the aldehyde 64 furnished 71, and Cbz deprotection and peptide coupling with the epicapreomycidine-containing urea dipeptide 58, followed by acidic global deprotection, gave the desired 5'-deoxy muraymycin C4 (65) (Scheme 9) [78]. In addition to the described synthetic routes, a range of other muraymycin analogues has been prepared. In the interest of conciseness, this synthetic work is not discussed here, but the biological properties of such analogues will be summarised in the following section on SAR studies.
Structure-activity relationship studies

With various structurally diverse compounds at hand, the stage has been set for SAR studies on muraymycins. The antimicrobial activities found by McDonald et al. introduced muraymycins as a promising subject of study [22]. The naturally occurring muraymycins isolated from Streptomyces guided first insights into the structural features essential for MraY inhibition. For the most active member of the family, i.e., muraymycin A1, antibiotic activity could be found against various bacteria, ranging from Staphylococci with MIC values of 2 to 16 μg/mL and Enterococci with 16 μg/mL and higher to some Gram-negative bacteria (8 μg/mL). Against an E. coli mutant with increased membrane permeability, an MIC value below 0.03 μg/mL was obtained, suggesting that antibacterial activity is largely a matter of cellular uptake of the compound. In vivo efficacy was demonstrated for muraymycin A1 with an ED50 of 1.1 mg/kg in Staphylococcus aureus-infected mice. Five of the 19 naturally occurring compounds (i.e., muraymycins A1, A5, B6, C2 and C3) were capable of inhibiting both MraY and peptidoglycan synthesis at the lowest concentration tested (IC50 = 0.027 μg/mL), which represents activities comparable to those of liposidomycin C (0.05 μg/mL) and mureidomycin A (0.03 μg/mL). As a general trend, higher antimicrobial activities were found for acylated compounds, in particular those with longer and functionalised fatty acid side chains. Among a series of hydantoin-substituted muraymycin analogues, the derivative with PhCH2 as residue R at the hydantoin moiety gave the best results, with inhibition of lipid II formation at 6.25 μg/mL, which is comparable to muraymycin C1. Good activity was also found for the hydantoin derivative 77 with the 4-FC6H4 substituent, showing inhibition of lipid II formation at 25 μg/mL. The only N-alkylated derivative inhibiting in the same order of magnitude was 83 with n-C11H23 substitution. However, the activities of the other compounds within this group also coincided with the previous observation that lipophilic compounds were more active. Overall, the tested monosubstituted hydantoin derivatives confirmed the assumed correlation between inhibitory activities and the lipophilicity of the substituent. Yamashita et al. studied truncated muraymycin analogues lacking the lipophilic side chain, as described in the section on synthetic access (compounds of type 7, 8 and 10) [76]. In contrast to the naturally occurring congeners of the A and B series, which showed good antibacterial activities (see above), muraymycin D2 (33) and its epimer lack the hydrophobic side chain at the leucine moiety [22]. It was postulated that this lipophilic side chain may not be necessary for target inhibition, but rather for cellular uptake through the lipid bilayer of the cytoplasmic membrane, as an increased lipophilicity is advantageous for this [77,114]. Consequently, several lipophilic derivatives 91a-d were prepared (Figure 9). Long-chain lipophilic amino acids were incorporated into the muraymycin core structure as a simplified replacement of the O-acylated hydroxyleucine moiety. Compound 91a (highlighted in orange), with the pentadecyl side chain, showed the best activity as an MraY inhibitor (IC50 = 0.33 μM with the L-leucine moiety, IC50 = 0.74 μM with the D-leucine moiety), but relative to muraymycin D2 and its epimer, this implied a 33-fold and 8-fold decrease of inhibitory activity, respectively. In bacterial growth assays, the analogue 91a exhibited the best MIC values, ranging between 0.25 μg/mL and 4 μg/mL (see Table 2). These values were comparable to those of the naturally occurring congeners of the A and B series [22].
Generally, derivatives with the naturally occurring L-configuration in the leucine moiety showed slightly better activities. These lipophilic analogues were also tested for cytotoxicity towards Hep G2 cells and showed no cytotoxicity (IC50 > 100 μg/mL) [114].

Figure 9: Muraymycin D2 and several non-natural lipidated analogues 91a-d [77,114].

In another series of analogues with different peptide units, the pentadecyl side chain of 91a was kept, and the L-epicapreomycidine (L-epi-Cpm) unit of 91a was replaced by L-capreomycidine and further amino acid units, giving the analogues of type 92 (Figure 10). These compounds were all active against MRSA and VRE with varying MIC values (Table 2). The most active analogues of this series were 92a and 92b (Figure 10, highlighted in orange), with MIC values between 1 μg/mL and 4 μg/mL. Derivatives with unnatural D-stereochemistry in the pentadecyl glycine motif possessed a similar antibacterial activity (potency within a factor of 2). Truncated analogues lacking the L-valine urea terminus (Cbz-protected 92d and N-terminally unprotected 92e) showed only a minor loss of activity (MIC = 4-8 μg/mL) (Table 2). These results indicated that the guanidine motif of the analogues 91a, 92a and 92b (MICs between 0.25 μg/mL and 4 μg/mL) is preferred, but that the amino analogues 92c and 92f still show good activity (MICs between 2 μg/mL and 8 μg/mL). The different stereochemistry at the central leucine unit and the terminal truncation had no crucial effects on the antibacterial activity (Table 2). Truncated derivatives 92f-h (Figure 10) without the L-valine urea terminus contained L-ornithine (L-Orn, 92f), L-arginine (L-Arg, 92g) and L-methionine (L-Met, 92h), respectively. They were also tested and showed reasonable activity against some bacterial strains (MIC = 4-8 μg/mL), which further indicated that significant variations in the peptide moiety are tolerated. The truncated analogue 93 (Figure 10) consisted only of the N-alkylated nucleoside core structure. Its inhibitory activity was 6- to 12-fold reduced (IC50 = 5 μM), and the antibacterial activity decreased, with MIC values between 32 μg/mL and 64 μg/mL. In summary, these systematic SAR studies demonstrated the importance of the lipophilic side chain for the antibacterial activity. The urea dipeptide motif is important for antibacterial activity as well, but it could be diversified with simpler amino acids or truncated and still provide bioactive analogues. A graphical summary of these results is provided in Figure 11. In 2014, Ichikawa, Matsuda et al. continued their SAR studies with respect to urgently needed anti-Pseudomonas agents [115]. These Gram-negative bacteria possess an outer membrane which acts as an additional permeability barrier, making them generally less sensitive to antibacterial agents. In this context, the aforementioned muraymycin analogues (91a, 92a-h) were tested again for MraY inhibitory activity, now with the MraY enzyme from S. aureus (Table 3). However, antibacterial activities against several Pseudomonas strains were moderate to low, with MICs between 8 μg/mL and >64 μg/mL. Analogue 92g was the most active congener in this series, with MIC values between 8 μg/mL and 32 μg/mL. Compounds 92e and 92f showed nearly no activity (MIC = 32 to >64 μg/mL). More lipophilic truncated analogues 94 without the urea dipeptide unit (Figure 12) were synthesised and tested, but they all showed nearly no activity. These results indicated the importance of the presence of a guanidine residue and a lipophilic side chain for potential antibacterial activity against Pseudomonas strains.
Hence, several derivatives were prepared in which the positions and numbers of the guanidine groups and the lipophilic side chains were varied in order to optimise their relative orientation for best biological activity. This strategy resulted in the bioactive analogues 95-98 (Figure 12). Analogue 95, with an interconversion of the lipid side chain and the guanidine group, had a slightly reduced activity compared to the lipidated analogue 92g. Analogue 96 showed an increased antibacterial activity towards some of the tested Pseudomonas strains. Analogue 97 is an interconverted version of 96 and displayed a comparatively poor activity. The most active analogue was compound 98, which is a hybrid type of the aforementioned analogues 95-97. The results indicate that a lipophilic side chain and guanidine groups are necessary for antibacterial potency. Compounds 95-98 showed antibacterial activity, with the branched-type compound 96 (MIC values between 8 μg/mL and 16 μg/mL) and the hybrid-type compound 98 (MIC between 4 μg/mL and 8 μg/mL) being the most active congeners. A limitation of both analogues 96 and 98 is their increased cytotoxicity against HepG2 cells, with IC50 values of 4.5 μg/mL and 34 μg/mL, respectively. Further, the metabolic stability of the analogues 95, 96 and 98 was studied in vitro using human or rat liver microsomes, and all of them proved to be reasonably stable [115].

Figure 12: Muraymycin analogues designed for potential anti-Pseudomonas activity (most active analogues are highlighted in orange) [115].

In 2014, Ducho et al. reported the synthesis of 5'-deoxy muraymycin C4 (65, see above) [78]. Biological assays revealed that 65 inhibited the MraY enzymes of E. coli and S. aureus with potencies in the range of the tunicamycins. The antibacterial activity of 65 was tested against some selected E. coli and S. aureus strains, although the lack of a lipophilic moiety indicated that the compound should not be a potent antibiotic. However, an unexpected moderate activity against E. coli DH5α was observed, whereas 65 was weakly active against the E. coli strain ΔtolC but not active against the S. aureus Newman strain. Further studies indicated excellent plasma and metabolic stability and no cytotoxicity. Overall, the structurally simplified 5'-deoxy muraymycin scaffold 65 may therefore be useful for further antibacterial development. It should also be noted that it has inspired the design of a novel oligonucleotide backbone modification [116,117].

Biosynthesis

A fragmented non-ribosomal peptide synthetase (NRPS) system appears to be responsible for the assembly of the urea tripeptide building block 105. However, the non-proteinogenic amino acids need to be formed first. It has been proposed that L-arginine (106) undergoes 3-hydroxylation (giving 3-hydroxy-L-arginine (107)) and subsequent ring closure to furnish L-epicapreomycidine ((2S,3S)-capreomycidine, 108), which is then activated as the thioester 109 (Scheme 10). This proposal is based on the elucidated formation of the epimeric amino acid L-capreomycidine ((2S,3R)-capreomycidine) as part of viomycin biosynthesis in Streptomyces vinaceus. In this producing organism, L-arginine is diastereoselectively hydroxylated to afford (3S)-3-hydroxy-L-arginine. The ring-closure reaction then occurs with formal inversion of the β-stereocenter (but quite likely through an aza-Michael addition to the α,β-unsaturated intermediate) [119-121].
The exact stereochemical course of epicapreomycidine formation in muraymycin biosynthesis is unclear though, as the stereochemical configuration at C-3 of the intermediate 3-hydroxy-L-arginine (107) has not been identified yet. It cannot be ruled out that an epimerisation reaction might be involved in the biosynthesis of 108, in particular with respect to other epimerisation steps in bacterial biosynthetic pathways [122]. Consequently, synthetic routes towards both 3-epimers of 3-hydroxy-L-arginine have been developed, which would also enable the preparation of isotopically labelled congeners for biosynthetic studies [123,124]. It should also be noted that a biomimetic domino guanidinylation-aza-Michael-addition reaction for the synthesis of the capreomycidine scaffold has been developed, which only furnished the target structures as stereoisomeric mixtures though [125].

Scheme 10: Proposed outline pathway for muraymycin biosynthesis based on the analysis of the biosynthetic gene cluster by Chen, Deng et al. [118]. MTA = 5'-deoxy-5'-(methylthio)adenosine.

The epicapreomycidine-derived thioester 109 is proposed to be converted into the urea dipeptide motif with the valine derivative 110 and possibly hydrogen carbonate as a C1-building block for urea formation, thus furnishing 111. The 3-hydroxy-L-leucine moiety might be obtained by stereoselective enzymatic β-hydroxylation of the thioester-activated L-leucine 112, which leads to the formation of 113. Finally, peptide formation by condensation of 111 with 113 affords the complete thioester-activated urea tripeptide unit 105 (Scheme 10). One interesting aspect of this biosynthetic proposal by Chen, Deng et al. is that they assume the putative dioxygenase Mur16 to catalyse β-hydroxylations of two structurally distinct amino acid substrates, i.e., L-arginine (106) and the thioester-activated L-leucine 112. As pointed out, there is a lack of experimental insights into muraymycin biosynthesis beyond the elucidation of its gene cluster. However, Van Lanen et al. have studied the early steps of the biosynthesis of A-90289 nucleoside antibiotics in detail (Scheme 11) [126].

Scheme 11: Biosynthesis of the nucleoside core structure of A-90289 antibiotics (which is identical to the muraymycin nucleoside core) according to the studies of Van Lanen et al. [126]. 2-OG = 2-oxoglutarate.

The A-90289 subclass is structurally closely related to caprazamycins and liposidomycins, and its aminoribosylated nucleoside core is identical to that of muraymycins (Figure 2). This supports the assumption that the early steps of the biosynthesis of all these subclasses are probably highly similar, if not identical. In the A-90289 pathway, uridine-5'-aldehyde (99) is presumably first formed by oxidation of a uridine-derived precursor, a step that should have its counterpart in the biosynthesis of muraymycins (1). Aldehyde 99 then undergoes an aldol-type transformation to the glycyluridine (GlyU) intermediate 101, catalysed by the enzyme LipK. However, aldehyde 99 also serves as a source of the aminoribosyl moiety. Thus, it is converted into 5'-amino-5'-deoxyuridine (115) in a transamination reaction mediated by LipO. This is followed by the LipP-catalysed displacement of the uracil with a phosphate moiety to afford 5-amino-5-deoxyribose-1-phosphate (116). Van Lanen et al. then studied the LipK-catalysed aldol-type formation of GlyU 101 in more detail [128]. Surprisingly, and in contrast to Chen's and Deng's proposal, L-threonine (119) turned out to be the source of the enol(ate) component instead of glycine (100).
Hence, LipK was revealed to be a transaldolase mediating a retro-aldol reaction of L-threonine (119) towards the enol(ate) and acetaldehyde (120), followed by a stereoselective aldol addition of the former to uridine-5'-aldehyde 99 (Scheme 12). Using synthetic reference compounds, it could be proven that (5'S,6'S)-GlyU 101 is the stereoisomer furnished in this reaction, so that no epimerisation at a later stage of the biosynthetic route is required for the formation of the A-90289 nucleoside antibiotics. Based on the elucidation of the LipK-mediated reaction, Van Lanen et al. then performed a PCR-based screening of a collection of ≈2500 actinomycete strains for similar transaldolase-encoding genes [129]. They could identify the gene sphJ from a Sphaerisporangium sp., which encoded the transaldolase SphJ having 51% amino acid sequence identity with LipK. Following detailed characterisation of this enzyme, the sphJ gene was employed as a probe to clone the entire genetic locus, consisting of 34 putative ORFs. The expression of three selected genes (including sphJ) was monitored under different growth conditions. Under the thereby identified optimal conditions, the actinomycete produced a set of four unprecedented MraY-inhibiting nucleoside antibiotics named sphaerimicins A to D [129]. Hence, detailed studies on LipK-like transaldolases led to the discovery of novel antimicrobially active secondary metabolites. It remains to be proven that the results obtained for the early steps of A-90289 and sphaerimicin biosynthesis are also valid for the biosynthetic formation of muraymycins. Bioinformatic analyses of the biosynthetic gene clusters of A-90289 antibiotics, caprazamycins and muraymycins revealed six shared ORFs overall [128]. A sequence comparison of a range of transaldolases gave 47% identity and 78% similarity of Mur17 with LipK [129]. Overall, these insights suggest that the formation of the GlyU intermediate 101, and very likely also of the whole aminoribosylated nucleoside core structure, occurs in a conserved manner. Further studies on muraymycin biosynthesis are still pending.

Conclusion

In summary, this review describes a promising class of antimicrobially active natural products, the uridine-derived muraymycins. Muraymycins are one subclass of nucleoside antibiotics inhibiting the membrane protein translocase I (MraY), a key enzyme in the intracellular part of peptidoglycan formation. Synthetic methodology for the preparation of muraymycins and their analogues has been established, and first SAR insights have revealed that the design of structurally simplified, biologically active muraymycin analogues is an auspicious approach. However, further SAR studies as well as investigations on the interplay of target inhibition and cellular uptake for the antibiotic activity are surely desirable. Studies on muraymycin biosynthesis may not only be of academic interest, but could also lead to semi- or mutasynthetic methodology for the preparation of novel muraymycin analogues. Several laboratories around the world currently perform research on muraymycins and other uridine-derived nucleoside antibiotics. Hopefully, this work will contribute to the development of urgently needed novel antimicrobial drugs.
HAGR-D: A Novel Approach for Gesture Recognition with Depth Maps

The hand is an important part of the body used to express information through gestures, and its movements can be used in dynamic gesture recognition systems based on computer vision, with practical applications in areas such as medicine, games and sign language. Although depth sensors have led to great progress in gesture recognition, hand gesture recognition is still an open problem because of its complexity, which is due to the large number of small articulations in a hand. This paper proposes a novel approach for hand gesture recognition with depth maps generated by the Microsoft Kinect Sensor (Microsoft, Redmond, WA, USA), using a variation of the CIPBR (convex invariant position based on RANSAC) algorithm and a hybrid classifier composed of dynamic time warping (DTW) and hidden Markov models (HMM), called the hybrid approach for gesture recognition with depth maps (HAGR-D). The experiments show that the proposed model outperforms other algorithms presented in the literature on hand gesture recognition tasks, achieving a classification rate of 97.49% in the MSRGesture3D dataset and 98.43% in the RPPDI dynamic gesture dataset.

Introduction

Gestures and hand postures have been used for a long time as a way to express feelings and to communicate information between people. A gesture can represent a simple action, such as allowing people to cross a street, or complex body expressions belonging to a specific population's language. Sign language uses both hand and body postures, instead of sound patterns, to establish communication. It is a very important type of language because nine million people have some kind of hearing or speech loss [1], while most people do not speak sign language and most of the hearing impaired are illiterate in their local language. Gesture recognition can address this problem by creating a bridge between the languages, recognizing a given gesture and translating it into words in real time [2]. There are three types of gesture recognition systems: based on devices attached to the body [3], based on gesture tracking [4] and based on computer vision techniques [5]. The first category uses sensors, such as wearable devices with accelerometers and markers, to capture a gesture and its corresponding movement. However, this invasive technology limits the normal execution of the gesture. Gesture recognition systems based on tracking aim to follow the gesture trail, drawing a path through its execution using a marker. The limitation comes from the fact that these systems use only the traveled path, and they do not perform well with complex and very detailed movements, such as the ones involving different hand postures. The last category, systems based on computer vision techniques, uses a camera device to capture the gesture and extract features, such as the speed, direction and intensity of a given gesture. Due to variations in gesture execution, in the people executing them and in the environment, the accuracy of such systems can be degraded in specific scenarios, like reduced illumination, very fast movement or interference from other people [6,7]. However, it is the least invasive category, allowing a more natural interaction between the user and the system without impairing the gesture execution. Usually, some steps between the image capture and the classification output are followed by these systems: image segmentation, feature extraction and pattern classification [8].
In the first step, the image background is removed, and only the body parts relevant to the gesture recognition are kept [9]. In this scenario, the Microsoft Kinect [10] appeared as an interesting solution for gesture recognition, presenting an important contribution to image segmentation for body detection. In the work of Tara et al. [11], the Microsoft Kinect is used to capture depth maps and to recognize static gestures. The depth maps are used to detect the hand through the definition of a distance threshold within which the hand is located. Lee et al. [12] proposed an approach with k-means and the convex hull to find the fingers and provide a more accurate analysis of the gesture. The main proposition of Palacios et al. [13] consists of a segmentation algorithm where the user does not need to execute the gesture in front of the body and near the depth sensor. In the second step of gesture recognition systems based on computer vision techniques, descriptors are extracted in order to computationally represent the gesture pattern [14]. Thus, the images are reduced to feature vectors by using mathematical models [15]. Oreifej and Liu [16] proposed a technique called the histogram of oriented 4D normals (HON4D) that uses a 4D histogram approach for feature extraction, while Yang [17] proposed an algorithm for 2D and 3D spaces that extracts several features from the executed gestures: the location of the left hand with respect to the signer's face in 3D space; the angle from the face to the left hand; the position of the left hand with respect to the shoulder center; and the occlusion of both hands. Doliotis et al. [18] proposed a feature extraction method using images generated by a Microsoft Kinect, retrieving a 3D pose orientation and full hand configuration parameters. It is also important to note that there is a common issue shared by many feature extraction methods in gesture recognition: the curse of dimensionality [19]. Some approaches have been proposed to solve this problem, like the reduction of the feature vectors [20,21] by selecting a smaller set of features that adequately keeps the original representation in order to distinguish the different gestures. Statistical models analyze the correlation between the features, allowing their selection, like principal component analysis (PCA) or independent component analysis (ICA) [22]. Moreover, optimization techniques can also be used to reduce the feature vectors, aiming to minimize the model error rate, such as swarm methods [23], which are designed to optimize high-dimensionality functions. In the last step of gesture recognition systems based on computer vision techniques, a classifier is trained using the extracted descriptors in order to recognize the gestures [24]. Barros et al. [5,25] presented a gesture recognition system that achieved higher classification rates in comparison to other methods, using dynamic time warping (DTW) [26] and hidden Markov model (HMM) [27] classifiers. Kim et al. [28] also used DTW as a classifier to recognize gestures captured by a depth sensor. Godoy et al. [29] proposed a gesture recognition method trained on few samples with HMM, achieving high classification rates. Neverova et al. [30] proposed a framework based on a multi-scale and multi-modal deep learning architecture, which is able to detect, locate and recognize a gesture.
To complete this task, they used information obtained from different data channels of a depth image, decomposing the gesture into multiple temporal and spatial scales. Wu et al. [31] proposed a multilayered gesture recognition system, dividing the recognition phase into three layers: the first layer quickly distinguishes types of gestures based on PCA; the second layer uses a particle-based descriptor to extract and identify dynamic information from gestures in each frame, using DTW with adaptive weights; and finally, the static hand shapes are recognized in the third layer. Their study achieved significant results on a large dataset with 50,000 gestures [32]. In this paper, we propose a novel approach for dynamic gesture recognition with depth maps, called the hybrid approach for gesture recognition with depth maps (HAGR-D). HAGR-D uses a version of the CIPBR (convex invariant position based on RANSAC) algorithm [33] for feature extraction, a combination of binary particle swarm optimization [34] and a selector algorithm to perform feature selection, and a hybridization between the DTW and HMM classifiers for recognition. DTW is used to find the most probable gestures, while HMM refines the DTW output. This paper is organized as follows. Section 2 describes the proposed model. In Section 3, experiments with gesture images captured by the Microsoft Kinect are shown. Finally, in Section 4, we present some concluding remarks.

Hybrid Approach for Gesture Recognition with a Depth Map

HAGR-D is an approach for gesture recognition that involves a method for feature extraction, a method for feature vector reduction and a hybrid classifier. Figure 1 presents the training architecture for the HAGR-D system, which starts with feature extraction using a variation of the CIPBR algorithm for depth maps. These vectors are used to train the DTW classifier. The feature selection method uses a combination of binary particle swarm optimization [34] and a selector algorithm [25] to reduce the feature vector that is used by the HMM to refine the DTW classification result. We present the depth CIPBR algorithm in Section 2.1, the feature selection method in Section 2.2 and a description of the DTW and HMM hybridization in Section 2.3. Table 1 presents the notations and definitions used to describe the HAGR-D.

Table 1. Notations and definitions used to describe the HAGR-D:
pBest: best position found by a particle;
gBest: best position found by the swarm;
i: size of the A pattern feature vector and row position in a cost matrix;
j: size of the B pattern feature vector and column position in a cost matrix.

Depth CIPBR

The depth CIPBR algorithm is an approach composed of a sequence of tasks to reduce a depth map of a hand posture into two signature sets, as proposed by Keogh et al. [35]. To complete these tasks, there are four modules connected in cascade, as presented in Figure 2. The first module, "radius calculation", uses a hand posture image that is segmented from the depth map generated by the Microsoft Kinect (Figure 3a). The hand posture contour is extracted from this image (Figure 3b), and the center of mass (C) of the hand posture is calculated from the image contour using central moments [36]. Then, the point with the lowest Y coordinate, P, is found (Figure 3c). Finally, this module calculates the distance between the center of mass and the point P. Figure 3d presents an output example of the "radius calculation" module: the dark gray point is the center of mass of the contour, given by C; the red point is the highest point of the contour, given by P; and the line connecting these points is given by PC.
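As a concrete illustration of the "radius calculation" module, the following minimal Python sketch computes C, P and the radius from an already-extracted contour. It is only a sketch under simplifying assumptions: the contour is given as an array of (x, y) points, the center of mass is approximated by the mean of the contour points rather than by central image moments, and the function name is illustrative rather than taken from the authors' implementation.

```python
import numpy as np

def radius_calculation(contour):
    """Sketch of the depth CIPBR 'radius calculation' module.

    contour: (N, 2) array of (x, y) points of the hand posture contour,
    e.g. extracted from the segmented depth map.
    Returns the center of mass C, the top contour point P and the
    length of the segment PC, used as the initial circle radius.
    """
    contour = np.asarray(contour, dtype=float)

    # Approximate the center of mass by the mean of the contour points
    # (the paper computes it from central image moments instead).
    C = contour.mean(axis=0)

    # Point with the lowest y coordinate, i.e. the highest point of the
    # contour in image coordinates.
    P = contour[np.argmin(contour[:, 1])]

    # Euclidean length of the segment PC.
    radius = float(np.linalg.norm(P - C))
    return C, P, radius
```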
The second module of Depth CIPBR, "draw maximum circumcircle", uses the line segment PC as the radius to draw a circle inside the hand contour. If this circle exceeds the hand contour boundary, a triangle is calculated using the three contour points most distant from the point C, two of them being on opposite sides of the contour. The biggest circle inside this triangle is the maximum circumcircle Θ of the contour, with its center at the point C. The third module of Depth CIPBR, "calculate signatures", receives the maximum circumcircle Θ and the points P and C as input. The hand contour points are substantially reduced using Andrew's monotone chain convex hull algorithm [37]. Andrew's algorithm outputs a set Ψ = {p1, p2, ..., pn} of convex hull points, which is used to generate two signature sets. The first signature set is composed of distances (D) calculated as follows: for each point ω ∈ Ψ, the length of the line segment ωC is calculated as the Euclidean distance from ω to the point C; then, the circumcircle radius is subtracted from this length in order to obtain the length of ωQ, where the point Q is the intersection between the segment ωC and Θ. Therefore, the first signature set is composed of each distance D_ωQ, ∀ω ∈ Ψ, calculated using Equation (1):

D_ωQ = √((ω_x − C_x)² + (ω_y − C_y)²) − radius   (1)

where: C is the center of mass of the hand posture contour; ω_x and C_x are the x coordinates of the points ω and C, respectively; ω_y and C_y are the y coordinates of the points ω and C, respectively; and radius is the radius of Θ calculated by the "draw maximum circumcircle" module. The second signature set consists of a vector of angles obtained by calculating, for each point ω ∈ Ψ of the convex hull of the hand shape, the angle between the line segment ωC and the line segment PC. Both signature sets are obtained in a clockwise direction, always starting at the point P. Finally, in the last module, "feature vector normalization", the signature sets are normalized. The first signature set is normalized by dividing each distance by the radius calculated in the "draw maximum circumcircle" module, which gives the normalized distance vector D = {d1, d2, ..., dn}. The set of angles is normalized by dividing each angle by 360°. The angle and distance sets are concatenated in the following order: angles first and distances at the end of the signature vector. Therefore, the final feature vector is F = {a1, a2, ..., an, d1, d2, ..., dn}.
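To make the final depth CIPBR feature vector concrete before turning to the feature selection method, the following sketch computes the two normalized signature sets from the outputs of the previous module. It is a simplified reading: it omits the triangle-based correction of the maximum circumcircle, does not enforce the clockwise ordering starting at P, and uses SciPy's convex hull in place of Andrew's monotone chain algorithm; all names are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

def cipbr_signatures(contour, C, P, radius):
    """Compute the normalized angle and distance signatures (sketch)."""
    contour = np.asarray(contour, dtype=float)

    # Reduce the contour to its convex hull points (the set Psi); the
    # paper uses Andrew's monotone chain algorithm for this step.
    hull = contour[ConvexHull(contour).vertices]

    # First signature set (Equation (1) plus normalization): for each
    # hull point omega, |omega - C| minus the circumcircle radius,
    # divided by the radius.
    d = (np.linalg.norm(hull - C, axis=1) - radius) / radius

    # Second signature set: angle between each segment omega-C and the
    # reference segment PC, normalized by 360 degrees.
    ref = P - C
    vecs = hull - C
    ang = np.degrees(np.arctan2(vecs[:, 1], vecs[:, 0])
                     - np.arctan2(ref[1], ref[0])) % 360.0
    a = ang / 360.0

    # Angles first, then distances, as in the final feature vector F.
    return np.concatenate([a, d])
```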
Feature Selection Method

Some classifiers used for gesture recognition are more sensitive to the curse of dimensionality [19], such as the HMM [38,39]. In order to overcome this obstacle, the feature selection method finds the smallest possible size for the feature vector and assigns the same size to the feature vectors of all gestures. This is also an important task, since many classifiers require inputs of the same predefined size. In this study, binary particle swarm optimization (BPSO) [34] finds the target size of the reduced feature vector, while the selector algorithm is used to resize the feature vectors. The objective that BPSO seeks to optimize is the minimum distance between the particle, composed of zeros and ones, and the gesture sequences; the number of ones in the particle denotes the size of the new feature vector. The next subsections explain in detail how these algorithms work.

Particle Swarm Optimization

Particle swarm optimization [40] solves an optimization problem with a swarm of simple computational elements, called particles, that explore a solution space to find an optimal solution. The position of each particle represents a candidate solution in an n-dimensional search space, defined as X = {x1, x2, x3, ..., xn}, where each xn is a position in the n-th dimension; the particle velocity is represented analogously by V = {v1, v2, v3, ..., vn}. A fitness function evaluates how well each particle performs in each iteration. When a particle moves and its new position has a better fitness value than the previous one, this value is saved in a variable called pBest. To guide the swarm to the best solution, the position where a single particle found the best solution so far is stored in a variable called gBest. The particle velocity and position are updated using the following equations:

v_i(t+1) = κ v_i(t) + c1 r1 (pBest_i − x_i(t)) + c2 r2 (gBest − x_i(t))   (2)

x_i(t+1) = x_i(t) + v_i(t+1)   (3)

where i = 1, 2, 3, ..., N, and N is the size of the swarm; c1 represents the private or "cognitive" experience and c2 represents the "social" experience interaction, both usually set to a value of 2.05 [40]. The variables r1 and r2 are random numbers between zero and one and represent how much pBest and gBest influence the particle movement. The inertia factor κ is used to control the balance of the search algorithm between exploration and exploitation, and x_i represents the particle position in the i-th dimension. The recursive algorithm runs until the maximum number of iterations is reached.

Binary PSO

The binary PSO is a variation of the traditional PSO for discrete spaces. The major difference between this algorithm and its canonical version is the interpretation of velocity and position: in the binary version, the particle's position is represented by zeros and ones only. This change requires a reformulation of how the velocity is turned into a new position, which is done by mapping the velocity through a sigmoid function:

x_ij = 1 if rand < 1 / (1 + e^(−v_ij)), and x_ij = 0 otherwise   (4)

where rand is a random number between zero and one. Finally, to binarize all of the feature vectors, a threshold calculated as the mean of all of the feature vectors is used. BPSO calculates a distance from each position x_ij of a binary particle to the same position j in all binary vectors of the same gesture. After each iteration, all distances are added up to generate the fitness function output; particles improve as their fitness values become smaller in comparison with the fitness obtained in the previous iteration. The particle fitness function is

fitness(X_i) = Σ_j d((x_i1, x_i2, ..., x_in), (F_j1, F_j2, ..., F_jn))   (5)

where (x_i1, x_i2, ..., x_in) is the position of the i-th particle, (F_j1, F_j2, ..., F_jn) is the j-th binarized feature vector, and d denotes the distance between two binary vectors.
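A minimal sketch of how BPSO can choose the target size is given below, assuming the feature vectors have already been binarized into one row per vector. The Hamming distance used in the fitness function is one plausible reading of Equation (5), and the hyperparameter values and all names are illustrative rather than taken from the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(particle, binary_vectors):
    # Sum of Hamming distances between the binary particle and every
    # binarized feature vector of the gesture (one reading of Eq. (5)).
    return sum(np.sum(particle != v) for v in binary_vectors)

def bpso_target_size(binary_vectors, n_dims, n_particles=20, iters=50,
                     kappa=0.7, c1=2.05, c2=2.05):
    X = rng.integers(0, 2, size=(n_particles, n_dims))   # binary positions
    V = np.zeros((n_particles, n_dims))                  # velocities
    pbest = X.copy()
    pbest_fit = np.array([fitness(x, binary_vectors) for x in X])
    gbest = pbest[pbest_fit.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(V.shape), rng.random(V.shape)
        # Velocity update (Equations (2)); position update through the
        # sigmoid mapping (Equation (4)).
        V = kappa * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = (rng.random(V.shape) < 1.0 / (1.0 + np.exp(-V))).astype(int)
        fit = np.array([fitness(x, binary_vectors) for x in X])
        better = fit < pbest_fit
        pbest[better], pbest_fit[better] = X[better], fit[better]
        gbest = pbest[pbest_fit.argmin()].copy()

    # The number of ones in the best particle denotes the target size
    # of the reduced feature vector.
    return int(gbest.sum())
```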
Selector Algorithm

BPSO chooses the target size of the reduced feature vector S'. Then, the selector algorithm [25] reduces the CIPBR feature vector S to S', producing the final vectors of the proposed approach. In this process, some rules must be respected. First, if any vector has fewer points than the target size of S', zeros are appended to the feature vector until it matches the desired length. Second, feature vectors larger than the target size of S' are redefined using a selection algorithm: a window W is calculated by dividing the current vector length by the target size of S'; the current vector S is then parsed, and each value at a W-th position is included in the new feature vector. If the new output vector S' is still smaller than the desired length, the remaining positions of S are randomly visited and used to compose the new output vector S' until the desired length is reached.

DTW and HMM Hybridization

In order to classify the depth CIPBR feature vectors, a hybridization between two classifiers that have produced good results in the dynamic gesture classification literature is proposed: DTW and HMM [5,28,29]. DTW gives the distance between two patterns, which represents the degree of similarity between them, using a cost matrix (CM). Given two patterns A = {a1, a2, ..., ai} and B = {b1, b2, ..., bj}, each cost matrix cell is the distance calculated between an element of A and an element of B. The similarity degree is the sum of the lowest-cost path in the matrix, which starts at CM(1,1) and finishes at CM(i,j). DTW works well in classifying grouped patterns, but it is not very sensitive to very close patterns and might make some mistakes. We observed that in most DTW misclassifications, the right output was near the compared gesture in the training dataset. Thus, we propose to refine the DTW output with HMM to reduce the number of mistakes. A hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. A simple way of viewing an HMM is to imagine it as a deterministic finite automaton with two alphabets: in every state, it gives the likelihood of one hand posture changing to another, depending on the executed gesture. The HMM has the quality of being fast to train and run, but it can be fragile in assertiveness, because training depends on how the probability matrices are initialized. Thus, with a poorly initialized matrix, or a large number of classes involved, the HMM loses some of its power. DTW has no training phase but retains a certain number of examples, so that it can compare the kept examples with an input pattern, returning the class of the example closest to the input pattern. Because of this proximity, DTW often confuses the class of a gesture with its closest neighboring class. The canonical DTW [26], used as the only classifier, faces the problem of proximity between the classes of feature vectors generated by the CIPBR algorithm. Because of this proximity between classes, DTW has a greater tendency to return a foreign class present in the training set among the supposedly correct class examples. To work around this problem, DTW is used in the hybrid classifier so as to return not a class, but the sequences of the training set closest to the input pattern. Thus, the correct class is more likely to be among the returned sequences. At this stage, the trained HMM has a higher probability of returning the correct class, avoiding transition matrix outliers, since the HMM only needs to decide between the sequences returned by DTW. Figure 4 presents the sequence performed by the HAGR-D for gesture recognition. First, DTW classifies the gesture, using the CIPBR algorithm for feature extraction, and returns the k nearest gestures to the input sequence as candidates. The input sequence is then resized by the selector algorithm, using the best size found by the BPSO at training time, and the HMM decides the classification of the input sequence among the k candidate gestures returned by DTW. Algorithm 1 presents a pseudo-code for the DTW and HMM hybridization.
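The following Python sketch mirrors the description of Algorithm 1 given in the next paragraph: DTW proposes the k nearest training sequences, the selector algorithm resizes the frame-wise feature vectors, and per-class HMMs vote on the final label. It is a simplified reading rather than the authors' implementation; in particular, hmm_models is assumed to map each class label to a trained model exposing a log-likelihood score method (e.g., from the hmmlearn library), and the random-fill step of the selector is replaced here by simple truncation.

```python
import numpy as np
from collections import Counter

def dtw_distance(A, B):
    """Classic DTW over two sequences of per-frame feature vectors."""
    n, m = len(A), len(B)
    CM = np.full((n + 1, m + 1), np.inf)
    CM[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.asarray(A[i - 1]) - np.asarray(B[j - 1]))
            CM[i, j] = cost + min(CM[i - 1, j], CM[i, j - 1], CM[i - 1, j - 1])
    return CM[n, m]

def selector(vec, target):
    """Window-based resizing of a single feature vector (sketch)."""
    vec = list(vec)
    if len(vec) <= target:
        return vec + [0.0] * (target - len(vec))   # pad short vectors
    w = len(vec) // target                         # window size W
    return vec[::w][:target]                       # keep every W-th value

def classify(sequence, train_set, hmm_models, k, target):
    """train_set: list of (feature_sequence, label) pairs."""
    # 1) DTW proposes the k nearest training sequences as candidates.
    ranked = sorted(train_set, key=lambda s: dtw_distance(sequence, s[0]))
    candidates = ranked[:k]
    # 2) The HMMs classify each resized candidate; the most frequent
    #    class wins (one reading of the 'most incident class').
    votes = []
    for seq, _ in candidates:
        resized = np.array([selector(f, target) for f in seq])
        best = max(hmm_models, key=lambda c: hmm_models[c].score(resized))
        votes.append(best)
    return Counter(votes).most_common(1)[0][0]
```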
Algorithm 1 starts by receiving a set of images from a gesture (Line 1). The depth CIPBR algorithm extracts the features of the hand postures (Line 2), and DTW classifies this feature vector and outputs another vector with the k nearest gestures from the training dataset (Line 3). Then, the input gesture and the outputs of DTW are resized by the selector algorithm, using the size decided by the BPSO at training time (Line 6). Finally, the HMM classifies each resized vector, and the hybrid classifier outputs the most incident class returned by the HMM (Lines 7-9).

Experimental Results

In order to evaluate the HAGR-D, two experiments were performed with public benchmarks: the MSRGesture3D dataset [41] and the RPPDI dynamic gesture dataset [5]. The next subsections explain these experiments in detail.

MSRGesture3D

The MSRGesture3D is a dynamic hand gesture dataset captured by the Kinect RGB-D camera. There are 12 dynamic hand gestures defined by American Sign Language (ASL) in MSRGesture3D, and each dynamic gesture was performed two or three times by each of 10 subjects. The gestures presented in the dataset are: "bathroom", "blue", "finish", "green", "hungry", "milk", "past", "pig", "store", "where", "J" and "Z". The dataset contains only depth data images and is considered challenging, mainly because of self-occlusion issues. We used leave-one-subject-out cross-validation to evaluate the dataset, as proposed in [41]. To find the initial states of the HMM, we use a k-means clustering [42] technique, avoiding random initial matrices, while the BPSO uses few dimensions in its particles to guarantee small final vectors. The works in [43,44] use similar approaches to determine the final size of the vectors in their studies. We compared the HAGR-D with models using the same feature extraction approach but with DTW or HMM alone as classifiers, called "depth CIPBR + DTW" and "depth CIPBR + HMM". Therefore, we are able to evaluate the performance of the proposed model against DTW alone and HMM alone. The inputs for DTW are the raw sequences generated by depth CIPBR, and the inputs for the HMM are the output sequences of the feature selection method. Figure 5 presents the boxplot for each method. It is easy to see that the hybridization between DTW and HMM significantly improved the proposed model. Furthermore, in several iterations, the HAGR-D classification rate was very close to 100%, and only two sequences were misclassified, corresponding to the gesture "green" being classified as "store" and the gesture "blue" being classified as "where". Another point to be made is that the outliers present when using solely HMM as a classifier no longer exist with HAGR-D. The size of the boxplots generated by the results of each classifier also shows a low variation between the results of each classifier, providing confidence in the consistency of the results. Tables 2 and 3 present the confusion matrices of HAGR-D and "depth CIPBR + DTW", respectively, applied to the MSRGesture3D dataset. As can be seen, the hybridization of "depth CIPBR + DTW" with HMM, generating HAGR-D, improved the classification rates, and most of the HAGR-D mistakes happened between the gestures "green" and "store", with 7% of the "green" gestures being classified as "store". Figure 6 shows an example of each of these gesture sequences and how close their hand postures are. Figure 7 shows a representation of the two gesture vectors most confused by HAGR-D and how close they are.
To represent these gestures in a 2D space, they are normalized in size using the selector algorithm with only two features as the final length. It is easy to see that, sometimes, a few examples cross the division between classes, making classification more difficult. Finally, Table 4 presents the results obtained for the MSRGesture3D dataset using leave-one-subject-out cross-validation as the testing procedure, in comparison with other methods in the literature. As presented, HAGR-D achieved the best classification result of 97.49%.

Table 4. Comparison with other methods on the MSRGesture3D dataset (classification rate, %):
HAGR-D (proposed): 97.49;
[45]: 95.29;
HON4D + D_disc [16]: 92.45;
HON4D [16]: 87.29;
ROP, Wang et al. [41]: 88.50;
Depth motion maps, Yang et al. [46]: 89.20;
Kurakin et al. [47]: 87.70;
Klaser et al. [48]: 85.23.

Venkateswara et al. [49] used the same dataset in their study but modified the experiment, using five subjects for training and five for testing their methods, achieving 94.6% as their best result, which is still below ours.

RPPDI Dynamic Gesture Dataset

The RPPDI dynamic gesture dataset is a set of images of seven dynamic hand gestures performed in front of a smartphone camera. Figure 8 illustrates one sequence example for each gesture in the dataset. Each gesture is performed several times, and Table 5 presents the number of sequences for each gesture. We used the same test configuration as proposed by Barros et al. [5,25], with 2/3 of the dataset used for training the model and 1/3 for testing. The experiments on the RPPDI dataset used the same configuration for BPSO and HMM as presented in Section 3.1, and the results are compared to Barros et al.'s previous works. The CIPBR algorithm uses the Otsu threshold [50] as a binarization method in its first module to segment the hand posture. Table 6 presents the results obtained, and Table 7 presents the confusion matrix of the HAGR-D system. The proposed method committed few mistakes, misclassifying only one gesture, while achieving 100% accuracy in some iterations.

Table 6. Comparison between the results on the RPPDI dynamic gesture dataset.

Remarks

The proposed approach, HAGR-D, achieved the best results on two different datasets due to the combination of depth CIPBR for feature extraction and the hybrid classifier with DTW and HMM. The classifiers compensated for each other's failures by reducing misclassifications between different gestures with DTW and refining the classification output through validation of the most similar sequences using the HMM. The hybrid classifier improved the results in comparison with DTW and HMM applied individually. One of the limitations of HAGR-D is the definition of the number of sequences returned by DTW, k; in this study, this parameter was defined empirically. Another limitation is the computational cost of DTW, which might impair real-time application of the proposed model. Another point to be made is that the few mistakes committed by HAGR-D were due to very similar sequences. Nevertheless, many conditions must be fulfilled for HAGR-D to misclassify a given gesture: the number of similar postures, the distance from the hand to the sensor, the speed of gesture execution and occlusion.

Conclusions and Future Works

Hand gesture recognition for real-life applications is very challenging because of its requirements of robustness, accuracy and efficiency. In this paper, we proposed both a variation of the CIPBR algorithm for depth maps and a hybrid classifier for gesture recognition using DTW and HMM.
The proposed approach, HAGR-D, presents better results than the ones in the literature, achieving a classification rate of 97.49% on the MSRGesture3D dataset and 98.43% on the RPPDI dynamic gesture dataset. The application of depth CIPBR for feature extraction showed good results, while the hybridization between the DTW and HMM classifiers significantly improved classification accuracy. Although the focus of classification in this paper relies on the task of hand gesture recognition, in future research we intend to extend the application of the HAGR-D to other types of gestures, such as human body movements. Furthermore, DTW has a high computational cost, which makes the HAGR-D execution slow; however, FastDTW [51] is a variation of the traditional DTW that promises to drastically reduce the computational cost, and it will be addressed in our next experiments.
The Histone Deacetylase Inhibitor JAHA Down-Regulates pERK and Global DNA Methylation in MDA-MB231 Breast Cancer Cells

The histone deacetylase inhibitor N1-(ferrocenyl)-N8-hydroxyoctanediamide (JAHA) down-regulates extracellular-signal-regulated kinase (ERK) and its activated form in triple-negative MDA-MB231 breast cancer cells after 18 h and up to 30 h of treatment, and, to a lesser extent, AKT and phospho-AKT after 30 h and up to 48 h of treatment. Also, DNA methyltransferase 1 (DNMT1), 3b and, to a lesser extent, 3a, downstream ERK targets, were already down-regulated at 18 h, with the effect increasing up to 48 h of exposure. Methylation-sensitive restriction arbitrarily-primed (MeSAP) polymerase chain reaction (PCR) analysis confirmed the ability of JAHA to induce genome-wide DNA hypomethylation at 48 h of exposure. Collectively, the data suggest that JAHA, by down-regulating phospho-ERK, impairs DNMT1 and 3b expression and ultimately the extent of DNA methylation, which may be related to its cytotoxic effect on this cancer cytotype.

Introduction

N1-(ferrocenyl)-N8-hydroxyoctanediamide (JAHA) is an organometallic histone deacetylase inhibitor (HDACi) analogue of suberoylanilide hydroxamic acid (SAHA), a US Food and Drug Administration-approved anticancer drug [1]. It was designed such that the three-dimensional spanning ferrocenyl group could replace the planar aryl "cap" group and act as a suitable bioisostere (Figure 1). Since JAHA's inception, a number of metal-based analogues, including rhenium and ruthenium complexes, have appeared [2-7]. The ability of this compound to impair the growth of triple-negative, highly malignant MDA-MB231 breast tumor cells (IC50 at 72 h = 8.45 µM) has already been reported [8]. In particular, cell cycle perturbation and early-stage reactive oxygen species production, followed by mitochondrial dysfunction and autophagy inhibition, accounted for the cytotoxic effect of exposure to JAHA at its 72 h IC50 concentration [8], with a noticeable absence of the apoptotic promotion characteristic of SAHA treatment on the same cells [9]. Here, we extended the investigation to the expression of AKT and extracellular-signal-regulated kinase (ERK) signaling, which is known to play a crucial role in tumor cell death/survival decisions [10], in light of the documented ability of SAHA to deactivate both factors in different cancer cell systems [11-13].

Results and Discussion

In a first set of assays, MDA-MB231 cells were exposed to an 8.45 µM concentration of JAHA for 18, 30 and 48 h, and Western blot analyses were performed to evaluate the accumulation of total and activated (phosphorylated) AKT and ERK1 and ERK2 isoform proteins in control and JAHA-exposed cells. As shown in Figure 2A, a decrease of total AKT down to 67.5% ± 4.8% and 54.7% ± 12% vs. controls was observed at 30 and 48 h of treatment with JAHA, respectively. The pAKT/total AKT ratio did not change between treated and control samples in the time lapse of the experiment. On the other hand, although exposure to 8.45 µM JAHA caused a significant decrease of the accumulation of total ERK1/2 within 30 h of culture, followed by a prominent up-regulation, a drastic reduction also in the amount of its activated forms (pERK) was observed at earlier times of treatment (18 h = 38% ± 1.4%; 30 h = 29.1% ± 1.1% vs. controls), as shown in Figure 2B.
Also, in this case, the pERK/total ERK ratio did not change between treated and control samples in the time lapse of the experiment, suggesting that JAHA treatment impaired gene expression and not the extent of protein activation. It is widely acknowledged that DNA methyltransferase 1 (DNMT1), DNMT3a and DNMT3b are targets for signaling through the ERK pathway and that they are mainly involved in variants of the enzymatic activity, i.e., maintenance (DNMT1) and de novo methylation (DNMT3a and 3b) [14-16]. We therefore examined whether JAHA-triggered ERK1/2 deactivation could result in decreased methyltransferase expression in MDA-MB231 cells. As shown in Figure 2C, the results obtained by Western blot confirmed, at least in part, this hypothesis, since JAHA treatment down-regulated DNMT1 and 3b vs. controls at every time point examined. In particular, the decrease of the DNMT1 expression level was more prominent and steady (18 h = 24.4% ± 1.6%; 30 h = 29.1% ± 4.8%; 48 h = 29.2% ± 7.6%), whereas that of DNMT3b peaked at 30 h from exposure (18 h = 40% ± 3%; 30 h = 28.3% ± 3.8%; 48 h = 57.8% ± 1.3%). On the other hand, JAHA was not effective in modifying the expression level of DNMT3a at 18 and 30 h from exposure, whereas a late and less pronounced decrease (80% ± 3.9%) could be observed after 48 h of treatment.
To confirm the observed down-regulation of DNMTs, the DNA isolated from cells grown for 18, 30 and 48 h either in control conditions or in the presence of 8.45 µM JAHA was analyzed by methylation-sensitive arbitrarily-primed polymerase chain reaction (MeSAP-PCR) [17,18] to unveil changes induced by the drug in the global methylation status of the genomic DNA. The obtained data show that 48 h of treatment with JAHA was effective in modifying the global methylation pattern of tumor cell DNA, as shown by the different number, intensity and size of the bands in the matched control and exposed samples. In particular, as shown in Figure 3, the difference in the electrophoretic patterns of single- and double-digested DNA reveals an increase in unmethylated CpG-containing sites, consistent with a hypomethylated state of the genomic DNA after exposure to JAHA. No statistically significant difference was found at earlier times (not shown). Literature data report that HDAC1 is able to bind DNMT1 in vivo, thereby forming a complex active in chromatin remodeling [19]. In order to ascertain whether JAHA could down-regulate global DNA methylation also by binding to this complex and interfering with DNMT action, as suggested for the HDACi trichostatin A [20], we performed an enzyme-linked immunosorbent assay (ELISA)-like DNMT inhibition test with DNMT1-containing native nuclear extract from MDA-MB231 cells in the presence or absence of JAHA.
The obtained results indicated that the enzymatic activity was comparable for both control and JAHA-containing samples (Figure 4), thereby excluding a direct interaction of the drug. Cell Culture and JAHA Treatment MDA-MB231 breast tumor cells were maintained in RPMI 1640 medium plus 10% foetal calf serum, 100 U/mL penicillin, 100 µg/mL streptomycin, and 2.5 mg/L amphotericin B (Life Technologies, Carlsbad, CA, USA), at 37 °C in a 5% CO2 atmosphere. The cells were detached from flasks with 0.05% trypsin-EDTA (ethylenediaminetetraacetic acid), counted, and plated at the necessary density for treatment after achieving 60%-80% confluency. JAHA was synthesized as reported by Spencer et al. [1] and dissolved at 6.5 mM concentration in dimethyl sulfoxide (DMSO) as a stock solution. Methylation-Sensitive Arbitrarily-Primed (MeSAP)-PCR The genomic DNA was purified from control and treated cells with the PureLink™ Genomic DNA Kit (Life Technologies) according to the manufacturer's instructions. Two micrograms of the DNA samples were digested with 10 U of AfaI restriction endonuclease (Life Technologies) to generate single-digested DNA (SDD) samples. The cleavage site of the enzyme is GT*AC. Half of the SDD was further treated with 5 U of HpaII (Life Technologies), a methylation-sensitive restriction endonuclease unable to cut DNA if methylated cytosine is present in its recognition site (C*CGG), to generate double-digested DNA (DDD) samples. SDD and DDD samples were separately amplified by arbitrarily-primed PCR using two subsequent amplification cycles. In the first, low-stringency, cycle, a permissive annealing temperature and a high salt and primer concentration were set to allow annealing of the arbitrary primer to the best matches in the template, with the highest preference for all the genomic CpG sites, since the primer is provided with a 3′ tail complementary to these sites. The first PCR cycle was performed in the presence of 500 ng of DNA, a 21-mer arbitrary primer (5′-AACTGAAGCAGTGGCCTCGCG-3′) and recombinant Taq DNA polymerase (Life Technologies), and the cycle profile was 94 °C for 5 min followed by four cycles at 94 °C for 30 s, 40 °C for 60 s and 72 °C for 90 s. The profile of the second, high-stringency, cycle, performed just after the first one, was 94 °C for 1 min, followed by four cycles at 60 °C for 1 min and 72 °C for 2 min.
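For quick reference, the two-stage cycling profile just described can be summarized as a small configuration structure; this is only a restatement of the parameters quoted above (temperatures, durations and cycle numbers are taken verbatim from the text, nothing is added):

```python
# MeSAP-PCR thermocycling profile, restated from the text (not new data).
MESAP_PCR_PROFILE = {
    "low_stringency": {                       # permissive annealing, high salt/primer concentration
        "initial_denaturation": ("94 C", "5 min"),
        "cycles": 4,
        "steps": [("94 C", "30 s"), ("40 C", "60 s"), ("72 C", "90 s")],
    },
    "high_stringency": {                      # performed immediately after the first stage
        "initial_denaturation": ("94 C", "1 min"),
        "cycles": 4,
        "steps": [("60 C", "1 min"), ("72 C", "2 min")],
    },
}
```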
The amplified DNA was resolved by non-denaturing 6% acrylamide-bisacrylamide (29:1 ratio) gel electrophoresis, stained with Gel Red nucleic acid gel stain (Biotium, Hayward, CA, USA), and analysed with SigmaGel v.1.0 image analysis software (SPSS, Chicago, IL, USA). ELISA Assay MDA-MB231 cells were grown in control conditions as already reported, and collected by scraping and centrifugation of the suspension. The cell pellet was re-suspended in a hypotonic buffer (20 mM Tris-HCl, pH 7.4, 10 mM NaCl, 3 mM MgCl2) containing 0.5% NP-40 detergent and protease inhibitor cocktail, and the nuclei were obtained after centrifugation of the homogenate for 10 min at 3000 rpm in the cold. The nuclear pellet was then dissolved in an extraction buffer (100 mM Tris, pH 7.4, 2 mM Na3VO4, 100 mM NaCl, 1% Triton X-100, 1 mM EDTA, 10% glycerol, 1 mM EGTA, 1 mM NaF, 0.5% deoxycholate, 20 mM Na4P2O7), and the nuclear proteins were obtained as a supernatant after centrifugation for 30 min at 14,000 × g in the cold and stored in aliquots at −80 °C after quantitation via Bradford assay. The DNMT-containing native nuclear extract was submitted to an ELISA-like DNMT inhibition test (EpiQuik™ DNA Methyltransferase Activity/Inhibition Assay Kit, Epigentek, Farmingdale, NY, USA) according to the manufacturer's instructions, in the presence or absence of JAHA, to check whether the drug could directly bind the enzymes and interfere with their activity. Essentially, in this kit a unique cytosine-rich DNA substrate coated on the strip wells can be methylated by DNMT enzymes transferring a methyl group to cytosine from AdoMet. The methylated DNA can be recognized with an anti-5-methylcytosine antibody and its amount quantified colorimetrically through an ELISA reaction. DNMT activity (optical density (OD)/mg/h) was calculated according to the following formula: ((sample OD − blank OD)/(sample protein × incubation time)) × 1000. Conclusions In conclusion, our data demonstrate that, as opposed to SAHA, JAHA action on MDA-MB231 cells is directed solely towards the deactivation of pERK1/2, leaving pAKT levels unaltered. On the other hand, the time-dependent variations in the ERK1/2 level of phosphorylation are in line with those reported for SAHA-treated MDA-MB231 cells and attributable to the depletion of upstream molecules before 48 h of exposure [22]. Accumulation of total AKT appears to decrease with time, whereas that of total ERK1/2 shows a U-shaped pattern with up-regulation at 48 h of exposure. This represents an additional aspect of diversity between JAHA and SAHA, whose lack of effect especially on total ERK content has been described in different cell model systems [11,12]; its biological implication remains to be determined. Our results show that JAHA inhibits mainly the accumulation of DNMT1 and DNMT3b and methyltransferase activity, thereby influencing gene expression also in a manner alternative to that of histone acetylation. It is known that both DNMTs are responsible for the hypermethylation/silencing of tumor suppressor genes [23,24], and therefore it is conceivable that the DNA demethylation following JAHA treatment results in the rearrangement of the molecular landscape of transcriptional regulation and restoration of gene expression. This can ultimately account, at least in part, for the cytotoxic effect of the drug on this cancer cytotype.
Characterization of the molecular basis of the different effect exerted by JAHA on DNMT3a with respect to DNMT1 and DNMT3b, and identification of the specific gene promoters targeted by the JAHA-triggered demethylation events, deserve more detailed investigation in the future. These results add to a growing number of publications highlighting the use of metal-based HDACis as effective probes in cancer [5].
3,833.2
2015-10-01T00:00:00.000
[ "Biology", "Medicine" ]
Lycium Barbarum (Wolfberry) Reduces Secondary Degeneration and Oxidative Stress, and Inhibits JNK Pathway in Retina after Partial Optic Nerve Transection Our group has shown that the polysaccharides extracted from Lycium barbarum (LBP) are neuroprotective for retinal ganglion cells (RGCs) in different animal models. Protecting RGCs from secondary degeneration is a promising direction for therapy in glaucoma management. The complete optic nerve transection (CONT) model can be used to study primary degeneration of RGCs, while the partial optic nerve transection (PONT) model can be used to study secondary degeneration of RGCs, because primary and secondary degeneration of RGCs can be separated in location in the same retina in this model; in other situations, these types of degeneration can be difficult to distinguish. In order to examine which kind of degeneration LBP could delay, both CONT and PONT models were used in this study. Rats were fed with LBP or vehicle daily from 7 days before surgery until sacrifice at different time-points, and the surviving numbers of RGCs were evaluated. The expression of several proteins related to inflammation, oxidative stress, and the c-jun N-terminal kinase (JNK) pathways was detected with Western-blot analysis. LBP did not delay primary degeneration of RGCs after either CONT or PONT, but it did delay secondary degeneration of RGCs after PONT. We found that LBP appeared to exert these protective effects by inhibiting oxidative stress and the JNK/c-jun pathway and by transiently increasing production of insulin-like growth factor-1 (IGF-1). This study suggests that LBP can delay secondary degeneration of RGCs and that this effect may be linked to inhibition of oxidative stress and the JNK/c-jun pathway in the retina. Introduction Glaucoma has been considered to be a neurodegenerative disease characterized by optic nerve (ON) atrophy and irreversible loss of retinal ganglion cells (RGCs) [1]. The loss of RGC bodies may be primary (caused by direct damage to axons or cell bodies, such as crush or transection of axons) or secondary (caused by toxic effectors released from neighboring dying cells because of primary damage, or by a cell death signal from the deafferented target) [2][3][4][5]. The delay of secondary degeneration of RGCs in glaucoma is believed to provide a promising avenue for treatment. Several animal models have been used in the study of glaucoma, including complete optic nerve transection (CONT), acute and chronic ocular hypertension models and the ON crush model. However, it is difficult to distinguish primary degeneration from secondary degeneration in these commonly used models because each involves insult to all RGCs [3]. For example, in the CONT model, all the axons of RGCs are cut and therefore all RGCs will die from primary degeneration. However, in the partial optic nerve transection (PONT) model, which was established about ten years ago, only axons in the dorsal part of the ON are transected. The degeneration of the cell bodies of RGCs whose axons are transected during surgery is primary, and the degeneration of the cell bodies of RGCs whose axons are intact during surgery is secondary. According to the literature, primary degeneration occurs mainly in the superior retina and secondary degeneration mainly in the inferior retina, so the two can be separated by location [2]. Oxidative stress has been thought to be involved in secondary degeneration after PONT, even though stringent measures are taken to ensure adequate retinal circulation [6][7][8].
Inflammation has also been shown to be involved in secondary degeneration after brain trauma and spinal cord injury. However, its involvement in secondary degeneration of RGCs after PONT has not been studied. Lycium barbarum has been used as an "upper class herb" for hundreds of years in the Oriental world. It was used for the treatment of diseases related to vision, the "kidney" and the "liver" [9]. We have shown that the polysaccharides extracted from Lycium barbarum (LBP) reduce the death of cultured cortical neurons challenged by beta-amyloid, glutamate and homocysteine [10][11][12][13]. LBP also delay the degeneration of RGCs in a rat chronic ocular hypertension model [14] and a mouse acute ocular hypertension model [15] and reduce neuronal damage in a mouse transient middle cerebral artery occlusion model [16]. However, it is difficult to know whether LBP delayed primary or secondary degeneration in these models, and the mechanism or mechanisms underlying the neuroprotective effects of LBP for neuronal tissues remained unclear. The aims of this experiment were to confirm whether ON section caused retinal oxidative stress, to investigate the presence of retinal inflammation after ON section, and to determine which kind of degeneration LBP could delay and which mechanism(s) might be involved in any neuroprotective effects of LBP; we were largely successful in these aims. Ethics Statement The use of animals followed the requirements of the Cap. 340 Animals (Control of Experiments) Ordinance and Regulations in Hong Kong. All the experimental and animal handling procedures were approved by the Faculty Committee on the Use of Live Animals in Teaching and Research in The University of Hong Kong (CULATR #1850-09 and #1996-09). Animals and Procedure Adult female Sprague Dawley rats (10-12 weeks of age, weighing 250-280 g) were used in this study. The rats were housed in a temperature-controlled room subjected to a 12-hour light/12-hour dark cycle and supplied with food and water ad libitum. The preparation of LBP was as previously described [8]. The final powder was stored in a dry-box and freshly dissolved in phosphate-buffered saline (PBS; 0.01 M; pH 7.4) before use. The treatment (LBP or PBS) began 1 week before surgery (CONT or PONT) and continued until sacrifice at the scheduled time-points (see Fig. 1). The treatment was administered with a feeding needle by gavage once daily. To investigate if the degeneration speeds were similar between superior and inferior retinas after CONT, rats without treatment with PBS or LBP were sacrificed either 1 week or 2 weeks after CONT (n = 5 at either time-point). To evaluate the effects of LBP on the survival of RGCs after ON injury, the procedure was as described in Fig. 1. There were 4 to 16 animals in each group: CONT: n = 10, 8, 16 and 12 in the PBS, 0.1 mg/kg LBP, 1 mg/kg LBP and 10 mg/kg LBP groups sacrificed 1 week after CONT; n = 7 and 6 in the PBS and 1 mg/kg LBP groups sacrificed 2 weeks after CONT. PONT: n = 7 and 4 in the PBS and LBP groups sacrificed 1 week after PONT; n = 9 and 10 in the PBS and LBP groups sacrificed 4 weeks after PONT. Retrograde labelling of RGCs was achieved using Fluoro-Gold (FG) from the stump of the ON after CONT [17] or from the superior colliculi (SC) 1 week before PONT [18]. Seven rats without treatment or ON injury were sacrificed 7 days after SC labeling as controls for both CONT and PONT experiments. Eight animals were used for 1,1′-dioctadecyl-3,3,3′,3′-tetramethylindocarbocyanine perchlorate (DiI) tracing in vivo.
The death of cells in the ganglion cell layer (GCL) was studied using the terminal deoxynucleotidyl transferase-mediated dUTP-biotin nick end labeling assay (TUNEL assay), and protein expression was examined with Western-blot analysis at 12 hours, 1 day, 4 days and 1 week after PONT; there was no drug treatment (n = 3 to 5 animals in each group). Protein expression after LBP or PBS treatment was also studied with Western-blot analysis both 1 day and 1 week after PONT (n = 3 to 5 animals in each group). The rats for RGC counting (both after FG and DiI labeling) and Western-blot analysis were sacrificed using inhalation of CO2. For the TUNEL assay, the rats were sacrificed by injecting overdoses of phenobarbital followed by perfusion with 0.9% NaCl and 4% paraformaldehyde (PFA). Surgical Procedure Anesthesia and the CONT procedure were conducted as previously described [17]. The PONT surgery was similar to that described by Fitzgerald et al. [8]. The partial incision in the ON was made 1.0 mm from the optic disc and was achieved using a pair of Spring Vannas scissors (15000-08, F.S.T., Heidelberg, Germany) marked 200 µm from the tips of both blades, or using a diamond knife (G-31480, Geuder AG, Hertzstrasse, Heidelberg, Germany) with the blade fixed to a length of 200 µm. Retrograde DiI Tracing in vivo after PONT The method published by Fitzgerald et al. was adopted [8]. Briefly, the ON was partially cut and several crystals of DiI (Molecular Probes, Eugene, OR) were placed precisely into the cut sites to label the RGCs whose axons were transected (Fig. 2A). The rats were sacrificed 4 days after DiI labeling. The retinas were processed for RGC counting as described below. Optic nerves were collected, post-fixed in 4% PFA for 60 minutes and then placed into 30% sucrose in 0.1 M phosphate buffer solution overnight until they sank. They were then embedded into optimal cutting temperature embedding compound and sectioned longitudinally, and the sections were mounted. Quantification of RGCs After sacrifice, retinas were collected and post-fixed in 4% PFA for 60 minutes. Retinas were divided into the superior and inferior halves and each half was separated into three roughly equal sectors before being flat-mounted as the temporal, middle and nasal sectors (Fig. 3). Eight photographs (200 × 200 µm²) in each sector were captured along the median line, starting from the optic disc to the edges at 500-µm intervals, under a fluorescence microscope at 400× magnification [14,19]. The limitation of using photographs for cell counting rather than focusing through whole-mounted retinas is that under-estimation may occur. However, the counting method is unlikely to alter the results of this experiment, and this method has the merit that the photographs can be kept longer than sections and be recounted. Using rats without treatment with PBS or LBP, we showed similar surviving RGC densities between superior and inferior retinas either 1 week or 2 weeks after surgery (see Results). Therefore, for rats treated with PBS or LBP, only inferior retinas were used after CONT. After PONT, surviving RGCs were counted separately in superior and inferior retinas, because the degeneration speeds were different in superior and inferior retinas after PONT [2], and grouped together for the whole retinas. The counting was conducted in a double-blind manner by two persons and the data were averaged (mean ± SEM, numbers per mm²).
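Given the sampling scheme above (eight 200 × 200 µm photographs per sector, densities reported as mean ± SEM per mm²), the conversion from per-photograph counts to densities is straightforward. A minimal sketch follows; the example counts are hypothetical and not taken from the paper:

```python
import numpy as np

def rgc_density_per_mm2(counts_per_photo, photo_side_um=200.0):
    """Convert per-photograph RGC counts (each photo covering a 200 x 200 um field,
    i.e. 0.04 mm^2) into a density in cells/mm^2, reported as mean +/- SEM."""
    counts = np.asarray(counts_per_photo, dtype=float)
    area_mm2 = (photo_side_um / 1000.0) ** 2        # 0.04 mm^2 per photograph
    densities = counts / area_mm2
    mean = densities.mean()
    sem = densities.std(ddof=1) / np.sqrt(len(densities))
    return mean, sem

# Hypothetical counts from the eight photographs of one sector of a normal retina:
print(rgc_density_per_mm2([82, 79, 85, 88, 80, 84, 77, 83]))   # roughly 2056 cells/mm^2 +/- SEM
```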
TUNEL Assay TUNEL staining was previously believed to detect apoptosis only, but more recently it has been shown to detect necrosis and other types of cell death as well [20]. To determine when cell death begins in the GCL, TUNEL staining was used to examine retinas at different time-points after PONT. After sacrifice, the eyeballs were post-fixed in 4% PFA overnight at 4 °C, dehydrated with a graded series of ethanol and xylene, and then embedded in paraffin. Cross-sections (4 µm) were cut using a microtome (Micro HM 315R, Heidelberg, Germany). The manufacturer's instructions for the TUNEL assay were followed (Roche Diagnostics GmbH, Mannheim, Germany). Sections were counterstained with 4′,6-diamidino-2-phenylindole (DAPI) after the TUNEL reaction to confirm that the TUNEL staining was located in the nuclei. For consistency of analysis, only sections with the ON head were selected for observation. Three sections were selected from each animal. The positive-staining cells in the GCL in the inferior retinas were counted under a microscope at 400× magnification. The data were expressed as mean ± SEM, numbers per inferior retina. Western-blot Analysis After sacrifice, the inferior retinas were collected in PBS on ice. The procedure, including the use of lysis buffer, secondary antibody and the developing reagents, was as previously described [17,19]. After transfer onto a polyvinylidene difluoride membrane, the membranes were blocked with 5% non-fat dry milk or 3% bovine serum albumin in Tris-buffered saline with 0.05% Tween. Statistical Analysis Student's t-test was used for comparisons of two groups. For more than two groups, one-way ANOVA was used for multiple comparisons, followed by Dunn's or the Student-Newman-Keuls method as post hoc tests. Data were analyzed statistically with the Sigmastat software (Sigmastat 3.5; Systat Software Inc., Chicago, IL, USA). P < 0.05 was considered to be statistically significant. RGCs Degenerated Significantly after CONT and PONT The average densities of FG-labeled RGCs in the normal retinas were as follows: whole retinas: 2088.1 ± 64.4 RGCs/mm²; normal superior retinas: 2046.5 ± 92.4 RGCs/mm²; and normal inferior retinas: 2144.4 ± 89.8 RGCs/mm². There was no difference between the superior and inferior retinas (Student's t-test, P > 0.05). The surviving RGC densities decreased significantly in the expected areas after both CONT and PONT in animals treated with PBS or LBP (Student's t-test, P < 0.001, Fig. 4 & Fig. 5). The surviving densities of RGCs after CONT in animals without treatment with PBS or LBP were as follows: 1510.7 ± 65.6 in the superior retinas and 1402.6 ± 74.7 in the inferior retinas 1 week after CONT; 234.2 ± 19.8 in the superior retinas and 214.8 ± 8.4 in the inferior retinas 2 weeks after CONT. There were no significant differences between the superior and inferior retinas at both time-points after CONT (Student's t-test, P > 0.05). LBP did not Prevent the Primary Degeneration of RGCs after CONT One PBS group and three LBP groups with different dosages (0.1 mg/kg, 1 mg/kg and 10 mg/kg) were examined 1 week after CONT. No significant difference between the PBS group and any LBP group was detected; in addition, no significant difference among the three dosages of LBP was seen (one-way ANOVA for multiple comparisons with Dunn's method as the post hoc test; Fig. 4A, 4C & 4D).
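The statistical workflow described above (two-group t-tests; one-way ANOVA followed by post hoc tests for more than two groups) can be sketched with standard SciPy calls. This is only an illustration using made-up density values; the post hoc procedures actually used in the paper (Dunn's and Student-Newman-Keuls) are not part of SciPy and would be run separately:

```python
from scipy import stats

# Hypothetical surviving-RGC densities (cells/mm^2), not taken from the paper.
pbs_1wk = [1402.6, 1380.0, 1455.2, 1366.9, 1421.4]
lbp_1wk = [1433.1, 1410.5, 1389.7, 1470.3, 1402.2]

# Two groups: Student's t-test.
t_stat, p_two_groups = stats.ttest_ind(pbs_1wk, lbp_1wk)

# More than two groups (e.g., PBS vs. three LBP dosages): one-way ANOVA first.
pbs   = [1402.6, 1380.0, 1455.2]
lbp01 = [1390.1, 1410.8, 1370.2]
lbp1  = [1420.3, 1399.5, 1441.0]
lbp10 = [1385.7, 1405.2, 1396.4]
f_stat, p_anova = stats.f_oneway(pbs, lbp01, lbp1, lbp10)

print(p_two_groups, p_anova)   # compare against the 0.05 significance level
```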
We have previously found that 1 mg/kg LBP can significantly reduce the death of RGCs 2 weeks and 4 weeks after ocular hypertension produced by laser photocoagulation [14], and therefore 1 mg/kg LBP was adopted in the later experiments (CONT 2 weeks and PONT). Two weeks after CONT, no significant difference between the PBS and LBP groups was detected (Student's t-test, P > 0.05, Fig. 4A, 4E & 4F). LBP Delayed Secondary Degeneration of RGCs in the Inferior Retina 4 Weeks after PONT DiI labeled the cell bodies of RGCs whose axons were transected after PONT and which would be expected to die from primary degeneration. There were 460.9 ± 52.8 RGCs/mm² and 191.2 ± 48.7 RGCs/mm² labeled in the superior and inferior retinas, respectively. The difference was significant (P = 0.001, Fig. 2B, 2D & 2E) and the ratio was about 2.4:1 between superior and inferior retinas. These findings indicate that both superior and inferior retinas are vulnerable to primary and secondary degeneration after PONT. However, in the inferior retinas, significantly more RGCs would be affected by secondary injury, since the inferior retina has significantly fewer RGCs with axons transected by the PONT surgery. LBP had no effect on the survival of RGCs in whole retinas either 1 week or 4 weeks after PONT; comparison of the PBS and LBP groups showed no difference between groups at either time-point (Fig. 5A). When dividing the retinas into superior and inferior halves, there was no difference in the superior retinas between the PBS and LBP groups either 1 week or 4 weeks after PONT (Fig. 5B, 5D & 5E). LBP protected about 18% of RGCs in the inferior retinas 4 weeks after PONT but not 1 week after PONT (one-way ANOVA, P < 0.05, Fig. 5B, 5G & 5H). Combining the results from DiI labeling and the survival of RGCs, our data show that LBP appears to delay secondary degeneration of RGCs rather than to affect primary degeneration. DiI Labeled Axons Located in the Dorsal ON The sections from the optic nerves with retrograde labeling of the RGCs by DiI showed that the travel path of DiI from the cut site to the retinas was limited to the dorsal part of the nerve (Fig. 2C). Oxidative Stress and JNK Pathway(s) Involved in Degeneration of RGCs in the Inferior Retina after PONT In the inferior retinas, TUNEL staining showed that the number of positive-staining cells increased significantly 1 week after PONT (one-way ANOVA, P < 0.01). However, there were no changes at 12 hours, 1 day and 4 days. The positive staining was located in the nuclei, which was confirmed by counter-staining with DAPI (Fig. 6). The protein level of TNF-α did not increase after PONT in the inferior retinas (Fig. 7A). The expression of MnSOD increased significantly 1 day after PONT and returned to the normal level 4 days after PONT (Fig. 7B). The p-JNK/p-c-jun pathway was also involved in the degeneration of RGCs in the inferior retinas. Although the expression of p-JNK1 did not change, the level of p-JNK2/3 increased 1 day after PONT and was maintained until 1 week (Fig. 7C). P-c-jun increased with the same tendency as p-JNK2/3 (Fig. 7D). Figure 4. Effects of LBP on survival of RGCs 1 week and 2 weeks after CONT. RGCs were labeled by FG. The arrows indicate microglia, which were easily distinguished from RGCs and not counted. The blue arrowheads indicate RGCs. (A, C, D) Oral feeding of 0.1 mg/kg, 1 mg/kg and 10 mg/kg LBP showed no significant effects on the survival of RGCs 1 week after CONT (compared with the PBS group), and no significant difference among the three different dosages of LBP was detected. (A, E, F) 1 mg/kg LBP showed no significant effects on the survival of RGCs 2 weeks after CONT (compared with the PBS group). (n = 10, 8, 16, 12 in the PBS, 0.1 mg/kg LBP, 1 mg/kg LBP and 10 mg/kg LBP groups sacrificed 1 week after CONT, and n = 7 and 6 in the PBS and 1 mg/kg LBP groups sacrificed 2 weeks after CONT.) doi:10.1371/journal.pone.0068881.g004 Figure 5. Effects of LBP on RGC survival 1 week and 4 weeks after PONT. The RGCs were labeled with FG. (A) LBP did not increase the survival of RGCs either 1 week or 4 weeks after the PONT when the densities of surviving RGCs were produced from the whole retinas (NS: not significant). (B) When the retinas were divided into the superior and inferior halves, LBP did not delay the degeneration of RGCs 1 week after PONT. However, it reduced the degeneration of RGCs in the inferior retina (*P = 0.027) but not in the superior retina 4 weeks after the PONT. (F-H) The photographs of RGCs labeled by FG in both the superior and inferior retinas are about 1.5 mm away from the optic disc. In the superior retinas, the densities of RGCs were similar between the PBS and LBP groups. In the inferior retinas, the density of RGCs in the LBP group was higher than that in the PBS group. Microglia (white arrows) were easily distinguished from RGCs and not counted. (n = 7 and 4 in the PBS and LBP groups 1 week after PONT; n = 9 and 10 in the PBS and LBP groups 4 weeks after PONT.) doi:10.1371/journal.pone.0068881.g005 LBP Inhibited Oxidative Stress and Activation of the JNK Pathway as well as Transiently Increasing the Expression of IGF-1 in the Inferior Retina After LBP treatment, the expression of MnSOD increased significantly 1 day after PONT (Fig. 8A & Fig. 9A). On the other hand, LBP treatment significantly decreased the expression of p-JNK2/3 and p-c-jun both 1 day and 1 week after PONT (Fig. 8B, 8C & Fig. 9B, 9C). The effects of LBP on the expression of BDNF and IGF-1 were as follows: after PONT, LBP did not change the expression of BDNF either 1 day or 1 week after PONT (Fig. 8D & Fig. 9D). However, LBP increased the expression of IGF-1 1 day after PONT, but the effect was not maintained at 1 week (Fig. 8E & Fig. 9E). Discussion After CONT, most RGCs died rapidly from primary degeneration. After PONT, more RGCs die from secondary degeneration at a later time-window in addition to primary degeneration [2,6]. Our results showed that LBP did not delay primary degeneration of RGCs after CONT. However, LBP did delay secondary degeneration of RGCs 4 weeks after PONT. Levkovitch-Verbin et al. showed that although the genetic profile was similar for primary and secondary degeneration of RGCs, minocycline was only effective for secondary degeneration, indicating a potential difference between the two types of degeneration [21]. Our result was consistent with this in that LBP only delayed secondary degeneration but not primary degeneration. In the PONT model, the increasing expression of MnSOD or SOD2, which was demonstrated by immunohistochemistry (IHC), was used as an indicator of oxidative stress [7,22,23]. MnSOD is an anti-oxidant enzyme and can detoxify cells and tissues by converting toxic superoxide into hydrogen peroxide and diatomic oxygen. Administration of adeno-associated virus containing the SOD2 gene into eyes significantly reduces oxidative stress and nitrative stress in a rat acute ocular hypertension model [24].
The protective effect of LBP for RGCs has been related to an anti-oxidative mechanism [19]. In order to determine whether the anti-oxidant ability of LBP for RGCs was related to MnSOD, we investigated the expression levels of MnSOD in the rats treated with LBP or vehicle, and our results confirmed the anti-oxidant effect of LBP in retinas after injury. JNKs are kinases involved in both apoptotic and non-apoptotic cell death [25,26]. C-jun is a transcription factor activated by phosphorylation of JNKs and is involved in the transcription of various proteins, including some pro-apoptotic proteins [26]. Previous studies using the PONT model and IHC staining have shown that JNKs are activated at the primary injury sites and c-jun is activated both at the primary and the secondary injury sites in the retina [7,27]. There are three isoforms of JNKs: JNK1, JNK2 and JNK3; IHC cannot differentiate among these isoforms. We used Western-blot analysis to differentiate JNK1 from JNK2/3 according to their molecular weights. Our results confirmed the inhibition of the JNK/c-jun pathway by LBP; this effect has been shown previously using different models [28,29]. However, this is the first time that these effects of LBP have been demonstrated in the retina. In addition, our results showed that p-JNK2/3 rather than p-JNK1 was activated in the inferior retina after PONT. A similar result has been shown in cultured RGC-5 cells: advanced glycation end products-albumin from bovine serum increased the production of p-JNK2/3, but not p-JNK1, in vitro [30]. BDNF belongs to the neurotrophin family and is expressed both in the SC [31,32] and the retina [33]. The level of BDNF increases in the retina following ON transection [34] and after periocular injection of in situ hydrogels containing Leu-Ile, an inducer of neurotrophic factors, which increases the expression of BDNF in the retina and promotes RGC survival after ON injury [35]. IGF-1 is also a neurotrophic factor and is a key molecule determining the survival of RGCs during the early stage of ON injury [36]. However, the effects of LBP on the expression of BDNF and IGF-1 have not been previously studied. Our results show that LBP can produce a transient increase in the expression of IGF-1 in the inferior retina, but the source of this IGF-1 is not clear. A future study using IHC with this model may help to address this issue. It is known that DiI can be transported by either active processes or by diffusion [6,[37][38][39][40]. In this experiment, DiI was used to label RGCs whose axons were transected after PONT. Although it has been reported that DiI can label cells in close proximity to labeled cells in fixed tissues [37], this phenomenon has not been reported in vivo [40]. Perhaps the time available for DiI labeling in fixed tissues is much longer than that in vivo; diffusion to neighboring tissue was obvious in fixed tissues but not for tissues in vivo. Therefore, we performed an in vivo study in which diffusion of DiI was limited. Our results also showed that the axonal transport of DiI was limited to the dorsal region of the ON. Our results confirmed the neuroprotective effects of LBP for RGCs and showed a possible mechanism. The future target of our study is to provide the basis for the use of LBP in clinical conditions. The electroretinogram is used widely by ophthalmologists and optometrists for the diagnosis of retinal diseases and can evaluate retinal function by measuring the electrical responses of various cell types [41][42][43][44][45].
Therefore, we have also used the electroretinogram to evaluate the effect of LBP after PONT; this experiment is currently in progress.
5,609.8
2013-07-19T00:00:00.000
[ "Biology" ]
Rashbons: Properties and their significance In the presence of a synthetic non-Abelian gauge field that induces a Rashba-like spin-orbit interaction, a collection of weakly interacting fermions undergoes a crossover from a BCS ground state to a BEC ground state when the strength of the gauge field is increased [Phys. Rev. B 84, 014512 (2011)]. The BEC that is obtained at large gauge coupling strengths is a condensate of tightly bound bosonic fermion-pairs whose properties are solely determined by the Rashba gauge field -- hence called rashbons. In this paper, we conduct a systematic study of the properties of rashbons and their dispersion. This study reveals a new qualitative aspect of the problem of interacting fermions in non-Abelian gauge fields, i.e., that the rashbon state induced by the gauge field for small centre of mass momenta of the fermions ceases to exist when this momentum exceeds a critical value which is of the order of the gauge coupling strength. The study allows us to estimate the transition temperature of the rashbon BEC, and suggests a route to enhance the exponentially small transition temperature of a system with a fixed weak attraction to the order of the Fermi temperature by tuning the strength of the non-Abelian gauge field. The nature of the rashbon dispersion, and in particular the absence of rashbon states at large momenta, suggests a regime of parameter space where the normal state of the system will be a dynamical mixture of uncondensed rashbons and unpaired helical fermions. Such a state should show many novel features including pseudogap physics. I. INTRODUCTION Cold atoms are a promising platform for quantum simulations. Controlled generation of synthetic gauge fields [1][2][3] has provided impetus to the realization of novel phases in cold atomic systems. The recent generation of synthetic non-Abelian gauge fields in 87Rb atoms [3] is a key step forward in this regard. While a uniform Abelian gauge field is merely equivalent to a galilean transformation, even a uniform non-Abelian gauge field nurtures interesting physics [3][4][5]. The clue that a uniform non-Abelian gauge field crucially influences the physics of interacting fermions came from the study of bound states of two spin-1/2 fermions in its presence [6]. The remarkable result found for spin-1/2 fermions in three spatial dimensions interacting via an s-wave contact interaction in the singlet channel is that high-symmetry non-Abelian gauge field configurations (GFCs) induce a two-body bound state for any scattering length, however small and negative. The physics behind this unusual role of the non-Abelian gauge field, which produces a generalized Rashba spin-orbit interaction, was explained by its effect on the infrared density of states of the noninteracting two-particle spectrum. The non-Abelian gauge field drastically enhances the infrared density of states, and this serves to "amplify the attractive interactions". A second most remarkable feature demonstrated in ref. [6] is that the wave function of the bound state that emerges has a triplet content and an associated spin-nematic structure similar to those found in liquid 3He. The above study [6] motivated the study of interacting fermions at a finite density in the presence of a non-Abelian gauge field [7]. At a finite density ρ (∼ k_F^3, where k_F is the Fermi momentum), the physics of interacting fermions in a synthetic non-Abelian gauge field is determined by two dimensionless scales.
The first scale is associated with the size of the interactions, −1/(k_F a_s), where a_s is the s-wave scattering length, and the second one, λ/k_F, is determined by the non-Abelian gauge coupling strength λ. For small negative scattering lengths (−1/(k_F a_s) ≫ 1), the ground state in the absence of the gauge field is a BCS superfluid state with large overlapping pairs. The key result first demonstrated in ref. [7] is that, at a fixed scattering length, even if small and negative, the non-Abelian gauge field induces a crossover of the ground state from the just-discussed BCS superfluid state to a new type of BEC state. The BEC state that emerges is a condensate of a collection of bosons which are tightly bound pairs of fermions. Remarkably, at large gauge couplings λ/k_F ≫ 1, the nature of the bosons that make up the condensate is determined solely by the gauge field and is not influenced by the scattering length (so long as it is non-zero), or by the density of particles. In other words, the BEC state that is attained in the λ/k_F ≫ 1 regime at a fixed scattering length does not depend on the value of the scattering length, i.e., the BEC is a condensate of a novel bosonic paired state of fermions determined by the non-Abelian gauge field. These bosons were called "rashbons" since their properties are determined solely by the generalized Rashba spin-orbit coupling produced by the gauge field. As shown in ref. [7], the rashbon is the bound state of two fermions at infinite scattering length (resonance) in the presence of the non-Abelian gauge field. The crossover from the BCS state to the "rashbon BEC" state (RBEC) induced by the gauge field at a fixed scattering length is to be contrasted with the traditional BCS-BEC crossover [8][9][10][11] obtained by tuning the scattering length [12][13][14], but with no gauge field. Gong et al. [15] have investigated the crossover including the effects of a Zeeman field along with a non-Abelian gauge field. Certain properties of rashbons in the EO gauge field (explained later) have been investigated in references [16] and [17]. It was shown in ref. [7] that the Fermi surface of the non-interacting system (with a_s = 0) in the presence of the non-Abelian gauge field undergoes a change in topology at a critical gauge coupling strength λ_T (of order k_F). For weak attractions (−1/(k_F a_s) ≫ 1), the regime of gauge coupling strengths where the crossover from the BCS state to the RBEC state takes place coincides with the regime where the bare Fermi surface undergoes the topology change. The properties of the superfluid state (such as the transition temperature) for λ ≳ λ_T were argued to be primarily determined by the properties of the constituent anisotropic rashbons (see Sect. V of ref. [7]). It is, therefore, necessary and fruitful to undertake a detailed study of the properties of rashbons and their dispersion, and this is the aim of this paper. In this paper, we study the properties of rashbons and their dependence on the nature of the non-Abelian gauge field, i.e., we obtain the properties of rashbons for the most interesting gauge field configurations. This study entails a study of the anisotropic rashbon dispersion, i.e., a determination of its energy as a function of its momentum by the study of the two-body problem in a non-Abelian gauge field with a resonant scattering length (1/(λ a_s) = 0). In addition to the determination of the properties of rashbons, we report here a new qualitative result.
It is shown that when the momentum of a rashbon exceeds a critical value, which is of the order of the gauge coupling strength, it ceases to exist. Stated otherwise, when the centre of mass momentum of the two fermions that make up the bound pair exceeds a value of the order of the gauge coupling strength, the bound state disappears. To uncover the physics behind this result, the two-fermion problem in a gauge field is investigated in detail for a range of scattering lengths and centre of mass momenta. The study reveals a hitherto unknown feature of the non-Abelian gauge fields: while the non-Abelian gauge field acts as an attractive-interaction amplifier for fermions with centre of mass momenta q much smaller than the gauge field strength (q ≪ λ), the gauge field suppresses the formation of bound states of fermions with large centre of mass momenta (q ≫ λ). In fact, it is demonstrated here that when q ≫ λ, a positive scattering length (very strong attraction) is necessary to induce a bound state of the two fermions, quite contrary to q ≪ λ where a bound state exists (essentially) for any scattering length. The results we report here have two significant outcomes. (1) A full qualitative picture of the BCS-BEC crossover scenario in the presence of a non-Abelian gauge field is obtained (see Fig. 1) based on the results reported here. Most notably, it is shown that the transition temperature of a system of fermions with a very weak attraction can be enhanced to the order of the Fermi temperature (determined by the density) by the application of a non-Abelian gauge field. (2) Our two-body results at large centre of mass momenta suggest that the normal state of the fermion system in a non-Abelian gauge field will be a "dynamic mixture" of rashbons and interacting helical fermions. These could therefore show many novel features such as pronounced pseudogap characteristics (see ref. [20] and references therein). The next section, II, contains the preliminaries, which include the formulation of the problem. Sec. III contains a report on the properties of rashbons, and this is followed by Sec. IV, which discusses the bound state of two fermions for arbitrary centre of mass momentum and scattering length for specific high-symmetry gauge fields. The importance of the results obtained here is discussed in Sec. V, and the paper is concluded with a summary in Sec. VI. II. PRELIMINARIES The Hamiltonian of the fermions moving in a uniform non-Abelian gauge field that leads to a generalized Rashba spin-orbit interaction is [21] H_R = ∫ d³r Ψ†(r) [ (p²/2) 1 − p_λ · τ ] Ψ(r), where Ψ(r) = {ψ_σ(r)}, σ = ↑, ↓ are fermion operators, p is the momentum, 1 is the SU(2) identity, τ_µ (µ = x, y, z) are the Pauli matrices, p_λ = Σ_i p_i λ_i e_i, and the e_i's are the unit vectors in the i-th direction, i = x, y, z. The vector λ = λ λ̂ = Σ_i λ_i e_i describes a point in the gauge-field-configuration (GFC) space; we refer to λ = |λ| as the gauge-coupling strength. Throughout, we have set the mass of the fermions (m_F), the Planck constant (ℏ) and the Boltzmann constant (k_B) to unity. In this paper we specialize to λ = (λ_l, λ_l, λ_r), as this contains all the experimentally interesting high-symmetry GFCs. Moreover, it is shown in Refs. [6] and [7] that this set of gauge fields captures all the qualitative physics of the full GFC space. Specific high-symmetry GFCs are obtained for particular values of λ_r and λ_l: λ_r = 0 corresponds to the extreme oblate (EO) GFC; λ_r = λ_l corresponds to the spherical (S) GFC, and λ_l = 0 corresponds to the extreme prolate (EP) GFC.
The interaction between the fermions is described by a contact attraction in the singlet channel, H_υ = υ ∫ d³r ψ†_↑(r) ψ†_↓(r) ψ_↓(r) ψ_↑(r). Ultraviolet regularization [22] of the theory described by H = H_R + H_υ is achieved by exchanging the bare interaction υ for the scattering length a_s via 1/υ + Σ_{|k|≤Λ} 1/k² = 1/(4π a_s), where Λ is the ultraviolet momentum cutoff. Note that a_s is the s-wave scattering length in free vacuum, i.e., when the gauge field is absent (λ = 0). The one-particle states of H_R are described by the quantum numbers of momentum k and helicity α (which takes on values ±): |k α⟩ = |k⟩ ⊗ |α k̂_λ⟩. The one-particle dispersion is ε_{kα} = k²/2 − α|k_λ|, where k_λ is defined analogously to p_λ and |α k̂_λ⟩ is the spin-coherent state in the direction α k̂_λ. The two-particle states of H can be described using the basis states |q k α β⟩ = |(q/2 + k) α⟩ ⊗ |(q/2 − k) β⟩, where q = k_1 + k_2 is the centre of mass momentum and k = (k_1 − k_2)/2 is the relative momentum of two particles with momenta k_1 and k_2. Note that q is a good quantum number for the full Hamiltonian (H). The non-interacting two-particle dispersion is the sum of the corresponding one-particle energies. In the presence of interactions, bound states emerge as isolated poles of the T-matrix; the corresponding bound-state energy is obtained as a root of the associated secular equation and is conveniently expressed through the scattering threshold E_th(q) and the binding energy E_b(q), both of which depend on q as indicated. In the absence of the gauge field (λ = 0), the bound state exists only for a_s > 0 and E_b(q) = −1/a_s² is independent of q. The threshold is E_th(q) = q²/4. Physically, this corresponds to the fact that a critical attraction is necessary in free vacuum (λ = 0) for the formation of the two-body bound state. As shown in ref. [6], the state of affairs changes drastically in the presence of a non-Abelian gauge field. For q = 0, the presence of the gauge field always reduces the critical attraction needed to form the bound state and, in particular, for special high-symmetry GFCs (e.g. λ = (λ_l, λ_l, λ_r) with λ_r ≤ λ_l) a two-body bound state forms for any scattering length [6]. III. PROPERTIES OF RASHBONS The bound state that emerges in the presence of the gauge field when the scattering length is set to the resonant value 1/a_s = 0 is the rashbon. As argued above, the binding energy of the rashbon state for all the GFCs considered here (except for the EP GFC) is positive. The energy of the rashbon state E_R(q = 0) determines the chemical potential of the RBEC. Other properties of the RBEC are determined by the rashbon dispersion E_R(q), and in particular the transition temperature will be determined by the mass of the rashbons. The curvature of the rashbon dispersion E_R(q) at q = 0 defines the effective low-energy inverse mass of rashbons. The dispersion is in general anisotropic and the inverse mass is, in general, a tensor. However, due to their symmetry, for the GFCs considered in this paper (λ of the form (λ_l, λ_l, λ_r)), E_R(q) = E_R(q_l, q_r), where q_l is the component of q in the x-y plane and q_r is the component along e_z. Thus, the inverse mass tensor is completely specified by its principal elements: the in-plane inverse mass (m_l⁻¹) and the "perpendicular" inverse mass (m_r⁻¹). An effective mass m_ef, defined as m_ef = (m_r m_l²)^{1/3}, is useful in the discussions that follow. In addition to the anisotropy in their orbital motion, rashbons are intrinsically anisotropic particles. Their pair wave function has both a singlet and a triplet component; the weight of the pair wave function in the triplet sector, η_t, is the triplet content.
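As a concrete illustration of the one-particle dispersion ε_{kα} = k²/2 − α|k_λ| and the GFC parametrization λ = (λ_l, λ_l, λ_r) introduced above, here is a minimal numerical sketch (units ℏ = m_F = 1; the momentum and coupling values are arbitrary examples, not taken from the paper):

```python
import numpy as np

def helicity_dispersion(k, lam, alpha):
    """One-particle energy eps_{k,alpha} = k^2/2 - alpha*|k_lambda| with hbar = m_F = 1,
    where k_lambda has components lambda_i * k_i and alpha = +1/-1 is the helicity."""
    k = np.asarray(k, dtype=float)
    lam = np.asarray(lam, dtype=float)
    k_lam = lam * k                     # component-wise, as in p_lambda = sum_i p_i lambda_i e_i
    return 0.5 * np.dot(k, k) - alpha * np.linalg.norm(k_lam)

# Spherical GFC: lambda_r = lambda_l, here lam = (1, 1, 1) in arbitrary units.
k = [0.3, 0.0, 0.0]
print(helicity_dispersion(k, [1.0, 1.0, 1.0], +1))   # 0.045 - 0.3 = -0.255 (lower helicity branch)
print(helicity_dispersion(k, [1.0, 1.0, 1.0], -1))   # 0.045 + 0.3 =  0.345 (upper helicity branch)
```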
The triplet component is time-reversal symmetric but does not have spin-rotational symmetry; it is therefore a spin nematic. Keeping this interesting aspect in mind, we shall also investigate and report the triplet content of rashbons and its dependence on the gauge field. Before presenting the results we make a general observation. The threshold energy (E_th) becomes increasingly flat as a function of q in the small q/λ regime as one approaches the spherical gauge field in the GFC space. In fact, for the spherical GFC, it is exactly constant in the small q/λ regime (see Fig. 3). The mass is therefore determined entirely by the variation of the binding energy with q (this may be contrasted with the free vacuum case discussed before). It is reasonable therefore to expect that the effective mass of rashbons is always greater than twice the bare fermion mass and that it is largest for the spherical GFC. Fig. 2(a) shows the in-plane, perpendicular, and effective masses for different GFCs. Rashbons emerging from the spherical GFC have the highest m_ef and those from the EP GFC have the least. It is interesting to note that, apart from the spherical GFC, there is yet another GFC (λ_r ≈ 0.65λ; see Fig. 2(a)) where the low-energy dispersion is isotropic, i.e., the rashbon has a scalar mass. The triplet content is shown in Fig. 2(b) for different GFCs. η_t is minimum (1/4) for the spherical GFC and maximum (1/2) for the EP GFC. A detailed study of the rashbon dispersion as a function of its momentum q (the centre of mass momentum of the fermions that make up the rashbon) revealed a hitherto unreported and rather unexpected feature. The full rashbon dispersion as a function of q for the spherical (S) GFC is shown in Fig. 3. The rashbon energy increases with increasing q and eventually, for q/λ ≳ 1.3, there is no two-body bound state! This curious result motivated us to perform a more detailed investigation of the dispersion of the bound fermions (bosons) at arbitrary scattering lengths (away from resonance, which corresponds to rashbons), in order to uncover the physics behind this phenomenon. This study, conducted for specific high-symmetry GFCs, is presented in the next section. IV. DISPERSION OF BOSONS AT ARBITRARY SCATTERING LENGTHS FOR SPECIFIC GFCS In this section we investigate the dispersion of the bosonic bound state of two fermions at arbitrary scattering length. Results of the boson dispersion obtained by solving eqn. (3) will be presented for the S and EO GFCs. A. Spherical GFC The spherical (S) GFC corresponds to λ_r = λ_l and hence produces an isotropic boson dispersion, as discussed before. The boson dispersion depends only on q = |q|. Solving eqn. (3), the boson dispersion obtained for various scattering lengths is as shown in Fig. 3(a). The key features of this spectrum are the following. For any scattering length, however large and positive, there exists a critical centre of mass momentum q_c such that when q > q_c the bound state ceases to exist. This is best understood by fixing attention on a particular momentum q. When the momentum is "small", there is a bound state for any attraction. This is in fact the case for all q < q_o, where q_o = 2λ/√3. For q > q_o, a critical attraction described by a nonzero scattering length a_sc is necessary for the formation of a bound state. For q = q_o⁺, the critical scattering length is a_sc = −2/(√3 λ).
On increasing q, a stronger attractive interaction is required to produce a bound state and, when q reaches ∼ 4λ/3, a resonant attraction is necessary to produce a bound state. For q ≳ 4λ/3, a very strong attractive interaction described by a small positive scattering length is necessary to produce a bound state. In fact, for q ≫ λ, the critical scattering length scales as a_sc ∼ 1/√(λq). The dependence of a_sc on the centre of mass momentum is shown in Fig. 3(b). How do we understand these results? Here the ε_0 − γ model introduced in Ref. [6] comes to our rescue. The model states that if the infrared density of states g_s(ε) ∼ ε^γ for 0 ≤ ε ≤ ε_0, where ε is the energy measured from the scattering threshold, then the critical scattering length is given by √ε_0 a_sc ∝ γ Θ(γ)/(2γ − 1), where Θ is the unit step function. Note that for γ < 0 the critical scattering length vanishes. It is evident that there is a drastic change in the infrared density of states at q = q_o. In fact, this special momentum q_o is such that the threshold energy corresponds to the state where the relative momentum k between the pair of fermions vanishes. Clearly, for q < q_o, there are many degenerate k states that produce a nonzero density of states at the threshold. In fact, when q = 0, the density of states diverges as 1/√ε, i.e., γ = −1/2. For nonzero q, the density of states receives contributions from the ++, −−, +− and −+ helicity channels. It can be shown that in the regime q ≫ λ the ++ channel has a density of states with ε^{3/2} behaviour. The +− and −+ channels have a higher threshold, which is λq larger than the threshold of the ++ channel; the density of states of the +−/−+ channels goes as √ε from this higher threshold. These arguments provide an estimate of ε_0 ≈ qλ. The result for the critical scattering length is then a_sc ∼ 1/√q, precisely as obtained from the full numerical solution shown in Fig. 3(b). As a by-product of the analysis of the boson dispersion, we were able to obtain an analytical expression for the mass of the bosons (which is isotropic in this case). At a given λ, as expected, the mass for a small positive scattering length a_s > 0 is twice the fermion mass. The mass at resonance is the rashbon mass, which is equal to (3/7)(4 + √2) m_F ≈ 2.32 m_F. Interestingly, the value of m_B/m_F in the small negative scattering length limit is the integer 6. B. Extreme oblate GFC The extreme oblate (EO) GFC corresponds to λ_r = 0 with λ_l = λ/√2. It can be easily shown that for this GFC E(q_l, q_r) = E(q_l, 0) + q_r²/4, so that E(q_l, 0) as a function of q_l provides all the nontrivial features of the two-body problem arising from this gauge field. Thus, as for the S GFC (Fig. 3), for any given scattering length the bound state disappears after some critical q_l. Fig. 4 shows the boson dispersion for various scattering lengths. Remarkably, we find that the dispersion has very similar features to those found for the spherical GFC, i.e., for any given scattering length there is a q_c such that for q > q_c the two-body bound state ceases to exist. Clearly, this is a generic feature of the boson (bound fermion-pair) dispersion in a gauge field. For this GFC, m_r is just twice the fermion mass. The in-plane mass (m_l) extracted from the two-body dispersion is shown in Fig. 5. m_l for a small positive scattering length is again twice the fermion mass. The resonance value, which corresponds to the rashbon, is m_l ≈ 2.4 m_F. This result agrees with refs. [16] and [17]. It is again interesting to note that the value of m_l/m_F in the deep BCS limit is the integer 4.
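As a quick arithmetic check of the resonance (rashbon) mass quoted above for the spherical GFC (nothing new, just evaluating the stated expression):

```latex
\frac{m_R}{m_F} \;=\; \frac{3}{7}\left(4+\sqrt{2}\right) \;=\; \frac{3\times 5.4142\ldots}{7} \;\approx\; 2.32
```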
C. Discussion The analysis of the dispersion of the boson (the bound state of two fermions obtained in a gauge field) reveals that the boson ceases to exist when the momentum of the boson exceeds a critical value. For the case of rashbons (bosons obtained at the resonance scattering length), the critical momentum is of the order of the strength of the gauge field. The analysis presented here shows that this is again because of the influence of the gauge field in altering the infrared density of states. When the momentum is smaller than the magnitude of the gauge coupling, the gauge field works to enhance the infrared density of states. On the other hand, for large momenta, the gauge field has the opposite effect, i.e., it depletes the infrared density of states. V. SIGNIFICANCE OF THE RESULTS The above results allow us to infer many key aspects of the physics of interacting fermions in the presence of a non-Abelian gauge field. First, these results allow us to estimate the transition temperature. For large gauge couplings, the transition temperature, as noted above, will be determined by the mass of the rashbons. We have argued (and demonstrated) that the mass of the rashbons is always greater than twice the fermion mass. Thus the transition temperature of the RBEC will always be less than that of the usual BEC of bound pairs of fermions obtained in the absence of the gauge field by tuning the scattering length to small positive values. [Figure 6 caption: T_c in the small λ/k_F limit is obtained from mean-field theory (an analytical approximation is given in the text); T_c in the large λ/k_F limit is obtained from the condensation temperature of the tightly bound pairs of fermions (an analytical form for the S GFC can be obtained from eqn. (6) and eqn. (8)). The horizontal dashed line corresponds to the rashbon T_c. The vertical line indicates the gauge coupling corresponding to the Fermi-surface topology transition [7].] However, there is something remarkable that a synthetic non-Abelian gauge field can achieve. Consider a system with a weak attraction (small negative scattering length). In the absence of the gauge field, the transition temperature in the BCS superfluid state is exponentially small in the scattering length. Interestingly, the transition temperature can be brought to the order of the Fermi temperature by increasing the magnitude of the gauge field strength (keeping the weak attraction, i.e., the small negative scattering length, fixed). While T_c in the BCS regime is determined by the pairing amplitude (∆), in the BEC regime it is determined by the condensation temperature of the emergent rashbons [19]. The mean-field estimate of the former (i.e. for small k_F|a_s|, a_s < 0 and small λ/k_F) is obtained by simultaneously solving the gap and number equations, where ξ_{kα} = ε_{kα} − µ. In this limit, the chemical potential at T_c is almost equal to that of the noninteracting system at zero temperature, i.e., µ(T_c, a_s, λ) ≈ µ(0, 0⁻, λ), and ∆(T=0)/T_c ≈ π/e^γ, where ∆(T=0) is the pairing amplitude at zero temperature and γ is Euler's constant (≈ 0.577) [7]. The T_c on the RBEC side can be extracted from the effective mass (m_ef) as the condensation temperature of the bosonic pairs, where we recall that m_ef = (m_r m_l²)^{1/3}. Using the information on the mass given earlier (eqn. (6) for the S GFC and Fig. 5 for the EO GFC) one can obtain T_c in this regime as a function of λ a_s in the S and EO GFCs. In particular, the rashbon T_c in the S case is ≈ 0.188 T_F and in the EO case it is ≈ 0.193 T_F. The rashbon T_c can be obtained for various GFCs using the m_ef shown in Fig. 2(a).
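The quoted rashbon T_c values can be reproduced from the ideal-Bose-gas condensation temperature of bosons of mass m_ef at half the fermion density. The following sketch assumes that standard formula (the paper's own eqn. (8) is not reproduced in the extracted text, so this is an assumption, though it is consistent with the numbers quoted above):

```python
from math import pi

ZETA_3_2 = 2.612  # Riemann zeta(3/2)

def rbec_tc_over_tf(m_ef):
    """Ideal-Bose-gas condensation temperature for bosons of mass m_ef at density rho/2
    (rho = k_F^3 / (3 pi^2)), in units of T_F = k_F^2 / 2; hbar = m_F = k_B = 1."""
    k_F = 1.0
    T_F = k_F**2 / 2.0
    n_B = k_F**3 / (3.0 * pi**2) / 2.0                    # boson density = half the fermion density
    T_c = (2.0 * pi / m_ef) * (n_B / ZETA_3_2) ** (2.0 / 3.0)
    return T_c / T_F

print(rbec_tc_over_tf(2.32))                              # S GFC rashbon mass       -> ~0.188
print(rbec_tc_over_tf((2.0 * 2.4**2) ** (1.0 / 3.0)))     # EO GFC, (m_r m_l^2)^(1/3) -> ~0.193
```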
Since, among all GFCs, the rashbon mass corresponding to the S GFC is the largest, it also corresponds to the condensate with the smallest T_c. The results obtained in both the BCS and RBEC limits for k_F a_s = −1/4 in the S and EO GFCs are shown in Fig. 6. We can see, as advertised, that T_c increases by two orders of magnitude with increasing gauge coupling strength λ. These considerations also allow us to infer an overall qualitative "phase diagram" in the T−a_s−λ space, as shown in Fig. 1. What is the nature of the system above T_c? There is a regime in the parameter space shown in Fig. 1 where the normal state can be quite interesting. Consider, for example, λ ≈ 1.5 k_F. The ground state will be "very bosonic", i.e., a condensate of rashbons in the zero-momentum state. On heating the system above the transition temperature, the system becomes normal. Rashbons are excited to higher-momentum states and eventually break up into their constituent fermions, since there is no bound state at higher momenta. There should, therefore, be a temperature range where the system is a dynamical mixture of uncondensed rashbons and high-energy helical fermions, a state that should show many novel features such as, among other things, a pseudogap. VI. SUMMARY The new results of this paper are: 1. A systematic enumeration of the properties of rashbons, including closed-form analytical formulae, for various gauge field configurations. 2. A detailed study of the rashbon (boson) dispersion, which results in a new qualitative observation. Although a zero centre-of-mass-momentum bound state exists for any scattering length for many GFCs, the bound state vanishes when the centre-of-mass momentum exceeds a critical value. Thus, although the gauge field acts to promote bound-state formation for small momenta, it acts oppositely, i.e., inhibits bound-state formation, for large momenta. We provide a detailed explanation of the physics behind this phenomenon. These results allow us to make two important inferences. 1. For a fixed weak attractive interaction, the exponentially small transition temperature of a BCS superfluid can be enhanced by orders of magnitude, to the order of the Fermi temperature of the system, by increasing the magnitude of the gauge coupling. 2. There is a regime of the T−a_s−λ parameter space where the normal phase of the system will have novel features. We hope that these results will stimulate further experimental and theoretical studies on this topic.
6,837.6
2011-08-24T00:00:00.000
[ "Physics" ]
Reducing the channel diameter of polydimethylsiloxane fluidic chips made by a 3D-printed sacrificial template and their application for flow-injection analysis Fluidic chips have attracted considerable interest in recent years for their potential applications in analytical devices. Previously, we developed a method to fabricate polydimethylsiloxane (PDMS) fluidic chips via templates made using a low-priced commercial Fused Deposition Modeling (FDM) type 3D printer and polymer coatings. However, in general, methods using a template cannot form a flow channel thinner than the template thickness and the width. In this study, the inner wall of a PDMS fluidic chip was coated with PDMS to create a chip with a channel inner diameter smaller than a template. Then, by measuring the flow signal of methyl orange with a single line, the basic properties of the non-coated and coated chip were investigated. As a result, almost the same flow profile was obtained in non-coated and coated chips at the same linear velocity and the same sample injection length. By coating and narrowing the channel width, it is possible to save the amount of sample and carrier solution. Measuring hydrazine in water using a coated chip was also tried. The calibration curve indicated good linearity in the range of 1–6 ppm. However, a concentration point of 7 ppm deviated. The reason for this deviation was presumably due to inadequate mixing of the sample and reagent. By decreasing the flow rate, the calibration curve indicated good linearity in the range of 1–7 ppm. Graphical abstract Supplementary Information The online version contains supplementary material available at 10.1007/s44211-022-00070-1. Introduction Fluidic chips have garnered considerable interest in recent years for their potential application to analytical devices [1][2][3]. Flow-injection analysis (FIA) is an effective method in analytical chemistry [4][5][6], and some research on FIA using fluidic chips has been reported [7][8][9]. One advantage of performing FIA on a fluidic chip is that the flow path can be compacted. The flow path in conventional FIA is composed of tubes; however, bending with a large curvature is difficult, so there is a limit to the densification of the flow path. Conversely, the curvature of the flow path in a fluidic chip can be increased so that a flow path with a complex and dense shape is possible. Since the FIA flow profile is closely related to the liquid flow, observing liquid flow facilitates FIA research. Fluidic chips can comprise transparent materials, such as glass, PDMS, and resins, thereby allowing the observation of liquid flow in the flow path clearly. On the contrary, in conventional FIA systems, polytetrafluoroethylene tubes or polyetherether ketone tubes, which obstruct the visualization of liquid flow, are generally used for the flow path. Recently, investigating fabricating fluidic chips using 3D printing has proliferated in many fields [10]. Using 3D printing in the fabrication of fluidic chips has the advantage of enabling the accurate placement of dense channels designed on a computer. PDMS is one of the transparent materials that are often used for fluidic chips [11], but a special 3D printer is required to print this material directly [12]. Therefore, it is common to use a template method when manufacturing a PDMS fluid chip [11,13,14]. 
In a previous study [15], we developed a method to fabricate polydimethylsiloxane (PDMS) fluidic chips via templates made using a low-priced commercial FDM-type 3D printer and polymer coatings, and it was demonstrated that this fluidic chip can be used for FIA. However, in general, the template method cannot form a flow channel thinner than the template thickness and width. In FIA, the amount of the sample and the carrier solution can be saved by reducing the inner diameter of the flow path. Thus, the development of a reduction method is useful. In this study, the inner wall of a PDMS fluid chip was coated with PDMS to form a chip with a channel inner diameter smaller than a template. Then, the flow profile of this improved channel chip was measured. PDMS chip fabrication For PDMS chip fabrication, a 3D-printed template composed of ABS was coated with PEG2000. This coated template was immersed in PDMS prepolymer, and subsequently the PDMS prepolymer was cured. Space was created between the ABS template and PDMS by removing the liquid PEG2000 from the channel. A flow path was formed by dissolving the ABS template with a solvent. The PDMS chip fabrication procedure is described in detail in our previous report [15]. Inner wall coating to reduce channel diameter Because the PDMS prepolymer is a highly viscous liquid, it is difficult to pour it into the elongated flow path. Therefore, PDMS prepolymer and heptane were mixed in a 1:1 ratio to reduce the viscosity, and the diluted PDMS prepolymer was injected into the channel. Air was injected into the flow channel with an air pump. It was left stationary at room temperature under air flow until the prepolymer in the channel was cured to some extent. This curing time at room temperature took between half a day and 2 days. To cure the prepolymer completely, the chips were placed on a hot plate and heated at 70 °C under flowing air. The inner diameter of the channel was reduced by repeating the prepolymer coating and curing process multiple times. The coating process is illustrated in Fig. S1. Adobe illustrator CS6 was used for analyzing the image. Flow-injection measurement system Liquid flow was controlled with a syringe pump (KD Scientific, USA). Regarding the syringe, all-plastic syringes (HENKE SASS WOLF) were used. SEC2000-DH (BAS Co., Ltd.) was used as the light source. An SEC2000 UV/VIS spectrometer (BAS Co., Ltd.) was used as an absorption detector. A SEC-2F spectroelectrochemical flow cell (BAS Co., Ltd.) was used as the flow cell. To extend the optical path length of the flow cell, the internal gasket was made using PDMS. The internal structure of the flow cell is shown in Fig. S9. A manual injector (7725i, Rheodyne) was used for sample injection. The system was constructed by connecting each instrument and chip with Teflon or polyethylene tubing. Samples were measured without any pretreatment. Microsoft Excel software was used for graph drawing and data integration. Measurement of methyl orange A hydrochloric acid (0.1 M) aqueous solution was used as the carrier solution. As a sample, 1 × 10 −4 M methyl orange solution was prepared by dissolving in an approximately 1 × 10 −3 M sodium hydroxide aqueous solution. The detection wavelength was 510 nm. The flow rate and injected sample volume for each chip are shown in Table S1. The sample injection volume was controlled by the volume of the sample loop. 
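Since the comparison between non-coated and coated chips is made at the same linear velocity and the same sample injection length (Table S1), the required volumetric flow rate and injection volume scale with the square of the channel diameter. The short sketch below illustrates that conversion, together with the average-diameter estimate from the water-fill weighing described in the Results and discussion section; all numerical values are placeholders, not the actual chip parameters.

```python
import math

WATER_DENSITY = 1.0  # g/mL, assumed for the water-fill weighing

def avg_channel_diameter_mm(mass_filled_g, mass_empty_g, channel_length_mm):
    """Average channel diameter assuming a circular cross-section.

    The channel volume is the weight gained when the channel is filled with water,
    the cross-sectional area is volume / length, and the diameter follows from A = pi d^2 / 4.
    """
    volume_mm3 = (mass_filled_g - mass_empty_g) / WATER_DENSITY * 1000.0  # 1 mL = 1000 mm^3
    area_mm2 = volume_mm3 / channel_length_mm
    return 2.0 * math.sqrt(area_mm2 / math.pi)

def matched_flow_parameters(flow_rate_mL_min, injection_uL, d_old_mm, d_new_mm):
    """Flow rate and injection volume for the coated chip that keep the linear
    velocity and the injected sample length equal to those of the non-coated chip."""
    scale = (d_new_mm / d_old_mm) ** 2
    return flow_rate_mL_min * scale, injection_uL * scale

# Placeholder example: a ~0.8 mm channel narrowed to ~0.5 mm by coating
d_old = avg_channel_diameter_mm(mass_filled_g=10.10, mass_empty_g=10.00, channel_length_mm=200.0)
q_new, v_new = matched_flow_parameters(flow_rate_mL_min=0.30, injection_uL=20.0,
                                        d_old_mm=d_old, d_new_mm=0.5)
print(f"estimated non-coated diameter: {d_old:.2f} mm")
print(f"coated chip: flow rate {q_new:.3f} mL/min, injection volume {v_new:.1f} uL")
```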
Results and discussion Reduction of channel diameter Figure 1 shows a 3D model of a template, a 3D-printed template, and a fluidic chip (before the reduction of channel size). The fabrication procedure is described in a previous report [15]. A Hilbert-shaped channel was used to investigate the effect of coatings on structures with many bends. The thickness and width of the template are related to the diameter of the channel. In this study, since the nozzle diameter of the 3D printer was approximately 0.4 mm, the theoretical minimum printable length in the x-y direction was considered to be approximately 0.4 mm. We set the height of one layer to 0.3 mm, so 0.3 mm can be considered the minimum printable length in the z direction. In general, thinner templates appear to be beneficial in forming narrow channels, but they present some problems; if the template is too thin, it will be difficult to maintain its structure and strength, and it will be difficult to coat it with PEG. Therefore, in this study, the template was printed at a size larger than the theoretical minimum printable length. After forming a large channel, it was gradually narrowed with a coating. Figure 2 shows photographs before and after coating the chip three times. Figure S2 shows photographs of the changes in the chip after each coating. Figure S3 shows photographs of top and cross-sectional views of the noncoated and coated channels. During coating of the chips shown in Fig. 2 and Fig. S2, air was flowed from the same inlet three times. As the number of coatings increased, the channel diameter became smaller. In addition, after coating, the rectangular shape of the corner of the flow channel became rounded. It is speculated that the reason for this rounding is the surface tension [16,17]. The surface tension is a phenomenon in which a liquid forms the smallest possible surface by attracting liquid molecules to each other due to an intermolecular force at the boundary with the air. In this study, the surface tension of PDMS prepolymer and heptane is considered to play an important role in the shape of the coating. When air was flowed after injecting PDMS prepolymer into the flow path, the PDMS prepolymer applied to the surface was moved to the outlet by air flow. Since liquid PDMS moved with the flow of air, the thickness of the coating was dependent on the location. The coating tended to thicken from the inlet to the outlet. Near to the inlet, the PDMS prepolymer was removed by the flow of air, resulting in a lesser amount of PDMS prepolymer and a thinner coating. Conversely, near the outlet, the PDMS prepolymer continued to be supplied, resulting in a thicker coating. However, a constant channel width is desirable. One way to reduce the difference in chip coating thicknesses was to reverse the direction of air flow as the coating was repeated. Figure S4 shows a photograph after coating three times with and without changing the air flow direction. When the direction of air flow was changed, the variation in coating thickness along the flow path was reduced. The air flow rate during this coating was less than 0.2 L/min. Fig. 1 a 3D model of a sacrificial template. b Image of the ABS template on the print bed after 3D printing. c Image of the PDMS chip fabricated by the template method. The contrast and brightness of the pictures were adjusted Fig. 2 Photographs of the chip a before coating and b after coating three times. 
The contrast and brightness of the pictures were adjusted The air flow rate appears to be a factor affecting the thickness of the coating. In this study, the effect of the air flow rate on the coating thickness was investigated. Figure 3 shows a comparison of the coating thickness after one coating under different air flow rates. Detailed results are shown in Fig. S5. The coating tended to become thinner as the air flow rate increased. It is presumed that the coating becomes thinner, because the amount of PDMS prepolymer removed increases with increasing the air flow rate. In this experiment, a chip with a short channel length of approximately 10 cm was used, because the air pump had a limited ability to flow air. In general, the longer is the channel, the larger is the pressure required for permeation. In the case of a chip with a channel length of approximately 2 m, a large amount of pressure is required to pass a high volume of air through the channel. However, the pump used in this study could not apply large pressures. Therefore, it was not possible to flow air at a high rate through a long flow path. To widen the range of the flow rate, the channel length was shortened to reduce the required pressure. To investigate the reproducibility of the coating thickness, two chips were coated at the same time, and the average channel widths were compared. Calculating the average channel width was performed in the following order: (1) the channel length was measured. The method for measuring the channel length of chip is shown in Fig. S7; (2) the channel volume was measured. The volume of the channel was calculated by measuring the weight of the chip before and after filling the channel with water; (3) the channel cross-sectional area was calculated by dividing the volume by the length; (4) the channel cross-sectional shape was investigated. To calculate the channel width from the cross-sectional area, it is necessary to investigate the cross-sectional shape. In Fig. S3, the cross-section of the channel can be approximated as being circular for both non-coated and coated channels. Hence, the average channel width can be calculated by considering the cross-section of the channel as being circular; and (5) the average channel width was calculated. The experimental results are shown in Fig. S12. The coating thicknesses were slightly different even under almost the same experimental conditions. The thickness of the coating is considered to depend on the roughness and shape of the wall surface. Therefore, it is likely that differences in the channel shape and wall of each chip caused the difference in thickness of coating. Flow-injection measurements Experiments were conducted to demonstrate that the PDMS chip, whose flow path is narrowed by coating, is useful for flow-injection experiments. First, by measuring the flow signal of methyl orange with a single line, the basic properties of the non-coated and coated chip were investigated. In this experiment, the measurements were performed under the same linear velocity and the same sample injection length. The parameters of each chip are given in Table S1. A schematic illustration of flow-injection system for methyl orange is shown in Fig. 4a. Figure S6 shows the photos of the flow-injection analysis system and PDMS chips. In this system, it was possible to visually observe a solution of methyl orange flowing through the channel of the chip. The flow profiles measured at the same linear velocity and the same sample injection length are shown in Fig. 
4b. As a result, almost the same flow profile was obtained in noncoated and coated chips at the same linear velocity and the same sample injection length. This result indicates that it is possible to save the amount of sample and carrier solution by coating and narrowing the channel width. Next, a measurement of hydrazine in a water using this chip was tried. Hydrazine is designated by Japan's Ministry of Health, Labor and Welfare as one of the items to be examined in water quality tests as "Items for further study". Therefore, it is useful to analyze hydrazine for ensuring the safety of water. Furthermore, the analysis of hydrazine is also important for water management in various facilities. It is known that hydrazine has the property of removing dissolved oxygen in water. Thus, hydrazine is often added to water to prevent the corrosion of metal materials [18]. For hydrazine measurements in this study, DMAB was reacted with hydrazine to form a yellow-colored azine complex [19,20]. The reaction is illustrated in Scheme 1. By measuring the absorption of this azine complex, the concentration of hydrazine can be estimated. A schematic illustration and photo of the flow-injection system used for hydrazine are shown in Fig. 5a and Fig. S8a, respectively. In these measurements, the coated chip shown in Fig. S6c was used. In this system, an EtOH-Water mixture (EtOH(99.5):Water = 1:1) was used as the carrier. When 100% water carriers were used, air bubbles were observed in the flow path, which prevented measurements. The appearance of these bubbles in the fluidic chip could be easily confirmed by visual observation. A photo of air bubbles in the flow path is shown in Fig. S10. To suppress the generation of air bubbles, an EtOH-Water mixture was used as the carrier. One of the advantages of using a transparent PDMS chip is that it enables the direct observation of phenomena in the flow path. Thus, the transparency of the PDMS chip is useful for FIA research. The flow profiles and calibration curve of peak area integration are shown in Fig. 5b, c, respectively. The calibration curve indicated good linearity in the range of 1-6 ppm. However, the concentration point of 7 ppm was deviated. To investigate the cause, the flow rate of both the carrier and reagent solutions was lowered to 0.14 mL/min for measurements. The flow profiles and calibration curve of the peak area integration are shown in Figs. S11a and S11b, respectively. The calibration curve indicated good linearity in the range of 1-7 ppm. For a flow rate of 0.3 mL/min, it is assumed that the concentration point of 7 ppm was deviated due to insufficient reactions of the sample and the reagent. It is speculated that this deviation can be improved by changing the method of mixing the solutions in the chip. In Fig. 5b, a peak appears in the flow profile even at 0 ppm. The cause of this peak is considered to be the schlieren phenomenon, which results from an optical inhomogeneity of transparent media. In this study, the carrier is EtOH:water = 1:1, and the solvent of the reagent solution was EtOH:water = 9:1. Therefore, when a standard solution using an aqueous hydrochloric acid solution as a solvent was injected, a peak was observed due to the difference in the refractive indices of EtOH and water. Figure S8b shows the flow profiles when an aqueous hydrochloric acid solution, EtOH, and EtOH:water = 3:1 were injected. 
When the injected sample was an EtOH or EtOH-water mixture, the difference in refractive index for the carrier was smaller than that for water, and hence the peak was also smaller. Conclusions In previous work, we manufactured a PDMS fluid chip using a 3D-printed ABS template and a polymer coating. However, this method could not form a flow channel thinner than the template. This problem was addressed in this study; we achieved a reduction in the flow path diameter of the PDMS chip by coating the flow path with PDMS. Then, by measuring the flow signal of methyl orange with a single line, the basic properties of the non-coated and coated chip were investigated. As a result, almost the same flow profile was obtained in non-coated and coated chips at the same linear velocity and the same sample injection length. This result indicates that it is possible to save the amount of sample and carrier solution by coating and narrowing the channel width. Next, a measurement of hydrazine in water using this chip was tried. The calibration curve indicated good linearity in the range of 1-6 ppm. However, the concentration point of 7 ppm was deviated. When the flow rate of both the carrier and reagent solutions was lowered to 0.14 mL/min for measurements, the calibration curve indicated good linearity in the range of 1-7 ppm. At a flow rate of 0.3 mL/min, it was assumed that the concentration point of 7 ppm was deviated due to an insufficient reaction of the sample and the reagent. It is speculated that this can be improved by changing the method of mixing the solutions in the chip. Supporting information A schematic illustration of the process of coating a PDMS chip is shown in Fig. S1. Photographs demonstrating the changes in the chip after each coating are shown in Fig. S2. Photographs of the top and cross-sectional views of the non-coated and coated channels are shown in Fig. S3. Photographs after coating three times with and without changing the air flow direction are shown in Fig. S4. A comparison of the coating when the air flow rate was changed is shown in Fig. S5. Photos of the FIA system and the PDMS chips used for the measurements are shown in Fig. S6. The method of measuring the channel length of the chip (Fig. S6b and S6c) is shown in Fig. S7. Photos of the FIA system and the flow profile are shown in Fig. S8. The flow cell and their components are shown in Fig. S9. Photos of air bubbles in the flow path are shown in Fig. S10. The flow signal of the calibration graph for hydrazine (flow rate of 0.14 ml/min for both the carrier and the reagent solutions) and the calibration curve of the peak area integration are shown in Fig. S11. A comparison of the parameters of two chips coated at the same time is shown in Fig. S12. The calculated parameters, flow rate, and injected sample volume for each chip used in Fig. 4b are given in Table S1. This material is available free of charge on the Web at http:// www. jsac. or. jp/ anals ci/.
4,727.8
2022-02-15T00:00:00.000
[ "Engineering", "Materials Science" ]
Al plasma jet formation via ion stream compression by surrounding low-Z plasma envelope In our earlier papers it was demonstrated that the plasma pressure decreases with the growing atomic number of the target material. In this context a question arose about the possibility of collimating the Al plasma outflow by using the plastic plasma as a compressor. For that purpose a plastic target with an Al cylindrical insert of 400 μm in diameter was used. The experiment was carried out at the PALS laser facility. The laser provided a 250 ps (FWHM) pulse with an energy of 130 J at the third harmonic frequency (λ₃ = 0.438 μm). The focal spot diameters of 800, 1000, and 1200 μm ensured predominance of the plastic plasma, its transversal extension being large enough for effective Al plasma compression. To study the Al plasma stream propagation and its interaction with the plastic plasma, a 3-frame interferometric system and a 4-frame x-ray camera were used. The information on the distribution of electron temperature in the outflowing Al plasma was provided by x-ray spectroscopy. The experimental results reported in the paper are discussed by virtue of a simple theoretical analysis. INTRODUCTION In 2006 we reported a simple method of plasma jet generation based on irradiation of flat massive targets with atomic number Z ≥ 29 (Z = 29 corresponds to Cu) by the third harmonic of a single partly defocused laser beam [1]. Our experiments at the Prague Asterix Laser System (PALS) laser facility have proved that annular target irradiation plays a decisive role in the plasma jet formation [2]. However, this mechanism acts properly only in the case of heavy target materials. If the target is made of light materials like plastic (CH) or Al, no plasma jets are observed, even though the initial laser intensity distribution is the same. However, our investigations of the plasma stream emitted from a joint of light and heavy target materials (Al-Cu or CH-Cu) [3] have shown that the plasma jet does not propagate normally to the target surface but is deflected to the side of the heavier material. The theoretical analysis [?] allowed us to deduce that the ratio of plastic and copper plasma pressures amounts to 1.35 and to conclude that the lighter the plasma, the higher its pressure. Simultaneously, a natural question arose about the possibility of Al plasma jet creation by using plastic plasma as a compressor. The compression of the ablation plasma by the plastic plasma envelope opens the possibility of exploiting plasma jets of low-atomic-number materials, e.g., in laboratory astrophysics. EXPERIMENTAL SETUP, CONDITIONS AND RESULTS The experiment was carried out with the use of the PALS iodine laser facility. The plasma was generated by a single beam of the third harmonic laser radiation (λ = 0.438 μm) with the parameters E_L = 130 J, τ ≈ 250 ps, and focal spot diameters of 800, 1000, and 1200 μm. The laser irradiated a plastic target with an Al cylindrical insert of 400 μm in diameter. Two diagnostics have been used to study the Al plasma jet formation: (i) a three-frame interferometric system using the second laser harmonic (λ = 0.638 μm) and (ii) a four-frame x-ray pinhole camera registering the soft x-ray plasma radiation in the range of 10-1000 eV. The exposure time of the x-ray camera was below 2 ns. The electron temperature distribution in the outflowing Al plasma was provided by x-ray spectroscopy.
To ensure that the plastic plasma volume is large enough for effective Al plasma compression, we started our investigations with a focal spot diameter of 800 μm. Then the focal spot diameter was gradually increased in steps of 200 μm up to 1200 μm. The interferometric measurements have shown that the Al plasma compression grows with increasing focal spot diameter, the best result corresponding to the largest one. Therefore our presentation and discussion concentrate on the results obtained for the 1200 μm focal spot. In order to distinguish the Al plasma from the whole plasma bulk probed by the interferometry, an x-ray framing camera was used. This diagnostic is capable of resolving the Al and plastic plasma components due to a large difference in their radiation intensities, as demonstrated by the sample results presented in Fig. 1a,b. The Al plasma jet, in the form of a narrow bright streak, can clearly be distinguished on the plastic plasma background, Fig. 1b. In the early period of the plasma evolution, the constrained Al plasma jet forms close to the target surface. Its diameter is approximately equal to 100 μm and it propagates with an average velocity of 7 × 10⁷ cm/s. This velocity is considerably larger than the axial velocity of the pure Al plasma (∼5 × 10⁷ cm/s). It means that the interaction of the plastic and Al plasmas results not only in the Al plasma jet creation but also in its acceleration. Differences between the plasma configurations of the pure Al plasma and the Al/plastic plasma have been determined from the interferometric measurements. In Fig. 1c, the electron density distributions along the axis for both targets are drawn. For z > 2 mm, the Al plasma compression leads to an increase in the on-axis electron density by approximately a factor of 2 in comparison with that of the plasma launched from the bare Al target. On the other hand, the lifetime of such a plasma configuration is relatively short and the electron density at the axis decreases. Changes of the electron density at the axis vs. time for both targets used, corresponding to z = 2.1 mm, i.e., at the local maximum of the electron density (see Fig. 1c), are presented in Fig. 1d. The Al plasma compression at this cross-section lasts about 10 ns. Later on, the electron density drops considerably below the value characteristic of the bare Al plasma. In contrast to this, the latter plasma conserves its structure for a longer time. The information on the distribution of electron temperature in the outflowing Al plasma was provided by x-ray spectroscopy [4]. The Al K-shell self-emission spectra were recorded using an imaging-mode x-ray spectrometer equipped with a mica (004) crystal spherically bent to a radius of 150 mm. The calibrated x-ray spectra corresponding to the above-described laser-irradiated Al targets are presented in Fig. 2a. The intensity ratios of selected spectral transitions in the H- and He-like ions provide a well-established method for rough estimates of the electron temperature T_e [5]. Close to the target surface, the best-fit T_e values were determined from different combinations of parent lines and their satellites; at larger distances the ratios of dominant lines were used. The electron temperatures determined from spatially resolved spectral lineouts of the bare and constrained-flow Al targets are shown in Fig. 2b.
The results of the electron temperature measurements can be explained as follows. The lower electron temperature of the Al plasma at the target surface in the case of the constrained-flow Al target results from the lower laser radiation intensity in the centre. As mentioned above, the laser intensity distribution in the transverse beam cross-section has a depression, which just covered the Al insert. The higher electron temperature in the case of the bare Al target corresponds to the higher intensity of the laser radiation in the off-centre region. With growing distance from the target surface, the electron temperature of the Al plasma launched from the constrained-flow Al target becomes higher than that produced from the pure Al target. The predominance of this temperature over the other is induced by the Al plasma compression, which grows with time. It seems that this predominance should be considerably higher for z > 2 mm, where the Al plasma compression is very effective. However, the resonance line intensities were too low for a reliable electron temperature determination at these distances. Nevertheless, the tendency of the electron temperatures to branch off with increasing distance from the target is clearly seen. THEORETICAL ANALYSIS OF THE EXPERIMENTAL RESULTS According to the results presented in the paper [6], the ratio of the evaporated masses of aluminium (Al) and plastic (CH), i.e. the mass per irradiated surface unit in g/cm², can be estimated by a formula in which χ = κ/(ρC_V) is the thermal diffusivity coefficient, ρ is the initial target density, C_V is the specific heat, and κ is Spitzer's coefficient of electron heat conductivity. The estimation gives m_Al/m_CH = 1.99. At later times the energy absorbed by the plasma is completely transformed into the kinetic energy of the plasma expansion; the estimate of the ratio of the plastic and Al plasma expansion velocities then gives u*_CH/u*_Al = (m_Al/m_CH)^{1/2} = 1.41. Since the plastic plasma overtakes the Al one, it induces an enhancement of the plastic plasma pressure beyond that of the Al plasma. This results in plasma motion towards the axis. The time-averaged plasma pressure at the axis in the case of radial compression can be estimated as (p₁ + p₀)/2, where p₁ = p₀(ρ₁/ρ₀)^γ. Due to the low plasma temperature the thermal conductivity is neglected. The difference in the plasma pressures, (p₁ + p₀)/2 − p₀ = (p₁ − p₀)/2, leads to the plasma reflection from the axis. Consequently, the equation for the plasma momentum can be written as Eq. (2), where t = R_Al/c_s and c_s = 8.43 × 10⁶ cm/s is the sound speed. Equation (2) gives the compression value. CONCLUSIONS In this work we have demonstrated the possibility of Al plasma jet creation by using the plastic plasma as a compressor. In our experiments we took advantage of the fact that the lighter the plasma, the higher its pressure. Based on the theoretical analysis, we can conclude that the pressure difference between plasmas with different atomic numbers results from differences in their expansion features. The estimation of the ratio of the plastic and Al plasma expansion velocities gives the value of 1.41. As a result, the plastic plasma overtakes the Al one. It induces an enhancement of the plastic plasma pressure beyond the pressure of the Al plasma and, in consequence, the Al plasma motion towards the axis.
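As a quick numerical check of the estimates above, the snippet below reproduces the velocity ratio from the quoted mass ratio and evaluates the characteristic time t = R_Al/c_s entering Eq. (2). The insert radius R_Al = 200 μm (half of the 400 μm insert diameter) is an assumption made here for illustration only.

```python
import math

mass_ratio = 1.99                          # m_Al / m_CH, as estimated in the text
velocity_ratio = math.sqrt(mass_ratio)     # u*_CH / u*_Al = (m_Al / m_CH)^(1/2)

c_s = 8.43e6                               # cm/s, sound speed quoted in the text
R_Al = 0.02                                # cm, assumed insert radius (400 um diameter / 2)
t = R_Al / c_s                             # characteristic time entering Eq. (2)

print(f"u*_CH / u*_Al = {velocity_ratio:.2f}")      # ~1.41
print(f"t = R_Al / c_s = {t * 1e9:.1f} ns")         # ~2.4 ns
```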
Figure 1. Sequences of x-ray plasma images formed at the Al target (a) and at the plastic target with the Al cylindrical insert (b), diagrams characterizing the electron density distribution along the axis for the Al target and the plastic target with the Al insert (c), and the density evolution with time (d). Figure 2. X-ray spectroscopic results: (a) spatially resolved K-shell spectral emission, and (b) the electron temperature corresponding to both target types.
2,415
2013-11-01T00:00:00.000
[ "Physics" ]
ON INFORMATICS — Bookkeeping plays a vital role in dealing with records of day-to-day financial transactions from invoices until payment. It is also a method of documenting all company transactions in order to create a collection of accounting documents. Studies show that an evolution of bookkeeping management from manual record keeping to electronic record keeping had simplified most burden of bookkeepers as well as more reliable and accurate. Bookkeeping includes, in particular, classifying items correctly and entering financial details into an accounting system. However, with the rise of artificial intelligence, automated bookkeeping system is common to large businesses tasks at real time with hassle free. The system will function more than just journal management but also a decision-making tool to any businesses. Despite the benefits of the system, many small and medium enterprises especially in Malaysia still hesitate to implement the system. Artificial intelligence will further improve automated bookkeeping making it simpler and efficient for all levels of businesses. This paper presents an Artificial Intelligence perspective and methods used in automated bookkeeping focuses on invoices processes such as Optical Character Recognition (OCR), for document recognition, machine learning and auto journal record entries. Besides that, its challenges to be implemented in small and medium enterprise. The result of these studies highlighted benefits in the automated bookkeeping process to suit Malaysian small and medium enterprises. Future work will look at the suggested intelligence features to be implemented for a more efficient automated bookkeeping for small and medium enterprise. I. INTRODUCTION Manual bookkeeping as well as electronic bookkeeping are two common methods widely used in business organizations to maintain and observe their records. Bookkeeping is the systematic measurement technique of any business process and exercise that plays an important aspect in developing the growth of an organization. It is crucial specially to maintain business revenue and expenses. The manual record keeping system consists of paper-based journals or entries and they are separated into sections for receipts and settlements [1], [2]. In electronic bookkeeping, software is used to maintain journals automatically. Transaction processes are entered conveniently and simply by the bookkeeper and matches the significant account [3], [4]. Implementing an artificial intelligence (AI) indicator such as pattern recognition, and expert business rules in promoting automated functionalities to a few stages of record keeping will give value added into record-keeping technologies. Electronic bookkeeping methods have been widely used by business organizations as well as the small and medium enterprises (SME) because it is time effective and proper journals management. Many business owners especially in the SMEs hardly understand financial information provided by the accounting department; hence technology came with easier automated accounting for business organizations [5], [6] has come out with the advantages of automated bookkeeping and accounting which declare that most business owners are not capable of challenging accounting; thus, they need a more simple, reliable, able to collaborate and cost-effective software to accommodate their accounting tasks. 
According to [7], many SMEs in Malaysia, lack proper record keeping and have poor information and communication technology (ICT) adoption in managing their business accounting. This situation indicates that SMEs are not responsive to the significance and security of accounting records. Most of the SME business owners do not use automated bookkeeping. Without a clear understanding of why automated bookkeeping will benefit most SME owners, business will have difficulty in terms of real time transaction, especially in this digital era. The main goal of this study was to highlight AI implementation in a few stages of the automated bookkeeping processes, primarily in invoicing and journal record entries based on previous studies as well as to analyse the SME benefits and challenges in implementing automated bookkeeping, based on a case study to provide key elements concerning bookkeeping management. The sections below describe certain important understanding dealing in the areas of bookkeeping management, its relationship to AI adaption and the implementation challenges for SMEs. A. Bookkeeping Bookkeeping is the systematic technique of any business processes and exercise, which plays an important role in developing the growth of an organization [1], [8], [1] further indicated that organizational sustainability assessments play a critical role in monitoring and evaluating success for goals and achieving sustainable growth and corporate sustainability. Invoicing refers to the coupons provided and obtained by each unit and person in the purchasing and selling of goods during the business processes [9]. This is an integral part of bookkeeping. Meanwhile, the expansion of technology in businesses, using electronic commerce, is seen as a valuable cost reduction alternative to the digitalization of billing [10]. The Journal Record Entry commonly provides a regular review of cash receipts or bills and a weekly report of receipts and invoices as well as expenditures [11]. The journal is the primary and essential book for reporting the events of everyday transactions. Recording a correct entry into the report would display the business' proper financial position not only to people individually but also to external customers. B. AI implementation in Invoice Recognition and Automated Journal Entries Artificial intelligence (AI) is the analysis and integration of techniques that enable behaviour requiring human intelligence to be carried out on computer devices. A number of viewpoints, in the study of intelligence can be perceived from the perspectives of philosophy, psychology, cognitive science, arithmetic and medicine, where all of these domains do have an impact on the discipline and must be considered in modelling a human intelligent behaviour on a computer [12]. [12], in his study, stated that machine learning (ML), is a field of artificial intelligence that offers a great potential [13], for the creation of genuinely robust systems that can be responsive to environmental changes, contextual problems or circumstances. The capacity to establish a system that can learn and react from examples will make it possible for artificial intelligence to solve the problem of extracting information especially from the expertise, evaluating the knowledge, and then executing it. 
Artificial intelligence (AI) is a vital technology which supports the day-to-day social life as well as the economic activity, and the usability of AI contributes significantly to the sustained development of Japan's economies and solves numerous social problems [14]. AI has attracted scrutiny as a key to development in developed countries such as Europe and the United States, and emerging countries like China and India. The emphasis is mainly on the development of new AI technologies for information communication technology (ICT) and robotic technology (RT). The advancement of artificial intelligence technology has increasingly entered the accounting sector, which plays a significant role in enhancing business performance, deducing errors, minimizing, and managing corporate risks, enhancing business efficiency, and enhancing human resource efficiency. The implementation accounting software has largely replaced manual accounting such as filling in receipts and financial statements, which has made it possible for many accountants to reduce complex accounting operations [15]. Today, the enterprises go towards the digital and automated approach and have started digitalizing bills [10]. This will cause less difficulties to the generation of data transaction and push the requirements to revolutionize the data to generate a meaningful piece of information and insight to the business. The insight into the business data would be beneficial to decision-makers in making more informed decisions [16]. A rule based expert system has made a lot of benefits in billing decision-making. This reduces erroneous claims and claim rejections, increases customer satisfaction, and improves company's revenue as real time performances are also taking place [17]. [18] in their studies found that automation via automated journal entry will reduce repetitive tasks [19] human errors, time consuming and stress, and that journal entry automation could support accountants effectively in all aspects especially data entry and bookkeeping. [20] mentioned that when manual entries and tasks are automated, more focus would be given to analytical services by experts in the field. Optical Character Recognition (OCR) is a machine readable and editable system that infers images captured by a scanner [21]. The OCR process consists of pre-processing, segmentation of images and recognition. The segmentation process is the main role and the most significant and complex process of the overall OCR processes [21]. With OCR empowered by AI, invoice recognition and data extraction could be done without the same set of rules or templates [22]. The dominant portion of recognition is done through the pre-process input image. These images are digitized through a scanner, digital camera or software [23], [24]. Poor results of OCR classification will result in a higher error rate and error in information extracted for the next step such as journal entry. C. Small and Medium Enterprise in Malaysia The process of innovation in small and medium enterprises (SME) is challenging and the strong correlations between promotion factors and innovation still have not been adequately clarified compared to large businesses [25]. According to the SME Corporation Malaysia, SMEs in Malaysia are described as a pyramid of entrepreneurs. SMEs are categorized into the manufacturing sector and other sectors. 
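To make the invoice-processing pipeline discussed above concrete, the sketch below strings together the three stages described in the literature: OCR of a scanned invoice, rule-based field extraction, and an automatically generated double-entry journal record. It assumes the Tesseract engine via the pytesseract package, and the file name, field patterns, and account names are illustrative only; this is a minimal sketch, not a reproduction of any system cited in this paper.

```python
import re
from datetime import date

import pytesseract            # assumes a local Tesseract OCR installation
from PIL import Image

def ocr_invoice(image_path: str) -> str:
    """Stage 1: recognize the text of a scanned invoice image."""
    return pytesseract.image_to_string(Image.open(image_path))

def extract_fields(text: str) -> dict:
    """Stage 2: rule-based extraction of a few illustrative fields."""
    total = re.search(r"(?:TOTAL|AMOUNT DUE)\s*:?\s*RM?\s*([\d,]+\.\d{2})", text, re.I)
    inv_no = re.search(r"INVOICE\s*(?:NO\.?|#)\s*:?\s*(\S+)", text, re.I)
    return {
        "invoice_no": inv_no.group(1) if inv_no else None,
        "total": float(total.group(1).replace(",", "")) if total else 0.0,
    }

def journal_entry(fields: dict, supplier: str) -> list:
    """Stage 3: a simple automated double-entry record for an accrual purchase."""
    amount = fields["total"]
    today = date.today().isoformat()
    return [
        {"date": today, "account": "Purchases",        "debit": amount, "credit": 0.0},
        {"date": today, "account": f"AP - {supplier}", "debit": 0.0,    "credit": amount},
    ]

if __name__ == "__main__":
    text = ocr_invoice("sample_invoice.png")        # placeholder file name
    fields = extract_fields(text)
    for line in journal_entry(fields, supplier="ABC Supplies"):
        print(line)
```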
In the manufacturing sector, the ratio of employees in small enterprises is from 5 to or equal to 75 with sales turnover ranging from RM300, 000 to RM15mil while in the other sectors, the ratio of employees is between 5 to 30 with sales turnover of RM300,000 to RM3million annually. Meanwhile at the top part of the pyramid the number of employees is between 75 to 200 with sales turnover of RM15million to RM50million while in the other sectors the ratio of employees is between 30 to 75 with sales turnover of RM3million to or below RM20million [7]. D. Challenges for implementation Bookkeeping in SME. SMEs have less priority in implementing accounting functions within the company due to lack of knowledge in accounting [26]. The management of accounting systems and methods should explore and discover a new costeffective way, through accessible resources and properties for potent decision-making [27]. An inappropriate and inaccurate documents and records processes will result in unsuccessful business to any organization [28] Poor recordkeeping is a major factor in contributing to failure in business progress. According to [29], lack of knowledge and skills in financial and management are the main causes for the closure of SMEs. [30] describe the problems as the introduction of management accounting standards including the lack of access to finance for SMEs to implement modern accounting practices, insufficient understanding of creativity due to a lack of skills and experience, restricted use of new technologies and shortage of human capital. [31] concurs with the other authors on the need for SMEs in empowering accounting knowledge to sustain their existence in the business world. [8] further state that the majority of the SME owners have minimum basic knowledge of accounting and maintain their own record-keeping due to the steep cost implicated in preparing financial statements. Next, the structure of the paper are as follows: section II briefly discussed methodology which consists of method selection and approach in the subject area. Section III will discuss the findings, analysis and discussion. Finally, section IV is the conclusion of the research. II. MATERIAL AND METHOD This study consists of content analysis of past reviews to acquire the necessary information. The research goal was to analyse the AI approach in the bookkeeping processes focusing on invoice recognition and automated journal entry. The reviews were sorted based on their keywords, methods, description, and results. The reviews were gathered from various sources mainly indexed by Scopus and WOS. Other than that, the challenges and benefits faced in implementing this system in SMEs were measured. The study adopted a few case studies supported by secondary data through a systematic literature review approach which identified, evaluated, and deduced all available research relevant to the theme of the study. According to [32], case studies are deemed suitable for contemporary phenomenon research. The selections for this study were based on the criteria. This was done to narrow down the search based on the scope area as shown in Table 1. A. Case Study This section describes three case studies. Their findings would be used further in the analysis and discussion in the next few sections. The case studies are represented by Case Study A, Case Study B and Case Study C. 1) Case Study A (CS-A): Case study A (CS-A) has hundreds of various activities scheduled and arranged across Europe and Finland. 
They are experts in the fields of visual design, decoration, stage prepping and setting up, interior design and the creation of powerful events that integrate all their different abilities. They realise and fully understand that each occurrence reflects the brand of the client; thus, they prioritise consistency in all their tasks. They use Finago's Procountor, an automated financial reporting tool, as do several other Finnish and Nordic firms. Accountor Finago is one of the elements of the Accountor Group's SME software business. Procountor is a user-friendly platform with hundreds of thousands of users, but sales invoices are submitted manually from the company administration or the ERP applications to Procountor. Once the number of invoices hikes up, it becomes tedious to submit them manually. Hence, they reached out to Scoro by Youredi to accelerate and automate the processing of invoices. Scoro is an all-in-one enterprise management software that helps teams collaborate effectively, execute tasks more efficiently and track revenues. Implementing Procountor saves them a great deal of time and is not error-prone, being free of manual data entry and tasks. Now they can be assured that almost all invoices will be sent to Procountor in real time without any vital detail being lost. As all invoices from (CS-A) are now moved directly from the Scoro enterprise management system to the financial management platform through the cloud-based Procountor, they can easily continue to grow their core business rather than just perform administrative tasks. Now they can concentrate on what they do best to provide excellent event opportunities to their clients. 2) Case Study B (CS-B): Case Study B (CS-B) is a family-owned business that distributes beer and non-alcoholic beverages. They distribute the products across Alabama and North Carolina. The company realized that they needed automatic software to manage sudden spikes in invoices. They started processing an average of 2000 invoices per month, excluding the product invoices, manually. Delays in processing approvals and controlling invoices became an issue since they were having a multi-tiered
approval, but they were not centralized and visible; hence follow -ups and data tracking took longer as they had to chase for invoice paper and signatures. The problems prevented their work from running smoothly. After they implemented the automatic software (Beanworks) to manage their piles of invoices, they were able to make remote management. Communication between the branches speeded up and managers were able to access invoices and make amendments in cloud, resulting in easily slated for payment. During the covid-19 pandemic, managers could approve invoices easily through mobile apps and the team members were able to process them at any place. The team was saving more time on data entry and manual tasks. Invoices were automatically captured, coded, and routed to the approver. 3) Case Study C (CS-C): Case Study C (CS-C) is an accountancy practice firm offering a range of services including accounting and tax advice, bookkeeping and business development. The founder leverages digital innovation in order to automate outdated and inefficient administrative processes. When they decided to eliminate manual data entry, they chose AutoEntry, driving significant returns of investment (ROI), for the firm consequently. (CS-C) wanted to adopt a data entry solution that was quick to process, simple to use and highly accurate. It also wanted one that could capture data from a range of documents, including purchase and sales invoices as well as bank statements, to effectively serve its range of clients. As part of its due diligence, it decided to try a free trial of AutoEntry in early 2017. It loved how intelligent AutoEntry was, whilst being so easy to operate. The firm now uploads over 300 documents a month onto AutoEntry either via the web or the mobile app, and it can monitor the progress of these items through its personalised dashboard, helping to streamline service delivery. At the same time, it had the capacity to take on more bookkeeping clients, helping to increase its turnover by over 50% by automating its bookkeeping data entry. III. RESULT AND DISCUSSION The breakdown of challenges in implementing bookkeeping is shown in the Table 2 based on previous literature. From the Table we can conclude that knowledge constraints mostly the challenges faced by the business owners to implement bookkeeping followed by poor business management, cost, and record errors due to human errors. Hence, an automated function of bookkeeping was introduced to simplify record-keeping tasks with less supervision. The function of automated bookkeeping is extended to 'no-data-key-in' where AI, which functions in the electronic software, will extract all the data information from the OCR -scanned image. Hence human errors will be reduced especially when data and information are manually keyed-in by staffs in the normal electronic system. B. Artificial Intelligence Approach In many areas of AI application, machine learning (ML) algorithms were applied, and researchers put a lot of work into improving the accuracy of the ML algorithm. ML is used as an injection of the AI approach in OCR recognition as well as other processes in automation such as detecting invoices based on templates [22]. The evolution on the AI approach is as shown in Table 3. The results of this study were collected through research articles using the inductive approach based on the criteria set out and discussed earlier. 
The automated system is able to intelligently recognize and identify invoices based on a few templates stored in the database and learn from the system for a new invoice for which templates are not available in the database by doing a template-matching technique from previous learning. By maintaining basic OCR processes, the additional processes would be template matching and information exporting. Based on Table 3, the AI solutions on the invoice recognition processes were continuously done by the researcher to produce an automated process. [33] proposed an automatically input from the invoice into a computer by scanning the document, while [34] proposed an automatic invoice document classification to classify the types of invoices. By and by, researchers proposed more automation techniques such as character recognition, automated data extraction from invoices and intelligent invoice recognition based on templates. All of these projects showed that automated invoice recognition was getting attention towards AI functionalities. References AI Project Method Results Additional information [33] Proposed a solution to increase operational efficiency of financial staff in which Arabic numerals and Chinese characters were automatically input from the invoice into a computer Using the linear whole block moving method in each vertical segment, a new fast algorithm is put forth to detect and rectify the slant image. Highest accuracy rate for Chinese character is 97.2% and accuracy rate for Arabic is 95.2% The adhesion of form line and characters makes character segmentation difficult, becoming a major factor in increasing the recognition rate. [34] Proposed an automatic invoice document classification system to classify invoices based on the analysis of the graphical information present in the document. Using k−Nearest Neighbor (k−NN) classifier since no training phase is required. The closed world classification achieves 99% of correct classification in case of 1-NN while the open world classifier performance reaches 79% accuracy. OCR techniques and label indexing will improve the obtained results and could provide a less compelling alternative to bar code identification systems. [35] Explore the utility of Artificial Neural Networksbased approach to the recognition of characters. A unique multilayer perception of neural network is built for classification using backpropagation learning algorithm. Technique used on 6 different geometrical features to extract 48 parameters are fed into ANN. Recognition rate of 84.8% for 10 class problem in which out of 75 samples, 65 samples are correctly recognized. Other kinds of pre-processing and neural network models may be tested for a better recognition rate in the future. [36] A classification system to recognize the first page of invoices from scanned documents. Natural Language Processing (NLP) Logistic regression scores the best with average 95.02% accuracy. Errors are partly because of OCR errors. This work mainly uses words, the smallest unit in document layout, to extract features. [37] OCRMiner system designed to extract the indexing metadata of structured documents obtained from an image scanning process and OCR. Text Blocks method classified by rule based and machine learning (ML) classifier. OCRMiner system, enables the integration of text analysis techniques, with positional layout features of the recognized documents blocks achieving an average of 80.1% precision. Various kinds of OCR errors detected during the experiment. 
[38] Project: Receipt extraction and OCR verification to improve text detection. Method: Connectionist Text Proposal Network (CTPN), which exploits rich context information in the input image, making it powerful for detecting horizontal text. Results: The system scores an F1 of 71.9% over the combined detection and recognition task. Additional information: OCR verification on handwriting remains to be improved.

[22] Project: Intelligently identify invoice information based on template matching. Method: Optical character recognition (OCR) is used to transform the image information into text so that the derived information can be used directly. Results: High precision of 95% and an average runtime of 14 milliseconds. Additional information: Information including amounts, goods, and purchaser was identified accurately.

[39] Project: An automatic approach to classify invoices into three types: handwritten, machine-printed, and receipts. Additional information: This approach is recommended as a pre-processing step for OCR systems.

Beyond invoice recognition, other areas have benefited from the implementation and advances of machine learning, such as computer vision and object recognition, prediction, semantic analysis, natural language processing, and information retrieval [22]. The techniques available for these solutions include Decision Trees, Random Forests, Artificial Neural Networks (ANN), Support Vector Machines (SVM), and Bayesian networks. The accuracy reported for each system is given in Table 3. Based on the findings, OCR is the preferred method for recognising invoice documents and extracting features to automate journal entry. Various methods and ML classifiers are involved in ensuring that OCR produces accurate output. This is vital for invoice automation, since the extracted data feed the automated journal entries and the classification of journals; hence a great deal of research has been done to increase the accuracy of OCR functionality (Table 3). With respect to the research objective, which focuses on the AI approach to automating bookkeeping, both invoice processing and journal entry rely on the OCR process. Good OCR accuracy yields better template matching and character recognition, which in turn feed the subsequent automatic journal entry and journal classification. The automation of invoice processing and journal entry with respect to AI and ML is summarised in Table 4, according to type and usage. Based on Table 4, three types of ML are identified: supervised learning, unsupervised learning, and reinforcement learning. Use cases and subcategories are provided to differentiate each type of learning.

To further strengthen the findings of the study, three case studies were reviewed, namely CS-A, CS-B and CS-C. Based on this comparative study, the elements and benefits of automated bookkeeping in SMEs were identified (Table 5). A few factors were identified in implementing automated bookkeeping based on case studies in the United Kingdom and the United States. Most of the firms agreed that implementing invoice automation and journal entry automation made their daily tasks much easier and reduced their workloads. In addition, implementing automated bookkeeping saves time and shortens reporting analysis. A real-time process of invoicing and journal entry has made a substantial difference to the companies' growth.
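Since the reviewed systems feed OCR-extracted invoice data into automated journal entries, a minimal sketch of that hand-off is given below. The account names, keyword rules, and tax treatment are illustrative assumptions only; production systems classify expenses with trained ML models rather than a hard-coded keyword map.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class JournalLine:
    account: str
    debit: float = 0.0
    credit: float = 0.0


# Hypothetical keyword-to-account map; a real system would learn this mapping.
ACCOUNT_RULES = {
    "stationery": "Office Supplies",
    "hosting": "IT Expenses",
    "fuel": "Travel Expenses",
}


def classify_expense(description: str) -> str:
    """Pick an expense account from the invoice description."""
    text = description.lower()
    for keyword, account in ACCOUNT_RULES.items():
        if keyword in text:
            return account
    return "General Expenses"  # fallback account


def purchase_invoice_to_journal(description: str, net: float, tax: float) -> List[JournalLine]:
    """Turn one captured purchase invoice into balanced double-entry lines."""
    lines = [
        JournalLine(classify_expense(description), debit=net),
        JournalLine("Input Tax", debit=tax),
        JournalLine("Accounts Payable", credit=net + tax),
    ]
    # Double-entry sanity check: debits must equal credits.
    assert abs(sum(l.debit for l in lines) - sum(l.credit for l in lines)) < 1e-9
    return lines


if __name__ == "__main__":
    for line in purchase_invoice_to_journal("Web hosting, May", net=100.0, tax=6.0):
        print(line)
```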
Based on the case studies, a company benefits considerably in terms of increased productivity and a higher growth rate, and it can focus more on its business goals because reporting analysis can be done weekly rather than at the longer intervals possible before automation. In financial auditing, auditors can retrieve the required file from the archive without having to refer to the managers. Having identified the challenges faced and the benefits gained by the SMEs in these case studies, the same approach could be adopted by Malaysia's SMEs to achieve better accounting management, especially in record keeping. The Malaysian government should play a role in offsetting the cost to SMEs of early implementation: support such as subsidies and tax reductions could be introduced in the early phase of implementing automated functions in SMEs. This would remove SMEs' dependency on outsourcing accounting tasks. Record keeping plays an important role in determining a company's direction and financial flow in every organization's decision-making. With the injection of AI, automated bookkeeping has made such systems more reliable and less dependent on humans, especially in handling errors. Automated bookkeeping will also benefit accountants and auditors in handling taxes. It is necessary to apply recent accounting technology with understanding and to explore more effective ways of implementing it [41].

IV. CONCLUSION

Based on this preliminary study, we can conclude that automated bookkeeping systems play a positive role in SMEs' performance and help enterprises overcome the challenges they face. The systems help SMEs increase their growth rate and keep their in-house records effectively. Work on AI methods and technology to increase the accuracy of these processes is ongoing across the globe. The challenges discussed in this study mainly concern knowledge constraints, but cost constraints also play a major part in adopting AI technology. The cost of implementing technology in small enterprises, especially in Malaysia, cannot be denied; on the brighter side, automated bookkeeping simplifies the initial processes of in-house financing in every business sector. This limitation could be overcome with the support of the Malaysian government, which regularly supports SMEs through various fund-injection programmes. Hence, more research should be conducted to investigate the role the Malaysian government could play in encouraging automation in Malaysian SMEs. The readiness of Malaysian organizations to implement automated functions is also something to examine in future studies, and functions such as double-entry bookkeeping and tax management are further candidates for automation. The case studies in this paper were reviewed to understand the benefits of implementing automation in SMEs, which can be adopted by Malaysian SMEs as value-added functions.
6,427.4
2021-09-13T00:00:00.000
[ "Computer Science", "Business" ]
Genetic data sharing and artificial intelligence in the era of personalized medicine based on a cross‐sectional analysis of the Saudi human genome program The success of the Saudi Human Genome Program (SHGP), one of the top ten genomic programs worldwide, is highly dependent on the Saudi population embracing the concept of participating in genetic testing. However, genetic data sharing and artificial intelligence (AI) in genomics are critical public issues in medical care and scientific research. The present study aimed to examine the awareness, knowledge, and attitude of Saudi society towards the SHGP, the sharing and privacy of genetic data resulting from the SHGP, and the role of AI in genetic data analysis and regulation. Results of a questionnaire survey with 804 respondents revealed moderate awareness of and attitude towards the SHGP and minimal knowledge regarding its benefits and applications. Respondents demonstrated a low level of knowledge regarding the privacy of genetic data. A generally positive attitude was found towards the outcomes of the SHGP and genetic data sharing for medical and scientific research. The highest level of knowledge was detected regarding AI use in genetic data analysis and privacy regulation. We recommend that the SHGP's regulators launch awareness campaigns and educational programs to increase and improve public awareness and knowledge regarding the SHGP's benefits and applications. Furthermore, we propose a strategy for genetic data sharing which will facilitate genetic data sharing between institutions and advance Personalized Medicine in the diagnosis and treatment of genetic diseases.

Subject recruitment. The electronic questionnaire included an introduction to the study's aims, a statement of the importance of voluntary participation, and a consent statement. The questionnaire was distributed via different social media platforms in Saudi Arabia, including Twitter, WhatsApp and Telegram. Saudis are very active on these platforms; for example, they ranked seventh in the world in terms of Twitter users (12.7 million). All Saudi citizens aged ≥ 18 years were targeted to participate in the study. More than 844 responses were received, and the exclusion criteria were (a) non-Saudi nationality, (b) age below 18 years, and (c) incomplete responses.

Study instruments. The questionnaire was designed and validated, and the electronic format was created using Google Forms. The validated version of the survey consisted of six sections: (1) social and demographic information, including age, gender, educational level, and nationality; (2) participants' awareness of genetic diseases (6 items); (3) participants' awareness of the SHGP (8 items); (4) Saudi citizens' knowledge of and attitude toward the genetic data privacy of the SHGP (9 items); (5) attitude toward the use of AI in genomics and in managing the privacy of genetic data (6 items); and (6) attitude toward sharing genetic data in scientific research (2 items).

Statistical analysis. All responses were imported and categorized into Excel spreadsheets for descriptive and statistical analyses. The statistical software programs SAS (version 9.4) and SPSS (version 25) were used to perform t-tests and ANOVA to analyse several variables of interest, including the level of public knowledge and awareness regarding the SHGP, genetic data privacy/sharing, and AI use.
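The survey's comparisons were run in SAS and SPSS; the minimal sketch below, using Python's scipy.stats on synthetic scores, illustrates the same kinds of two-sample t-test and one-way ANOVA reported later. The group means, sample sizes, and random seed are invented for illustration and do not reproduce the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical awareness scores grouped by gender and by education level.
female_scores = rng.normal(5.0, 1.0, 450)
male_scores = rng.normal(4.8, 1.0, 354)

# Two-sample t-test (e.g. awareness of genetic diseases by gender).
t_stat, p_gender = stats.ttest_ind(female_scores, male_scores)
print(f"t = {t_stat:.2f}, p = {p_gender:.4f}")

# One-way ANOVA across four education levels (e.g. attitude toward AI use).
groups = [rng.normal(mu, 1.0, 200) for mu in (4.6, 4.7, 4.9, 5.0)]
f_stat, p_edu = stats.f_oneway(*groups)
df_between = len(groups) - 1
df_within = sum(len(g) for g in groups) - len(groups)
print(f"F({df_between}, {df_within}) = {f_stat:.2f}, p = {p_edu:.4f}")

# Flag significance at the study's alpha = 0.05 threshold.
print("significant" if p_edu < 0.05 else "not significant")
```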
Statistical significance was considered at a P value of less than 0.05 for all analyses.

Excluded responses. We excluded 40 respondents who chose "non-Saudi" because we could not confirm whether they lived in the SA.

Informed consent statement. Informed consent was obtained from all subjects involved in the study.

Awareness of genetic diseases among participants. The SHGP was launched to study the causes of the high prevalence of genetic disorders and to detect rare inherited diseases among Saudi citizens. Therefore, we investigated the level of public awareness of different aspects of genetic diseases in SA, as shown in Table 2 and Supplementary Fig. 2. Approximately 74.3% of study participants were aware of the high prevalence of genetic diseases among Saudis. Almost all participants (93.8%) knew that genetic diseases negatively affect affected individuals and their families. Most participants (90.2%) were aware of the role of consanguinity in increasing the incidence of genetic disease. Interestingly, only 19.8% of participants had undergone genetic testing, but nearly all participants (95.6%) had a positive attitude toward, and high awareness of, the importance of pre-marital screening in reducing the prevalence of inherited diseases. Further analysis revealed that overall awareness of genetic diseases was significantly higher in females than in males (p = 0.0094), as shown in Table 3.

Awareness and attitude toward the SHGP. Despite the massive media campaign about the SHGP launched in 2021, only 40.5% of study respondents had heard of the SHGP, as shown in Table 4. Moreover, 73.8% of participants were not aware of the benefits and applications of the SHGP. The vast majority of participants (82.1%) assumed that the SHGP would document the first genetic map of Saudi citizens. Approximately 86.3% of respondents chose "yes" for the possible contribution of the SHGP to gene therapy development. Furthermore, 87.2% of participants had a positive attitude toward the contribution of the SHGP to the localization of genomic techniques and genetic research. Only 4.6% of participants were among the sample donors in the program, but 68.8% were willing to participate. More than 80% of participants were optimistic about the contribution of the SHGP.

Knowledge and attitude toward genetic data privacy of the SHGP. Nine items in the survey questionnaire focused on examining the level of knowledge of and attitudes toward the genetic data privacy of the SHGP (Table 5). Interestingly, there was uncertainty regarding the level of knowledge of the importance of the privacy and security of genetic data, as the responses were divided between the lowest level (28.4%), medium level (20.6%), and highest level (26.2%); the remainder were not sure. A majority of participants (79.7%) expressed the highest level of support for obtaining a patient's consent before sharing their genetic data. Similarly, the highest level of attitude and support was reported regarding the need for a general policy for the privacy of genetic data (78.1%). Importantly, most participants (75.4%) showed the highest level of positive attitude toward the importance of organizing seminars to introduce knowledge related to the privacy and security of genetic data. Positively, most participants supported genetic data sharing in scientific and medical research and the establishment of a national policy to protect genetic data privacy when data are shared between Saudi institutions (Fig. 1).

Attitudes toward the use of AI in the privacy of genetic data.
As massive amounts of genetic data are generated and become big data, rapid and accurate analysis requires AI to provide clinical reports for health diagnoses and other related tasks in research and medicine. Thus, we investigated the attitudes and opinions of Saudi society about the involvement of AI in the privacy of genetic data and its role in SHGP data analysis (Table 6). Surprisingly, 92.8% of participants agreed that AI could be used to analyse genetic data. Furthermore, most participants (80.6%) agreed that AI could contribute to solving genetic disorders. A vast majority of participants (90.7%) agreed that AI technologies could provide solutions to ensure the privacy of genetic data. Most participants (88.8%) agreed with employing AI in managing the privacy of genetic data. However, the participants were divided regarding the threat posed by AI use to the privacy of genetic data, as 41.2% chose "agree" and 58.8% chose "do not agree". Positively, 90.7% of participants agreed that AI could be used in the SHGP. The statistical analysis showed that the attitude toward using AI in the SHGP differed significantly by educational level (F (3, 801) = 4.68, p = 0.0030). Participants with a postgraduate degree (p = 0.0110, M = 5.043) had a more positive attitude toward using AI in the privacy of genetic data and the SHGP than those with a bachelor's degree (M = 4.574). Moreover, there was a statistically significant difference by marital status (F (3, 801) = 7.28, P < 0.0001): married participants (P < 0.0001, M = 5.040) had a more positive attitude toward the use of AI in the privacy of genetic data and the SHGP than single participants (M = 4.574). Furthermore, specific age groups differed significantly (F (4, 800) = 4.35, P = 0.0018): participants aged 38 to less than 48 years (P = 0.0100, M = 5.055) had a more positive attitude toward the use of AI in the privacy of genetic data and the SHGP than those aged 18 to less than 28 years (M = 4.666).

Discussion The SHGP was recently established to detect and study the causes of genetic disorders. In this study, we found that most participants were aware of the high prevalence of genetic diseases among Saudis (Table 2). Most participants considered consanguinity a factor in genetic diseases. Nearly all participants had a positive attitude toward, and sufficient awareness of, pre-marital screening in reducing the prevalence of inherited diseases. These results are consistent with our previous study and with other reports showing that the Saudi community has a high level of awareness of genetic testing [28][29][30][31] . One possible reason for these positive findings is that in 2002 the Saudi government passed a law requiring pre-marital genetic testing 32 . Interestingly, the results of our study also revealed that females had significantly higher awareness of genetic diseases than males. We then examined the awareness of and attitude toward the SHGP and found inadequate awareness of the SHGP and its benefits and applications (Table 4). Thus, there is a need for greater efforts to educate people about the SHGP and the human genome in general. Furthermore, we documented that a high percentage of participants assumed that the SHGP would establish the first genetic map of the Saudi community. There is a positive attitude among the responses regarding the contribution of the SHGP to gene therapy and the localization of genomic techniques.
Moreover, the responses showed encouraging results (68.8%) in willingness to participate in the SHGP sample collection initiative. The participants were generally optimistic about the SHGP outcomes, potentially lowering the prevalence of genetic diseases and their negative impacts. In addition, the analysis revealed that knowledge of and attitudes toward the SHGP did not differ significantly by marital status or age. However, there was a significant correlation between educational attainment and awareness level, as people with postgraduate degrees were more aware of the SHGP than those with bachelor's degrees.

Regarding the level of knowledge of and attitude toward genetic data privacy and the management of the SHGP data, an insufficient level of knowledge was reported (Table 5). The participants did not have enough knowledge regarding the process of preserving and managing genetic data, and nearly half did not know which institutions are responsible for storing genetic data in the SA. Regarding genetic data privacy and security, uncertainty and a low level of knowledge were detected among respondents. A high rate of concern about patient privacy was reported, as most participants called for informed consent before their genetic data are shared. Similarly, the highest level of attitude and support was detected for applying a general policy to genetic data privacy. Importantly, most responses exhibited the highest level of positive attitude toward the importance of organizing seminars to introduce knowledge related to the privacy and security of genetic data. We noticed some contradictory responses to a few questions related to genetic data privacy and genetic data sharing. For instance, 43.8% of participants did not know which institutions are responsible for storing Saudi genetic data, while 33.6% of them chose the highest level of knowledge regarding the management of genetic data with high privacy in the SA (Table 5). These contradictions could be a result of the low level of knowledge and awareness of these issues among the participants. Positively, most participants supported genetic data sharing in scientific and medical research and the establishment of a national policy to protect the privacy of genetic data when shared between Saudi institutions (Fig. 1). We found that the public supports genetic data sharing if privacy and personal information are secured. Consistent with this, a study conducted in Riyadh, the SA, showed that 78.4% of the participants were in favour of building a database of hereditary diseases managed by the government 28 . However, several reports have shown that the public is always concerned about data misuse and about being identified and stigmatized with genetic diseases 23,24,26,27 . For example, surveys conducted in Pennsylvania (the United States) and Bavaria (Germany) about Personalized Medicine showed that most participants were worried about genetic data misuse 33 . Notwithstanding, the general public trusts researchers in the hope of finding cures for complex diseases. Based on these findings, we propose a strategy for sharing the SHGP data that ensures the privacy and security of genetic data (Fig. 2). The sharing of genetic data will broaden opportunities for researchers and medical practitioners to accelerate gene therapy discovery, improve the diagnosis of genetic diseases and develop personalized medicine for patients 27,34,35 .
Consistent with this idea, other investigators have called for establishing a national genomic data-sharing policy in the SA that allows data to be freely shared among institutions to enhance biomarker discovery and computational biology analysis, thereby improving the treatment of genetic disease complications 34,35 . A lack of a genetic data-sharing policy will limit the use, access, and analysis of the SHGP data. A genetic data-sharing policy would regulate the privacy of genetic data if shared with a third party and how the data are shared. The policies should also regulate how genetic data are collected, stored, and provided in their legal state.

We further investigated public attitudes toward the use of AI in the analysis of genetic data and privacy regulation in the SHGP (Table 6). We found the highest positive attitude toward AI use in genetic data analysis. Furthermore, most participants trusted the ability of AI to solve genetic disorders. In terms of the privacy of genetic data, a vast majority of responses indicated that AI technologies could ensure and manage privacy. However, the participants had divided opinions regarding the threat of AI use in privacy regulation. Almost all participants had positive attitudes toward the use of AI in the SHGP. Furthermore, our statistical analysis revealed that the attitude toward using AI in the SHGP differed significantly by educational level (F (3, 801) = 4.68, p = 0.0030). Participants with a postgraduate degree (p = 0.0110, M = 5.043) had a more positive attitude toward employing AI in the privacy of genetic data and the SHGP than those with a bachelor's degree (M = 4.574). Moreover, people aged 38 to less than 48 years (P = 0.0100, M = 5.055) had a more positive attitude toward the use of AI in the privacy of genetic data. Surprisingly, participants showed a higher level of positive attitude and knowledge toward AI applications than toward the SHGP and its benefits. This result could be because AI is currently trending in the SA; more specifically, the government has established the Saudi Data and Artificial Intelligence Authority (SDAIA), and several media campaigns have presented information about AI and its applications 36 . The proposed strategy for genetic data sharing of the SHGP (Fig. 2) comprises four steps: first, a national policy for genetic data sharing should be established; second, advanced technologies should be used to ensure genetic data security and privacy; third, laws governing genetic data regulation must be enforced; and finally, national awareness campaigns and educational programs should be launched among clinicians, physicians, researchers, and the general public. Despite some concerns about AI use with health care and genomic data, such as inaccuracies, discrimination, and bias in the database, AI algorithms will revolutionize genomics and proteomics data analysis, improving precision medicine in genetic disease diagnosis and treatment 23,37 . AI algorithms, more specifically deep-learning-based algorithms, are currently being employed in clinical diagnosis and in the analysis of complex, large-scale genomic databases. However, AI-based algorithms may require huge databases for training to improve genomic data analysis and drug discovery. Therefore, genetic data sharing will definitely improve the use of AI in the SHGP and Personalized Medicine. Furthermore, AI and privacy technologies could provide solutions for genetic data sharing, for example cryptography, differential privacy and other approaches 23,24,38,39 .
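One of the privacy technologies mentioned above, differential privacy, can be sketched very simply for an aggregate genetic query. The cohort count, carrier count, and epsilon values below are hypothetical, and real genomic data sharing would require carefully audited mechanisms rather than this toy Laplace-mechanism example.

```python
import numpy as np


def laplace_count(true_count: int, epsilon: float, rng=None) -> float:
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person changes the
    count by at most 1), so noise is drawn from Laplace(scale = 1 / epsilon).
    """
    rng = rng or np.random.default_rng()
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    carriers = 137  # hypothetical number of carriers of a variant in a cohort
    for eps in (0.1, 1.0, 10.0):
        noisy = laplace_count(carriers, eps, rng)
        # Smaller epsilon means stronger privacy and therefore more noise.
        print(f"epsilon={eps:<4} noisy count = {noisy:.1f}")
```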
In the current study, we analysed and assessed Saudi public awareness, knowledge, and attitudes toward the SHGP, genetic data privacy, and the role of AI in the management of privacy and the analysis of genetic data. To the best of our knowledge, this study is the first population-based survey of Saudi public awareness and knowledge of the SHGP. We anticipate that the outcome of this study can help decision-makers involved in SHGP management and genetic data regulation to plan public communication strategically, implement SHGP findings, and establish a national genetic data-sharing policy.

Conclusion This study provides insights regarding Saudi society's awareness, knowledge, and attitude towards the SHGP, the sharing and privacy of genetic data resulting from the SHGP, and the role of AI in managing privacy and analysing genetic data. We reported moderate awareness of and attitude towards the SHGP and minimal knowledge regarding its benefits and applications. In addition, a low level of knowledge was observed regarding the sharing and privacy of genetic data. A generally positive attitude was found towards the outcomes of the SHGP and genetic data sharing for medical and scientific research. Furthermore, the highest level of knowledge was detected regarding AI use in genetic analysis and privacy regulation. We identified gender, marital status and educational level as important factors in public awareness and knowledge of the SHGP. Furthermore, we proposed a strategy for genetic data sharing in Saudi Arabia. We recommend that awareness campaigns and educational programs be launched by the institutions that manage the SHGP to increase and improve public awareness and fill the knowledge gaps regarding these issues.
4,314.8
2022-01-26T00:00:00.000
[ "Medicine", "Computer Science" ]
Ultra-high resolution X-ray structures of two forms of human recombinant insulin at 100 K The crystal structure of a commercially available form of human recombinant (HR) insulin, Insugen (I), used in the treatment of diabetes has been determined to 0.92 Å resolution using low temperature, 100 K, synchrotron X-ray data collected at 16,000 eV (λ = 0.77 Å). Refinement carried out with anisotropic displacement parameters, removal of main-chain stereochemical restraints, inclusion of H atoms in calculated positions, and 220 water molecules, converged to a final value of R = 0.1112 and Rfree = 0.1466. The structure includes what is thought to be an ordered propanol molecule (POL) only in chain D(4) and a solvated acetate molecule (ACT) coordinated to the Zn atom only in chain B(2). Possible origins and consequences of the propanol and acetate molecules are discussed. Three types of amino acid representation in the electron density are examined in detail: (i) sharp with very clearly resolved features; (ii) well resolved but clearly divided into two conformations which are well behaved in the refinement, both having high quality geometry; (iii) poor density and difficult or impossible to model. An example of type (ii) is observed for the intra-chain disulphide bridge in chain C(3) between Sγ6–Sγ11, which has two clear conformations with relative refined occupancies of 0.8 and 0.2, respectively. In contrast, the corresponding S–S bridge in chain A(1) shows one clearly defined conformation. A molecular dynamics study has provided a rational explanation of this difference between chains A and C. More generally, differences in the electron density features between corresponding residues in chains A and C and chains B and D are a common observation in the Insugen (I) structure, and these effects are discussed in detail. The crystal structure, also at 0.92 Å and 100 K, of a second commercially available form of human recombinant insulin, Intergen (II), deposited in the Protein Data Bank as 3W7Y, which remains otherwise unpublished, is compared here with the Insugen (I) structure. In the Intergen (II) structure there is no solvated propanol or acetate molecule. The electron density of Intergen (II), however, does also exhibit the three types of amino acid representation seen in Insugen (I). These effects do not necessarily correspond between chains A and C or chains B and D in Intergen (II), or between corresponding residues in Insugen (I). The results of this comparison are reported.

Graphical abstract: Conformations of PheB25 and PheD25 in three insulin structures: implications for biological activity? Insulin residues PheB25 and PheD25 are considered to be important for insulin receptor binding, and changes in biological activity occur when these residues are modified. In porcine insulin and Intergen (II), PheB25 adopts conformation B and PheD25 conformation D. However, unexpectedly, PheB25 in Insugen (I) human recombinant insulin adopts two distinct conformations corresponding to B and D (Figure 1), and PheD25 adopts a single conformation corresponding to B, not D (Figure 2). Conformations of this residue in the ultra-high resolution structure of Insugen (I) are therefore unique within this set. Figures were produced with Biovia Discovery Studio 2016. Electronic supplementary material: The online version of this article (doi:10.1186/s13065-017-0296-y) contains supplementary material, which is available to authorized users.
Introduction A definitive account of the 1.5 Å resolution structure (PDB 4INS) of hexagonal porcine insulin, which differs in sequence by only one amino acid at B30 (and D30) from human insulin ( Fig. 1), was published by Baker et al. [1]. Success in the use of pig insulin to control diabetes ultimately lies in its ability to mimic the activity of the human form, which is a consequence of near perfect structural isomorphism. However, the use of nonhuman forms of insulin to control diabetes is known to lead to both allergic reactions and other complications resulting from antibody production in some patients [2]. For this reason the use of recombinant forms of human insulin which have now been developed is becoming more commonplace, on the assumption that their structure-function properties are even more closely related to the natural hormone. There are 2 independent molecules in the asymmetric unit of the crystal structure of hexagonal porcine insulin [1]: molecule 1 comprising peptide chains A1 and B1, and molecule 2, comprising peptide chains A2 and B2 (the 4 chains are now usually designated A, B, C and D). Peptide chains A and C are identical in sequence, as are chains B and D. Chains A and B, and chains C and D are linked by disulphide bridges Cys7A-Cys7B, Cys7C-Cys7D, Cys20A-Cys19B and Cys20C-Cys19D, respectively. Chain A also has an internal stabilizing disulphide bridge Cys6A-CysA11 and there is a corresponding S-S bridge in chain C, Cys6C-CysC11. As shown in Additional file 1: Figure S1 there are 3 AB and 3 CD dimers in the unit cell grouped around a crystallographic threefold axis. In the 2Zn crystals, three non-crystallographic insulin dimers are assembled around two Zn ions on the threefold axis. Each Zn ion is coordinated to three symmetry-related Nε atoms of HisB10 and to three water molecules. Water oxygen atoms (282) were also assigned and included in the refinement which converged to a value of R = 0.153 for 10,119 significant I obs (hkl). Seven of the amino acid side-chains were assigned less ordered conformations, refined with separate atomic coordinate sets and occupancy factors. Commercial human recombinant insulin is now available from several sources. The present study describes the ultra-high resolution (0.92 Å) low temperature structure of Insugen (I) human recombinant insulin, Fig. 1 and Additional file 1: Figure S2a. The unpublished structure of a second recombinant form of human recombinant insulin, from Intergen, at the same resolution, deposited as structure 3W7Y in the Protein Data Base (in June 2013) shows a number of surprising differences when compared with the Insugen (I) structure reported here. These two structures will be referred to as Insugen (I) and Intergen (II). The Insugen (I) and Intergen (II) 2Zn hexagonal HR insulin structures are predominantly isomorphous with that of porcine 2Zn insulin [1]. In both of these new structures the A and B-chains of molecule 1 are in the T-state [3]. Implications for biological activity HR insulin, Fig. 1, is currently used by the majority of insulin dependent diabetic patients, porcine insulin having been phased out some years ago [2]. The safe therapeutic use of genetically engineered human insulin depends on its structure being absolutely identical to that of the natural molecule, thereby reducing the possibility of complications resulting from antibody production. It has been noted that the use of human recombinant insulin in combination with other drugs may blunt the signs and symptoms of hypoglycaemia [2]. 
It has been reported [4] that several regions of the insulin molecule are closely related to its biological activity. These include: (a) the positions of the Cys residues that form disulphide bridges; (b) the N-terminal (A1-A5) of the A-chain; moreover the hydrophobic core of vertebrate insulins contains an invariant isoleucine residue at position A2. Lack of variation may reflect this side-chain's dual contribution to structure and function: IleA2 is proposed both to stabilize the A1-A9 α-helix, see Fig. 4b, and to contribute to a "hidden" functional surface exposed on receptor binding. In fact GlyA1 and IleA2 are stabilized by a network of aqueous H-bonds involving some 18 water molecules in Insugen (I) (see "Results"; Additional file 1: Figure S5a). Also in "Results", Additional file 1: Figures S5b, c show similar networks in Intergen (II) using the deposited 3W4Y and porcine insulin using the deposited 4INS pdb file. Additional file 1: Figures S5c, d and e show end on views of these networks. Substitution of IleA2 by alanine results in segmental unfolding of the A1-A8 α-helix, lower thermodynamic stability and impaired F binding [5]; (c) (e) moreover crystallographic analysis of the insulin molecule has suggested that the structure comprising both ends of the A-chain (GlyA1, GlnA5, ThrA19 and AsnA21) plus B-chain residues ValB12, ThrB16, GlyB23, PheB24 and PheB25 is important for insulin receptor binding [6]; (e) in addition to the invariant cysteines, only ten amino acids (GlyA1, IleA2, ValA3, TyrA19, LeuB6, GlyB8, LeuB11, ValB12, GlyB23 and PheB24) have been fully conserved during vertebrate evolution [7]; this observation supports the hypothesis derived from alanine-scanning mutagenesis studies that five of these invariant residues (IleA2, ValA3, TyrA19, GlyB23, and PheB24) interact directly with the receptor and five additional conserved residues (LeuB6, GlyB8, LeuB11, GluB13 and PheB25) are important in maintaining the receptor-binding conformation [7]. Baker et al. [1] in the definitive account of the 1.5 Å X-ray structure of 2Zn porcine insulin, concluded that the major flexibility observed at the A-chain N terminus residues A1-A6, and the B-chain C terminus residues B25, B28, B29 and B30 may be important for the expression of insulin activity, especially in view of the rigidity of the rest of the structure. Baker et al. [1] also point out that B25.1 Phe (PheB25) is turned in towards the A-chain whereas B25.2 Phe (PheD25) turns out away from the A-chain. A summary of the residues involved in these considerations of biological activity is given below in Fig. 2. Each residue of interest has been ranked according to the number of times it appears in the discussion: α (mentioned 4 times) to δ (mentioned once). Residues left blank in Fig. 2 are not thought to affect the biological activity. Positionally invariable cysteines forming the disulphide bridges have been designated α. 1 1 In the publication of Baker et al. [1] the pig insulin asymmetric unit is defined as: molecule 1 (chains A1, B1) and molecule 2 (chains A2, B2). For example residue B25.2 Phe refers to phenylalanine 25 in chain B of molecule 2. However in the PDB deposition of this structure, 4INS, molecule 1 is designated by chains A and B, and molecule 2 as chains C and D. All. pdb files referred to in the present publication follow this later format so B25. 
See also "General comments", "Peptide side chain electron density and conformations in Intergen (II) [PDB 3W7Y]", "Comments on the solvated propanol in Insugen (I)", "PheB24 and PheB25 in Insugen (I) and Intergen (II)" for further discussions of the implications of structure for biological activity. Materials Insugen (I) Human recombinant insulin (Insugen-30/70) was supplied by Biocon (India) Ltd. See Additional file 1: Table S1a. Human recombinant insulin, Intergen (II) was produced by the INTERGEN Company and purchased by Sakabe [8] from the SEIKAGAKU Company. Details are to be found in Additional file 1: Figure S2b. Other chemicals including HCl, zinc acetate, acetone, trisodium citrate and NaOH were purchased from Fisher Scientific (UK) and Sigma-Aldrich (UK). Crystallization of Insugen (I) The crystals were prepared at room temperature by a batch method similar to that of Baker et al. [1], modified as follows: 0.01 g of insulin as a fine powder was placed in a clean test tube; 1 mL of 0.02 M HCl was added to dissolve the protein; on addition of 0.15 mL of 0.15 M zinc acetate the solution became cloudy due to precipitation of the protein; 0.3 mL of acetone and then 0.5 mL of 0.2M trisodium citrate together with 0.8 mL of water were added and the solution became clear; the pH was checked and increased with NaOH to a pH between 8 and 9 for different batches, thus ensuring complete dissolution. It was then adjusted to the required value of pH 6.3. If any slight turbidity occurred, it was removed by warming the solution. The solution was then filtered using a Millipore membrane/acetate cellulose acetate filter. This removes any nuclei which will encourage precipitation or formation of masses of small crystals. The solution was then warmed to 50 °C by surrounding the test tube with preheated water in a Dewar. This allowed the solution to cool slowly to room temperature. The test tube was lightly sealed with cling film; crystals formed within a few days and were of a suitable size for X-ray diffraction within 2 weeks; the test tube containing crystals was kept at 4 °C prior to data collection. The crystal used for data collection was about 0.2 mm 3 . Crystallization of Intergen (II) The following details were supplied by Sakabe [8]. In contrast to the Insugen (I) crystals, Intergen (II) crystals were grown using the vapour diffusion hanging drop method at 293 K. The reservoir solution contained 0.1 M sodium citrate, and 22% (w/v) DMF, and 0.08% (w/v) zinc chloride, pH 8.67 while the protein solution was insulin, Intergen (II) dissolved in 0.02 N HCl to a final concentration of 10 mg/mL. The starting volume of the reservoir solution was 1 mL, and the volume of the drop was 20 μL of protein and reservoir solution in a 1:1 ratio. In 4 or 5 days, crystals were observed to have formed, and after 10 days to 2 weeks, insulin crystals of a size suitable for X-ray diffraction studies were present, typically about 0.5 mm × 0.5 mm × 0.3 mm. The crystal used for 3W7Y data collection was about 1.2 mm × 0.7 mm × 0.5 mm [8]. X-ray data collection Insugen (I) crystal at Diamond Light Source, MX beamline I02 Crystals grown at room temperature were passed through a 30% glycerol solution, prepared in mother liquor, prior to cryo cooling in liquid nitrogen. Crystals were screened with three test shots, separated by 45° using 0.5 s exposure and 0.5° oscillation. Data were collected at 16,000 keV (λ = 0.77 Å) and 100 K with the Pilatus 6 M detector as close to the sample as possible (179.5 mm). 
The EDNA strategy [9] was used to obtain a start angle, and 180° of data were collected with 0.1° oscillation and 0.1 s exposure. The resolution of useful diffraction data achieved and used for structure analysis was 0.92 Å. The space group is H3 (146) and the unit cell is a = b = 81.827 Å, c = 33.849 Å, α = β = 90°, γ = 120°. Further details can be found in Additional file 1: Table S1.

X-ray data collection for the Intergen (II) crystal at the Photon Factory beamline BL-6C (Ibaraki, Japan) The following details were supplied by Sakabe [8]. A synchrotron data set to 0.7 Å was collected at the Photon Factory beamline BL-6C using wavelength λ = 0.97974 Å. Data were measured on a specially designed Weissenberg-type instrument known as "Galaxy", employing a fully automated high-speed imaging plate detector. The beamline optics comprised a vertically focussing 1 m long bent mirror of Pt-coated fused silica at a distance of 21 m from the SR source point and 7 m from the focal point. The low-resolution limit was 50.0 Å and the high-resolution limit 0.7 Å; the completeness of the observed reflections was 91.73%; Rmerge for Iobs = 0.05579 for 57,006 reflections. The resolution of useful diffraction data achieved and used for structure analysis was 0.92 Å [10][11][12][13][14]. The space group is H3 (146); the unit cell is a = b = 81.120 Å, c = 33.930 Å, α = β = 90°, γ = 120°.

X-ray data processing for the Insugen (I) crystal Manual processing of the data was carried out using XDS [15] to integrate and Aimless [16] to scale and merge intensities. The purpose of manual scaling was to optimise the included data to maximise the final resolution to 0.92 Å.

Structure solution and initial refinement, Insugen (I) Molecular replacement was carried out with the published structure 3E7Y as a search model in the program MOLREP [17], followed by ten cycles of least squares refinement using the program REFMAC [18]. Further details can be found in Additional file 1: Table S1.

Presence of Zn in the Insugen (I) crystal A fluorescence MCA scan, Fig. 3, was carried out to confirm the presence of zinc in the crystals.

Model building and further least squares refinement, Insugen (I) Model inspection and rebuilding were performed using the program WinCoot 0.7 [19], and further isotropic refinement was carried out with the program PHENIX [20]. Water molecules were added at the end of refinement using the automated method provided in PHENIX. Refinement of the Insugen (I) crystal structure was continued using the program SHELX-97 interfaced with SHELXPRO [21]. This facilitated the overall inclusion of H atoms and the use of anisotropic temperature factors for the non-H atoms. (Fig. 2 caption: Analysis of residues in the porcine insulin structure of Baker et al. [1] which may be important factors involved in the biological activity; α indicates most likely and γ least likely to be active. The positionally invariable cysteines that form the disulphide bridges are also included as being very likely to be involved, rated α.) For the protein structure, H atoms initially assigned in calculated positions were refined with isotropic thermal parameters. H atoms were not assigned to the waters. During this phase of the analysis several residues were observed in the electron density to have ordered, clear double conformations; these were built into the structure and their relative occupancies, summing to 1.0, were included in the refinement. At the end of the SHELXPRO refinement the R factor and R free (all data) were 0.108 and 0.146, respectively.
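For readers unfamiliar with the quoted statistics, the short sketch below shows how a conventional crystallographic R factor and R free could be computed from observed and calculated structure-factor amplitudes. The amplitudes, noise level, and free-set fraction here are simulated for illustration; the actual values come from the SHELXL/REFMAC refinements described above, not from this snippet.

```python
import numpy as np


def r_factor(f_obs: np.ndarray, f_calc: np.ndarray) -> float:
    """R = sum| |Fobs| - |Fcalc| | / sum |Fobs| over the selected reflections."""
    return np.sum(np.abs(np.abs(f_obs) - np.abs(f_calc))) / np.sum(np.abs(f_obs))


# Simulated structure-factor amplitudes standing in for real refinement output.
rng = np.random.default_rng(1)
f_obs = rng.uniform(10, 1000, 50_000)
f_calc = f_obs * (1 + rng.normal(0, 0.12, f_obs.size))  # ~12% model-data discrepancy

# Hold out ~5% of reflections as the free set, never used in refinement.
free_mask = rng.random(f_obs.size) < 0.05
r_work = r_factor(f_obs[~free_mask], f_calc[~free_mask])
r_free = r_factor(f_obs[free_mask], f_calc[free_mask])
print(f"R_work = {r_work:.4f}, R_free = {r_free:.4f}")
```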
The program MolProbity [22] was used for structure validation. Inspection of the Ramachandran plot revealed that 97.53% of the residues are in allowed regions. All coordinates and data have been deposited in the Protein Data Bank, with identification code 5E7W. The final statistics of refinement are summarized in Table 1. Model building and further least squares refinement for Intergen (II) The structure for 3W7Y was determined by molecular replacement and refined using the program REFMAC [18]. Non-hydrogen atoms were refined anisotropically. Several residues were modelled as two clear conformers with complementary occupational parameters having a sum of 1.0. At the end of the refinement the R factor and R free were 0.162 and 0.180, respectively. Inspection of the Ramachandran plot revealed that 96.81% of the residues are in the allowed regions. All coordinates and data are deposited in the Protein Data Bank, with identification code 3W7Y. General comments Superficially the ultra-high resolution structure of HR insulin (Insugen I), as expected, strongly resembles that of 2Zn porcine insulin (see "Introduction") having an asymmetric unit with 2 independent molecules: molecule 1, comprising peptide chains A and B; and molecule 2, comprising peptide chains C and D. Peptide chains A and C are identical in sequence, as are chains B and D. As described below there are significant and interesting differences between the detailed ultra-high resolution structures of Insugen (I) and Intergen (II) and also between the two human recombinant insulin structures and the less detailed porcine insulin [1]. For example in the porcine insulin structure [1] 289 waters were assigned and in Intergen (II) 275. However after intense scrutiny and assessment 220 water molecules have been included and refined in the Insugen (I) structure. Further features of interest in the Insugen (I) structure are: (i) an acetate molecule ACT2101 (or simply ACT) has been assigned in the neighbourhood of Zn2100 in molecule 1 and is in fact coordinated with this Zn. This unexpected feature is described below and is presumably a consequence of the zinc acetate used in the crystallization procedure. The acetate molecule has excellent refinement parameters and geometrical features. To the best of our knowledge acetate has not been assigned to any other published insulin structure; further evidence for this assignment can be found in Additional file 1: Text S1 and Figure S3: (ii) a solvated propanol molecule has been assigned as described below in detail. The propanol molecule POL5001 (or simply POL) forms H-bonds with the prominent Oγ1A of ThrD27 located on the A conformation of ThrD27 which has two clearly defined conformations A and B, of which A has 0.645 occupancy and B 0.355 occupancy. POL is also H-bonded to water 6007. Further evidence for the assignment of propanol can be found in Additional file 1: Text S2 and Figure S4. There is no evidence of propanol solvate close to ThrB27 in chain B which has a single fully occupied conformation (see below). Intergen (II) shows no evidence of either acetate or propanol in the electron density for the deposited 3W7Y structure. To the best of our knowledge solvated propanol has not been reported as present in any other determined insulin structure. Possible origins of the solvated propanol are examined. As discussed below other differences occur between the two human recombinant insulin crystal structures. 
Such differences may ultimately be of importance with respect to the hormonal and biological activities of these synthetic therapeutics [2].

Description of the secondary structure regions in Insugen (I) The ultra-high resolution refinement of HR insulin, Insugen (I), undertaken in the analysis described above has enabled the secondary structure motifs in the insulin molecule to be studied in a level of detail that exceeds all previous studies.

Chain A (Fig. 4a) Helix A1 (Fig. 4a, b): This involves the first 9 residues, GlyA1-SerA9, and comprises about 2 turns of a distorted α-helix. Although GlyA1 involves a bifurcated H-bond and its (φ, ψ) values are indeterminate because it is N-terminal, this residue does seem to be part of the helix. SerA9 is at the C-terminal end of the helix, its side chain H-bonding to the peptide N of IleA10. Details are in Fig. 4b. Strand A2 (Fig. 4a): Strand A2 runs from IleA10 to SerA12 and forms an antiparallel sheet with strand B1 in the B-chain (see below). Note there is only one β-bridge, at CysA11. Helix A3 (Fig. 4a, c): This secondary structure involves LeuA13-TyrA19 and is a 7-residue 3₁₀ helix. The SerA12 side-chain caps the N-terminal end of the helix by H-bonding to the peptide N of GlnA15, whose side-chain in turn forms an H-bond to the N of SerA12. The carbonyl of SerA12 forms the first H-bond of the helix, but the (φ, ψ) values of SerA12 suggest it is part of the preceding strand and not this helix. Strand A4 (Fig. 4a): CysA20 and AsnA21 appear to form a mini strand and participate in an anti-parallel sheet with strand B4 (Fig. 6a) in the B-chain. The carbonyl oxygen of TyrA19 forms the first H-bond of the strand although it is part of the preceding helix. [Table 1: Data-collection and final refinement statistics. Values in parentheses are for the highest-resolution shell. Rmerge = Σhkl Σi |Ii(hkl) − ⟨I(hkl)⟩| / Σhkl Σi Ii(hkl), where Ii(hkl) and ⟨I(hkl)⟩ are the observed intensity and mean intensity of symmetry-related reflections, respectively.] Helix C3 (Fig. 5a, c): LeuC13-TyrC19 is a 7-residue 3₁₀ helix comprising about 2 turns. SerC12 caps the N-terminal end, with its side-chain forming an H-bond with the peptide N of GlnC15, while the side-chain of GlnC15 forms an H-bond with the peptide N of SerC12. Strand C4 (Fig. 5a): CysC20 and AsnC21 comprise a mini strand, and this forms an anti-parallel sheet with strand D4 in the D-chain (see below).

Chain B (Fig. 6a) Strand B1 (Fig. 6a): This comprises seven residues from PheB1 to CysB7, based on (φ, ψ) values. This strand forms an anti-parallel sheet with strand A2 in the A-chain. Central loop B3 (Fig. 6a): There is a type I turn from GlyB20 to GlyB23 and an open α-turn from CysB19 to GlyB23. Strand B4 (Fig. 6a): In terms of (φ, ψ) values, this strand could be considered to extend from PheB24 to ThrB30, but in terms of H-bonds in the sheet it ends at LysB26. It forms an anti-parallel sheet with D4. Note that strands A4 and C4 are part of this four-strand sheet.

Chain D (Fig. 7a) Strand D1 (Fig. 7a): Based on (φ, ψ) values this strand comprises seven residues from PheD1 to CysD7. It is perpendicular to strand C2 but does not form a sheet. There is only one H-bond, from the NH of LeuD6 to the CO of CysC6 of chain C, which is part of helix C1. Helix D2 (Fig. 7a, b): This is a 12-residue α-helix from GlyD8 to CysD19. Note CysD7 is part of strand D1, GlyD8 does not have helical (φ, ψ) values but does have a bifurcated H-bond, and CysD19 is helical. Strand D4 (Fig. 7a): This extends from PheD24 to TyrD26.
It forms a sheet with strand B4, and this sheet also comprises strands A4 and C4. Type I turn D5 (Fig. 7a, c): This is a type I turn comprising ThrD27, ProD28, LysD29 and ThrD30.

Solvent molecules. Solvated water molecules in Insugen (I): In the crystallographic asymmetric unit a total of 220 water molecule positions were assigned by stereochemical inspection and evaluation of the electron density displayed in WinCoot 0.7 [19]. These were included successfully in the SHELXL refinement with anisotropic thermal displacement parameters; water H atoms were fixed geometrically. Analysis of the hydrogen-bonding properties of the water molecules was carried out using Accelrys Discovery Studio 3 [23], which enabled the H-bond geometry to be tabulated. These results are summarised in Table 2, which shows the presence of a variety of H-bond types with acceptable molecular geometry involving different combinations of side-chain-water and water-water interactions. [Table 2: Types of H-bond involving water and their numbers (W-SC, water-side chain; W-W, water-water). For example, the entry 3,3 with N = 2 means that 2 water molecules each have a total of 3 hydrogen bonds to side-chain atoms plus 3 hydrogen bonds to other water molecules, i.e. 6 hydrogen bonds in total.] For a given water molecule the number of side-chain-water interactions varies from 0 to 7 and the number of water-water interactions from 0 to 5. A total of 285 side-chain-water H-bonds and 139 unique water-water H-bonds were observed. Figure 8 shows an example of a water molecule, water 6210, having 4 H-bonds to side-chain atoms and 2 H-bonds to other waters (6128 and 6209), denoted by type 4,2 in Table 2.

Salt bridges in Insugen (I): Residues involved in the six salt bridges observed in the Insugen (I) structure are listed in Table 3 together with the corresponding bridge lengths. Figure 9 shows the salt bridge between GLYA1:HOC and GLUA4:OE1.

Water-side chain interactions in Insugen (I): Of the 102 amino acid residues in Insugen (I), a total of 18 (2 in each of chains A and C, 8 in chain B, and 6 in chain D) do not form any hydrogen bond interactions with solvated water molecules. The residues common to two chains are as follows: Leu16 in both chains A and C is without water interactions, as are Leu11, Val12, Ala14, Leu15 and Cys19 in both chains B and D. The sequence LeuD11-ValD12-GluD13-AlaD14-LeuD15 is shown in Fig. 10; GluD13 is the only residue in this sequence which forms H-bonds with water molecules, i.e. W6034 with OE2 and W6036 with OE1.
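The water H-bond tabulation described above was performed with Accelrys Discovery Studio; the sketch below shows how a purely geometric, distance-only count of water contacts could be approximated from deposited coordinates. The 3.5 Å cutoff, the local file name, and the neglect of hydrogen positions and donor-acceptor angles are simplifying assumptions, so the counts would only roughly follow those in Table 2.

```python
import numpy as np

D_MAX = 3.5  # donor-acceptor distance cutoff in angstroms (no angle criterion)


def parse_coords(pdb_path):
    """Return coordinates of non-water N/O atoms and of water oxygens from a PDB file."""
    polar, waters = [], []
    with open(pdb_path) as fh:
        for line in fh:
            if line[:6] not in ("ATOM  ", "HETATM"):
                continue
            xyz = [float(line[30:38]), float(line[38:46]), float(line[46:54])]
            element = line[76:78].strip() or line[12:16].strip()[0]
            if line[17:20].strip() == "HOH":
                if element == "O":
                    waters.append(xyz)
            elif element in ("N", "O"):
                polar.append(xyz)
    return np.array(polar), np.array(waters)


def count_contacts(pdb_path):
    """Count, per water, contacts to polar protein atoms and to other waters."""
    polar, waters = parse_coords(pdb_path)
    d_wp = np.linalg.norm(waters[:, None, :] - polar[None, :, :], axis=-1)
    per_water_protein = (d_wp < D_MAX).sum(axis=1)
    d_ww = np.linalg.norm(waters[:, None, :] - waters[None, :, :], axis=-1)
    np.fill_diagonal(d_ww, np.inf)  # ignore each water's distance to itself
    per_water_water = (d_ww < D_MAX).sum(axis=1)
    return per_water_protein, per_water_water


# Example usage, assuming the deposited coordinates have been downloaded locally:
# protein_contacts, water_contacts = count_contacts("5e7w.pdb")
```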
Survey of the peptide side chain electron density and conformations in Insugen (I) and Intergen (II). Peptide side chain electron density and conformations in Insugen (I): It is well known that ultra-high resolution protein structures derived from X-ray diffraction data using cryo-cooled crystals often reveal amino acid residues which display more than a single ordered conformation; see, for example, Smith et al. [24] and Addlagatta et al. [25]. When such effects are observed it is possible that these harsh, high-speed experimental conditions have both caused and allowed these alternative structures to be captured for detailed examination. It is also possible that such alternative conformations may have a bearing on the biological activity of the protein. As described below, the present ultra-high resolution structures of human recombinant insulin Insugen (I) and Intergen (II) both display several amino acid residues having two distinct ordered conformations. As described in detail below, the same residues are not necessarily affected in corresponding protein chains in either the Insugen (I) or the Intergen (II) structure. Thus, somewhat surprisingly, the disordered regions do not match 1:1 between the two recombinant structures or between corresponding protein chains in the same structure. A detailed analysis and comparison is given below. It is possible that these structural features may affect the biological functions of these recombinant insulins [2]. Properties of the electron density for Insugen (I) are summarised in colour code in Fig. 11a and in further detail in Additional file 1: Tables S3a-d.

Insugen (I) chain A: The electron density of Insugen (I) chain A is of very high quality (mainly blue), with few problems associated with fitting the amino acid residue structures; only the C-terminal residue N21 exhibits a double conformation, with two weak regions of density at the end of the chain.

Insugen (I) chain C: In contrast, chain C exhibits the following characteristics: residues Q5, Y14 and Q15 have mainly good density but with some poorly defined regions; residues C6 and C11, participating in an S-S bridge, and L16 show clear electron density corresponding to double residue conformations with good geometry (orange). The remaining residues are clearly defined in strong electron density (blue).

Insugen (I) chain B: The electron density of Insugen (I) chain B exhibits the following characteristics: residues L11, V12, E13 and T27 have clear electron density with two distinct conformations (orange); residues Q4 and L17 show mainly clear double conformations but with some poor density at the extreme end; residue F25 clearly adopts two conformations, but both phenyl rings A and B occupy very weak regions of density; residues K29 and T30 are mainly clear single conformations but with some terminal disorder. The remaining residues are clearly defined in strong electron density (blue).

Insugen (I) chain D: In contrast, the electron density of Insugen (I) chain D can be described as follows: residues F1, V2, Q4, E21 and K29 have overall poorly defined electron density; residues V12 and V18 have clear double conformations (orange); residue T27 is mainly a clear double conformation but with some missing terminal density. The remaining residues are clearly defined in strong electron density (blue).

Overall comments on Insugen (I): For the Insugen (I) structure the following points may be considered. Fig. 11 Analysis of the correspondence of amino acid modelling and electron density quality in (a) Insugen (I) and (b) Intergen (II) HR insulins.
Fig. 11 caption: Analysis of the correspondence of amino acid modelling and electron density quality in (a) Insugen (I) and (b) Intergen (II) HR insulins. Colour codes: blue, excellent quality electron density with minimal problems for modelling a clear single conformation; orange, clear electron density with two distinct conformations modelled; red, poorly defined electron density with problems in fitting a meaningful structure; blue + red, single conformation modelled, mainly well defined but with some minor problems; orange + red, two distinct conformations modelled, mainly well defined but with some minor problems.

It may be possible to rationalise these differences, for example via molecular dynamics simulations. Why is chain A so well ordered while chains B, C and D show a number of double conformations or poorly defined residues?

Implications for the biological activity

The residues most likely to affect biological activity in an adverse way are those which display conformational differences between the corresponding chains A and C, or between chains B and D, particularly with respect to the way the residues have been rated in Fig. 2. It follows that the most likely residues are, by virtue of: (1) being disordered: PheB25, and to a lesser extent GlnC5, AsnA21, LysB29, LysD29 and ThrB30; and (2) exhibiting two clear conformations: CysC6-CysC11, LeuB11 and ValB12, and to a lesser extent LeuC16 and GluB13. The distribution of these residues in the crystal asymmetric unit is shown in Fig. 12. They clearly form two distinctly concentrated groups, possibly related to the mode of binding or interaction with the receptor.

Peptide side chain electron density and conformations in Intergen (II) [PDB 3W7Y]

Properties of the electron density for Intergen (II) are summarised in Fig. 11b.

Intergen (II) chain A

The electron density of Intergen (II) chain A is of very high quality, with no major problems associated with fitting the amino acid residue structures and no multiple conformations or other disorder.

Intergen (II) chain C

In contrast, chain C exhibits the following characteristics: residues 1-4, 7, 8, 12, 13, 16, 17 and 19-21 have clear, well defined density; residue Q5 has mainly clear density but with missing terminal density; residues S9, I10, N18 and C6-C11 are modelled as single conformations but are probably well ordered double conformations (all shown in orange in Fig. 11b); Y14 has very poor electron density and is fitted as Ala; Q15 also has very poor density and is disordered.

Intergen (II) chain B

Chain B exhibits the following characteristics: residue F1 has mainly clear density but with missing terminal density; residues 2-10, 13-26 and 28-30 have clear, well defined density; residues L11 and V12 are modelled as single conformations but are probably well ordered double conformations, whereas residue T27 is in clear, well defined density and is modelled as a double conformation but has missing terminal density (all three are shown in orange in Fig. 11b).

Fig. 12 caption: Distribution of residues possibly associated with receptor binding and biological activity of HR insulin, Insugen (I). The major concentration of residues occurs on chain B (blue), which includes the residue PheB25 discussed in the definitive account of the porcine insulin X-ray structure by Baker et al. [1]. In the Insugen (I) structure PheB25 occupies two distinct, well defined conformations, as shown here. It is of interest to note that in Intergen (II) HR insulin PheB25 has a clear, well defined conformation. Alternative conformations in Insugen (I) residues are coloured blue here. A minor group of residues occurs on chain C (coloured grey).
Drawn with Accelrys Discovery Studio 3 [23] Intergen (II) chain D Chain D exhibits the following characteristics: residues 2,3, 5-11, 13-20, 22-28, and 30 are all well defined in clear electron density; F1 is in clear but weak density; Q4 is largely well defined but has missing terminal density; V12 is modelled in a clear single conformation but is in density that strongly suggests it is disordered in two clear conformations (orange in Fig. 11b); E21 and K29 are poorly defined with weak density that does not include all atoms in the residue chains. Overall comments on Intergen (II) As for the Insugen (I) structure the following points can be made for Intergen (II). Why is chain A so well ordered while chain C shows a number of double conformations and poorly defined residues? Chain B shows one double conformation. There are no double conformations in chain D. Comparison of the Insugen (I) and Intergen (II) structures Referring to Fig. 11: 1. Both A-chains have mostly well-defined electron density with very few problems in their interpretation. 2. For the C-chains the only notable difference here lies in the assignment of a double conformation for the C6-C11 disulphide bridge in Insugen (I). As mentioned above the electron density for Intergen (II) in this region, Fig. 16, strongly suggests that it might be possible to model a double conformation here as well. 3. Comparison of the B-chains of Insugen (I) and Intergen (II): differences here occur for residues L11, V12 and R22 which have double conformations in Insugen (I) and T27 which has a double conformation in Intergen (II). Insugen (I) chain B also has problem residues E13, L17, E21, F25, T27, K29 and T30, which are well behaved in Intergen (II). Intergen (II) chain B has one residue T27 modelled as a double conformation but which is single in Insugen (I). There are a number of differences between Insugen (I) chain D and Intergen (II) chain D. In Insugen (I) F1, V2 and T27 all have problem electron density but are well behaved in Intergen (II); Q4, E21 and K29 have weak or poorly defined electron density in both structures; residues S9, V12 and V18 have double conformations in Insugen (I) but not in Intergen (II). General comments on Insugen (I) and Intergen (II) structures The above analysis has indicated that in both the Insugen (I) and Intergen (II) structures the sequence equivalent protein chains A and C, and B and D, respectively exhibit significant differences with respect to their corresponding amino acids such as double conformations and quality of the electron density. It is of interest to note that Baker et al. [1] in discussing the 1.5 Å X-ray structure of porcine insulin, report the presence of seven disordered amino acid residues: two in chain B (ArgB22 and LysB29) and five in chain D (GlnD4, ValD12, GluD21, ArgD22 and ThrD27). Of these only two amino acids in Insugen (I) ArgB22 and ValD12, have double conformations. The question of double conformations and poorly defined or absent electron density in the recombinant human insulin structures and the widespread lack of correspondence between the two raises two questions: (1) what is the origin of these differences? And (2) do they affect the therapeutic properties of these preparations? With respect to question (1) the possibilities include (a) method of preparation including folding of the recombinant amino acid-chains and (b) the forces in play when the crystal is cryo cooled prior to X-ray data collection. 
With respect to question (2) it is well known that differences in the form of a therapeutic insulin preparation with respect to the naturally occurring insulin can induce the production of antibodies in patients. No such indication has been noted with respect to the widespread use of either Insugen (I) or Intergen (II) but is nevertheless a possibility which should be borne in mind. Conclusions on the comparison between Insugen (I) and Intergen (II) structures Possible explanations for the observed bifurcation of chain C(3) Sγ6-Sγ11 disulphide are as follows: CysC6 is hydrogen bonded to a water molecule and there are several other waters modelled in this region which may be associated with greater conformational flexibility compared to CysA6. In addition CysA6 is in a hydrophobic pocket devoid of solvate molecules and consequently the disulphide may be more restricted by this environment. This is supported by the fact that the section of chain D close to chain A(1) Sγ6-Sγ11 disulphide is disordered (residues D1, 2 and 4) whereas the section of chain B close to chain C(3) Sγ6-Sγ11 disulphide is not, Fig. 11a and Additional file 1: Tables S3c, d. In "Molecular dynamics" the results of a molecular dynamics study of this observed order/disorder in the Sγ6-Sγ11 disulphides are presented. Intergen (II) structure: chain C(3) S-S bridge between Sγ6-Sγ11 Inspection of the deposited X-ray structure of Intergen (II) (3W7Y), indicates that no attempt was made to model CysC6-CysC11 in chain C in a double conformation. However the superposition of the refined Insugen (I) chain C with the 3W7Y chain C indicates that the [23] alternate conformation of the disulphide from residues CysC6-CysC11 is likely also to be present, but not modelled, in the 3W7Y structure. This is indicated by the presence of negative electron density (green) in the same position as the CysC11 γ sulphur atom in the second (minor) conformation and positive (red) electron density in the over modelled main conformation, Fig. 16. The possibility of this effect being accounted for by radiation damage in the Insugen (I) structure was investigated by closely inspecting the intensity data collected. This led to the conclusion that there is no global suggestion of radiation damage in the data. Next a number of subsets of data were integrated and scaled and the minimum set of data with acceptable completeness was assembled by using images 1-600 (the first third of the data). When solved and initially refined there was still evidence for the second conformation at this disulphide bond. As two clear conformations, rather than complete disorder have been assigned successfully it may be concluded that this is a reflection of the true state in the crystal, rather than radiation damage. Further examination of the difference in the disulphides A6-A11 (single ordered conformation) and C6-C11 (clear ordered double conformation) may be explained by the difference in solvent exposure. C6 is less than 4 Å from the nearest solvent molecule and there are several waters modelled in that area which may give greater conformational flexibility to the region. A6 is in a hydrophobic pocket and consequently the disulphide may be more restricted by that environment. This is supported both by the fact that the section of chain D in this vicinity of the part of the molecule is also disordered (see above). 
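Whether a deposited model such as 3W7Y contains explicit alternate conformations can be checked directly from the coordinate file. The sketch below uses Biopython to list residues carrying non-blank altloc identifiers together with their occupancies; the file name is an assumed local copy of the PDB entry, and, as discussed above, a residue absent from this listing (such as CysC11 in 3W7Y) may still be disordered in the electron density even though only a single conformation was modelled.

```python
from collections import defaultdict
from Bio.PDB import PDBParser

parser = PDBParser(QUIET=True)
# Illustrative file name: a local copy of the deposited Intergen (II) entry.
structure = parser.get_structure("intergen_ii", "3w7y.pdb")

double_conf = defaultdict(list)
for model in structure:
    for chain in model:
        for residue in chain:
            # get_unpacked_list() returns every altloc copy as a separate atom.
            for atom in residue.get_unpacked_list():
                if atom.get_altloc() not in (" ", ""):
                    key = (chain.id, residue.id[1], residue.get_resname())
                    double_conf[key].append(
                        (atom.get_name(), atom.get_altloc(), atom.get_occupancy()))

# Print residues modelled with alternate conformations and their occupancies.
for (chain_id, resseq, resname), atoms in sorted(double_conf.items()):
    print(chain_id, resseq, resname, sorted(atoms))
```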
Solvated propanol in Insugen (I) The ultra-high resolution Insugen (I) X-ray structure has been found to include an unexpected solvated propanol molecule (POL5001), Fig. 17a. This solvate forms H-bonds with the prominent Oγ1A of ThrD27 in chain D(4), water 6002 and water 6007. The electron density for this solvate is clear (Fig. 17a) and the geometry of the refined propanol is excellent (Fig. 17b). ThrD27 in the Insugen (I) structure is cleanly split into two parts A and B as can be seen in Fig. 17a. To the best of our knowledge no other insulin structure has been shown to include structurally ordered propanol. Figure 17c shows the propanol molecule in Insugen (I) lying in a binding pocket formed by rigid body movement of the first helix of chain C with respect to the structure of Baker et al. [1] (PDB 4INS). Interestingly, there is a similar Fig. 15 a Electron density in the ordered internal S-S bridge Insugen (I) chain A(1) between Sγ6 and Sγ11. Drawn with WinCoot 0.7 [19]. b Insugen (I) structure: chain A(1) S-S bridge between Sγ6 and Sγ11 showing the geometry of the ordered S-S bridge after refinement. Drawn with Accelrys Discovery Studio 3 [23] Fig. 16 Electron density in Intergen (II) (3W7Y) in the vicinity of the disulphide bridge in chain C CysC6-CysC11. The presence of green density (arrowed) suggests the existence of a second conformation, as in the Insugen (I) structure. This second, minor, conformation has not been modelled in deposited Intergen (II) structure. Drawn with WinCoot 0.7 [19] movement of chain A in spite of there being no propanol solvate in this region. The C-terminal of chain D is also displaced towards the propanol binding pocket, while on chain B the movement is in the opposite direction. Figure 18a shows the electron density in Insugen (I) in the vicinity of ThrB27 in chain B. There is no evidence of solvated propanol bound in this site. Similarly Fig. 18b shows Intergen (II) in the vicinity of chain D ThrD27 again with no propanol present, as is also the case for Intergen (II) chain B ThrB27. Comments on the solvated propanol in Insugen (I) It is interesting to note that Step 12 of US Patent Application Number US 13/032,797 [27] describes the use of n-propanol in a process for producing improved preparations and methods for manufacturing substantially liquid preparations of RH insulin API. It is possible that the manufacture of Insugen (I) has included a similar step and this is the origin of the bound propanol revealed in the ultra-high resolution X-ray structure described here. [19]. b Detail produced by Discovery Studio 3 (Accelerys) [23,35] showing the solvated propanol in Insugen (I) with respect to OG1A ThrD 27 and water 6007. c The solvated propanol in Insugen (I) with respect to Oγ1A Thr D27 and water 6007. Drawn with Discovery Studio 3 (Accelerys) [23,35] Intergen (II) 3W7Y: WINCOOT 0.7 [19] electron density in the vicinity of Thr D27. There is no evidence of solvated propanol in this site. The same applies to the Intergen (II) ThrB27 site It is possible that the presence of propanol in this insulin preparation may have consequences with respect to its biological/therapeutic characteristics [28]. Further evidence for the assignment of propanol in this pocket of electron density was obtained by modelling in a number of different likely possibilities. Of these propanol emerged as the most likely candidate (see Additional file 1: Text S2, Figure S4). 
The use of the molecular modelling procedures described in "Molecular dynamics" to investigate reasons for the presence of propanol in the binding site located on chain D of Insugen (I) described here is currently in progress. The Zn sites in molecules 1 and 2 Insugen (I) and Intergen (II) have been synthesised to include the Zn ions present in naturally occurring insulins. The Zn ions are an essential feature in the formation of the crystal structure and are located on a crystallographic three-fold axis. In porcine insulin 2 Zn crystals [1], three insulin dimers are assembled around two zinc ions, 15.82 Å apart on the threefold axis. Each zinc is coordinated to three symmetry related Nε atoms of residue His10B, both at 2.05 Å, and to three water molecules at 2.36 and 2.21 Å, in molecules 1 (chains A and B) and 2 (chains C and D), respectively. During the course of the X-ray analysis of Insugen (I) the Zn sites in molecules 1 and 2 were carefully examined. The Zn site in Insugen (I) molecule 1 The electron density in the vicinity of Zn2 in molecule 1 is shown in Fig. 19a. This reveals an unexpected feature which was modelled and successfully refined as a solvated acetate molecule, acetate 2101. Zn2100 is coordinated to both His 2010 Nε in chain B and an oxygen atom of acetate2101 (Fig. 19b). The geometry of acetate2101 (Fig. 19b) and its refined parameters are of excellent quality. Note: the complete coordination sphere around the zinc ion is generated by application of the crystallographic three-fold symmetry. Figure 19c shows the electron density in the vicinity of Zn1 in Insugen (I) molecule 2. This shows Zn4100 coordinated to His 4010 Nε and two water molecules. This is the normal mode of Zn binding in insulins [1]. Note: as for molecule 1 the complete coordination sphere around the zinc ion is generated by application of the crystallographic threefold symmetry. The Zn site in Insugen (I) molecule 2 Additional file 1: Figure S8 shows the arrangement of the Zn sites in Insugen (I) with respect to peptide chains B and D and the propanol associated with chain D ThrD27. Figure 19d shows the electron density in the vicinity of Zn501 in the Intergen (II) structure molecule 1. Zn501 is coordinated by His10B Nε as usual and a single water molecule water 617. There is no other solvate in this site. The Zn site in Intergen (II) molecule 2 is structured in the same way. Note: as previously stated the complete coordination sphere around the zinc ion is generated by application of the crystallographic three-fold symmetry. Insugen (I) Introduction As discussed previously in the ultra-high resolution X-ray structure of Insugen (I) in the internal disulphide bridge of chain C (CysC6-CysC11) CysC11 is disordered into two sites: A (80%) and B (20%), Figs. 13, 14a, b. However the corresponding disulphide bridge in chain A is not disordered, Fig. 15a, b. In this section molecular dynamics calculations have been employed in order to investigate and find a rational explanation for this difference. Materials and methods In order to prepare for the molecular dynamics (MD) simulations, two pdb files, Cys11_80percent.pdb and Cys11_20percent.pdb were generated from the original high resolution crystal structure by editing the atom records for Cys11, generating two separate pdb files, one with the chain C CysC6-CysC11 disulphide in the major (80%) conformation, the other with the minor (20%) conformation. 
Following this, both structures, including water molecules in the crystal structure were subjected to energy minimisation using HyperChem 8 Professional (™) [29]. Energy minimisation was performed using the AMBER3 force field [30], using the Polak-Ribere conjugation gradient [31], with only the original contents of the crystal structure contained in a periodic box, since the object of the MD simulation was to explain the disorder in the original unit cell of the high resolution crystal structure, rather than a protein under normal solvated biological conditions. As described below MD simulations were then performed. Two MD simulations were run for each pdb file. The first simulation was run at 310 K for 300 ps, using an initial heat time of 5 ps, with data collected every 0.01 picoseconds, with a time step of 0.002 ps, using NVT dynamics with a Berendsen thermostat [32]. The second simulation was run at a higher temperature of 320 K, an initial heat time of 2.5 ps. Data collection and time steps remained the same as the first simulation. The MD simulations were carried out using the leapfrog algorithm [33], with AMBER3 [30] being used as before. Data was collected with respect to torsion angles for Cys6-Cys11 S-S bonds from both chains A and C, along with root mean square deviations for the torsion angles. RMSD values at the end of the simulations were also collected for both insulin molecules in the crystal asymmetric unit (Table 4a, b). Results The results of both simulations showed several notable changes regarding torsion angle χ3 of the internal Cys6-Cys11 disulphide bonds, where χ3 is defined by the atoms Cβ6-Sγ6-Sγ11-Cβ11. At the lower temperature (310 K) in the major conformation CysA6-CysA11 of chain A Fig. 19 a Insugen (I) electron density in the vicinity of Zn2100 (Zn2) in molecule 1: an acetate molecule acetate 2101 has been modelled in this site close to HisB10 in chain B. Drawn with WinCoot 0.7 [19]. b The Zn site in molecule 1 of Insugen (I). Zn2 is coordinated to His B10 Nε in chain B as usually observed in insulin structures (e.g Baker et al. [1]) and unexpectedly to a highly ordered acetate molecule acetate2101. Drawn with Accelrys Discovery Studio 3 [23]. c Insugen (I) structure electron density showing the vicinity of Zn4100 (Zn1) and HisD10 in chain D. Unlike chain B there is no acetate in this site. Two water molecules have been located whose equivalents are not present in the vicinity of Zn2100 (Zn2) which has the substituted acetate2101. Drawn with WinCoot 0.7 [19]. d Intergen (II) electron density in the vicinity of Zn501 in molecule 1 B-chain. Both HisB10 Nε and water 617 coordinate Zn501. Water617 is the only coordinating water. There is no acetate molecule in this site. The same applies to site D. Drawn with WinCoot 0.7 [19] underwent a conformation change around 8 ps, decreasing from about 120° to 50°, and then increased marginally before staying relatively constant between 60° and 80°. Chain C stayed relatively constant between 90° and 120°, Fig. 20a. The minor conformation for CysC6-CysC11 of chain A stayed relatively constant between 100° and 130°. CysC6-CysC11 of chain C, however, showed several changes. At about 36-41 ps there is an increase in the torsion angle χ3, followed by a decrease (41-47 ps), then another increase (41-66 ps), then another decrease before χ3 remains relatively steady for the rest of the simulation at roughly −80° to −100°, Fig. 20b. 
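Before turning to the higher-temperature runs, it is worth noting that the χ3 torsion tracked in Figs. 20-25 is an ordinary four-atom dihedral (Cβ6-Sγ6-Sγ11-Cβ11). The sketch below shows one standard way to compute it from Cartesian coordinates; the coordinates given are illustrative placeholders rather than values from either structure, and in practice they would be read frame by frame from the trajectory.

```python
import numpy as np

def dihedral(p1, p2, p3, p4):
    """Signed dihedral angle (degrees) defined by four points, e.g. the
    disulphide torsion chi3 = Cbeta6-Sgamma6-Sgamma11-Cbeta11."""
    b1, b2, b3 = p2 - p1, p3 - p2, p4 - p3
    n1 = np.cross(b1, b2)                      # normal to plane p1-p2-p3
    n2 = np.cross(b2, b3)                      # normal to plane p2-p3-p4
    m1 = np.cross(n1, b2 / np.linalg.norm(b2))
    x = np.dot(n1, n2)
    y = np.dot(m1, n2)
    return np.degrees(np.arctan2(y, x))

# Illustrative coordinates (in Angstroms) for CB6, SG6, SG11 and CB11 of a
# single frame; real values would come from each trajectory snapshot.
cb6  = np.array([0.0, 0.0, 0.0])
sg6  = np.array([1.8, 0.0, 0.0])
sg11 = np.array([2.5, 1.9, 0.0])
cb11 = np.array([3.9, 2.2, 1.2])
print(f"chi3 = {dihedral(cb6, sg6, sg11, cb11):.1f} degrees")
```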
For the simulations at 320 K, the most noticeable change was for the major conformation, where χ3 for Cys6-Cys11 of chain C showed a very sharp but transient decrease to around 50° between approximately 2 and 4 ps, before increasing and remaining relatively constant for the rest of the simulation, while chain A remained relatively constant between 100° and 120° (Fig. 21a). For the minor conformation, Cys6-Cys11 of both chains A and C stayed relatively constant (Fig. 21b). Examination of the RMSD torsion angle kinetics for both the major and minor conformations of Cys6-Cys11 shows that for the major conformation at 310 K, RMSD values are much higher for CysA6-CysA11 of chain A (Fig. 22a). However, for the minor conformation at 310 K and the major conformation at 320 K, the RMSD values are much higher for CysC6-CysC11 in chain C for all or the majority of the simulation (Figs. 22b, 23a). For the minor conformation at 320 K, RMSD values start higher for Cys C but then fall below Cys A after 60 ps (Fig. 23b).

Materials and methods

The structure of Intergen (II) was also minimised using the method described above for Insugen (I), but was not split into two pdb files beforehand, as no disorder was modelled for this structure (Fig. 16). Following this, MD simulations were run at 310 K and 320 K, again using the methods described above for Insugen (I).

Results

The results for the simulations run at 310 K showed torsion angles χ3 for Cys6-Cys11 in both chains A and C largely remaining relatively steady around 80°-100°, except for some transient spikes above or below this range (Fig. 24a). Upon repeating the simulations at the higher temperature of 320 K, torsion angle χ3 for CysA6-CysA11 in chain A started off mostly steady around 80°-100° up to about 35-55 ps, then temporarily sharply decreased before increasing again and remaining relatively constant and mostly steady around 80°-100° for the rest of the simulation, except for some transient spikes. CysC6-CysC11 of chain C did not show any noticeable changes at the higher temperature and remained relatively constant around 80°-100°, except for some transient spikes, similar to the behaviour seen at 310 K (Fig. 24b). The RMSD torsion angle kinetics support these observations, showing little difference between Cys6-Cys11 in chains A and C over the course of the simulation run at 310 K, but much higher values for Cys6-Cys11 in chain A than for Cys6-Cys11 in chain C in the simulation run at 320 K (Fig. 25a, b).

Conclusions

From the results of both the torsion angle plots and the RMSD kinetics for the Cys6-Cys11 S-S bonds of Insugen (I), it is clear that both the Cys6-Cys11 internal disulphide bridges in chains A and C possess flexibility. However, the flexibility of Cys6-Cys11 of chain C appears to be much greater, as the MD simulations for both the minor conformation at 310 K and the major conformation at 320 K show times when the torsion angle of Cys6-Cys11 of chain C shows rapid decreases followed by rapid increases. In contrast, Cys6-Cys11 of chain A showed only one major change, in the major conformation at 310 K, at all other times staying relatively constant. The rapid changes in torsion angles shown by Cys6-Cys11 of chain C of Insugen (I) would appear to explain why it shows disorder in the original crystal structure. From a structural point of view, examination of the secondary structures of chains A and C in the original crystal structure of Insugen (I) may provide an explanation for this increased flexibility.
The structure of these chains consists of a single loop between two α-helices. The length of the loop differs between chains C and A: in chain C the loop is long enough to contain both Cys residues involved in the internal disulphide bridge, whereas in chain A it is shorter, so that one of the Cys residues is located on an α-helix. The longer loop of chain C would be more flexible and may therefore allow more movement of the Cys residues involved in the disulphide bond. Over the course of the MD simulation, changes in secondary structure occur, most noticeably in chain C, with significant portions becoming converted to coils, which may further affect flexibility (Fig. 26).

Fig. 21 caption: (a) Plot of torsion angle χ3 changes for Cys6-Cys11 for the major conformation in HR insulin Insugen (I) for the MD simulation carried out at 320 K. (b) Plot of torsion angle χ3 changes for Cys6-Cys11 for the minor conformation in HR insulin Insugen (I) for the MD simulation carried out at 320 K.

Fig. 22 caption: (a) Plot of RMSD kinetics of Cys6-Cys11 torsion angles χ3 for the HR insulin Insugen (I) MD simulations carried out on the major conformation of chain C Cys6-Cys11 at 310 K. (b) Plot of RMSD kinetics of Cys6-Cys11 torsion angles χ3 for the HR insulin Insugen (I) MD simulations carried out on the major conformation of chain C Cys6-Cys11 at 320 K.

In contrast, the results for torsion angle changes and RMSD kinetics for Intergen (II) at 320 K suggest that for this structure Cys6-Cys11 of chain A possesses greater flexibility than Cys6-Cys11 of chain C. However, the difference in flexibility does not appear to be as great as that seen in Insugen (I).

Overall conclusions from the molecular dynamics study

From the results of both the torsion angle plots and the RMSD kinetics for the Cys6-Cys11 S-S bonds of Insugen (I), it is clear that both the Cys6-Cys11 internal disulphide bridges in chains A and C possess flexibility. However, the flexibility of Cys6-Cys11 of chain C appears to be much greater, as the MD simulations for both the minor conformation at 310 K and the major conformation at 320 K show times when the torsion angle of Cys6-Cys11 of chain C shows rapid decreases followed by rapid increases. In contrast, Cys6-Cys11 of chain A showed only one major change, in the major conformation at 310 K, at all other times staying relatively constant. The rapid changes in torsion angles shown by Cys6-Cys11 of chain C of Insugen (I) would appear to explain why it shows disorder in the original crystal structure. From a structural point of view, examination of the secondary structures of chains A and C of Insugen (I) in the original crystal structure may provide an explanation for this increased flexibility. The structure of these chains consists of a single loop between two α-helices, and the length of the loop differs between chains C and A. In chain C the loop is long enough to contain both Cys residues involved in the internal disulphide bridge, whereas in chain A it is shorter, so that one of the Cys residues is located on an α-helix. The longer loop of chain C would be more flexible and may therefore allow more movement of the Cys residues involved in the disulphide bond. Over the course of the MD simulation, changes in secondary structure occur in Insugen (I), most noticeably in chain C, with significant portions becoming converted to coils, which may further affect flexibility.
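The RMSD torsion-angle kinetics referred to above track how far χ3 has drifted from its starting value as the simulation proceeds. The exact definition used by the HyperChem analysis is not stated here, so the following is only a minimal sketch of one plausible definition: a cumulative RMSD of χ3 relative to the first frame, with angular wrapping so that a jump across ±180° is not over-counted. The example series is invented for illustration.

```python
import numpy as np

def torsion_rmsd(chi3_series_deg):
    """Cumulative RMSD of a torsion-angle series relative to its first frame,
    using the shortest angular difference (handles the -180/180 wrap)."""
    chi = np.asarray(chi3_series_deg, dtype=float)
    diff = (chi - chi[0] + 180.0) % 360.0 - 180.0   # wrapped differences
    return np.sqrt(np.cumsum(diff ** 2) / np.arange(1, len(chi) + 1))

# Example: an invented trajectory that drifts from ~120 deg towards ~60 deg.
traj = [120, 118, 115, 100, 80, 65, 60, 62, 61, 60]
print(np.round(torsion_rmsd(traj), 1))
```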
General comments and further selected examples The ultra-high resolution X-ray structures of two forms of human recombinant insulin, Insugen (I) and Intergen (II), has revealed several quite unexpected and previously unpredicted features. Both Insugen (I) and Intergen (II) structures exhibit structural features that can be described as: (a) highly ordered; (b) clear and resolved double conformations; (c) badly disordered. The assembled molecule comprises polypeptide chains A, B, C and D where A and C are sequence equivalent, as are B and D. It is somewhat surprising that the occurrence of structural features (a), (b) and (c) between say Insugen (I) chains A and C is by no means one to one but rather almost lacking in correspondence. This observation applies to all pairs of like polypeptide chains in both Insugen (I) and Intergen (II) and to all pairs of like polypeptide chains one from Insugen (I) and one from Intergen (II). It would be of interest (1) to find explanations for these differences and (2) to know whether they affect the therapeutic properties of these preparations? As described in "General comments and further selected examples" below why a given residue should be perfectly ordered in one structure and badly disordered [36] in the other? With respect to question (1) the possibilities include (a) method of preparation including folding of the recombinant amino acid-chains and (b) the forces in play when the crystal is cryo cooled prior to X-ray data collection. With respect to question (2) it is well known that differences in the form of a therapeutic insulin preparation with respect to the naturally occurring insulin can induce the production of antibodies in patients. No such indication has been noted with respect to the widespread use of either Insugen (I) or Intergen (II) but is nevertheless a possibility which should be borne in mind. It may be possible to use molecular dynamics simulations further to resolve some of these considerations. Previous studies: (i) that of Baker et al. [1] at room temperature and a resolution of 1.5 Å on porcine insulin and (ii) that of Smith, Pangborn and Blessing on a commercially available biosynthetic form of T 6 human insulin (Lilly Research Laboratories) at 120 K and 1.0 Å resolution [24] have revealed significant differences in a number of the individual amino acid residue conformations between the two structures. The level of refinement achieved in these two analyses, as is also the case with the low temperature structure of Intergen (II) HR insulin, as judged by the final R values (0.153, 0.183 and 0.168, respectively) are all inferior to that achieved here with the Insugen (I) HR insulin (0.1112). Interestingly Smith et al. [34] list seven side-chains in the porcine room temperature structure [1] at 1.5 Å resolution and nine side-chains in their 1.0 Å human insulin structure as having two distinct conformations. The residues involved are: porcine GlnB4, ValB12, GluB21, ArgB22, ThrB27 and LysD29; human GlnB4, ValB12, GluB17, GluD21, GluC5, LeuC16, ValD12, ValD18 and GluD21. Only two residues are in common in this list: GlnB4 and ValB12. This result follows the trend reported here for Insugen (I) and Intergen (II) that correspondence between the two structures with respect to multiple or disordered conformations does not follow any fixed pattern. However, interestingly, reference to Fig. 11 reveals that in both Insugen (I) and Intergen (II) GlnB4 and ValB12 presented problems in the interpretation of their electron densities. 
Of the other residues in the above porcine list some are clear single conformations and others are problematic in either Insugen (I) or Intergen (II). Similar comments apply with respect to the above list for biosynthetic form of T 6 human insulin. GlnB4 and ValB12 are the only residues in all four of these insulin structures that presented problems. According to Fig. 2 ValB12 is significantly involved in the interaction of insulin with its receptor. PheB24 and PheB25 in Insugen (I) and Intergen (II) A significant example of structural differences in the ultra-high resolution 100 K structures of Insugen (I) and Intergen (II) can be found in the phenylalanine residues PheB24 and B25 (see Fig. 11). As stated previously in "Introduction" these residues, amongst others, are important for insulin receptor binding [6]. As reported by Baker et al. [1] changes in biological activity occur when these residues are modified. Figures 27a, b show the electron density in Insugen (I) for residues PheB24 and B25, respectively while Fig. 28a, b show the electron density in Intergen (II) for the same residues. The significant observation here is the extremely poor electron density for PheB25 in Insugen (I) which has been modelled as disordered with two distinct conformations, as opposed to PheB24 in Insugen (I) and PheB24 and PheB25 in Intergen (II) which are all excellent examples of strong, clearly resolved single conformation electron density. It is of interest to note that PheB25 in the X-ray structure of porcine insulin [1] has comparatively weak electron density corresponding to a single well defined conformation but with one missing atom on the phenyl ring (see also Footnote 1; Additional file 1: Figure S6a) and PheB24 is in completely well-defined electron Fig. 27 a Electron density in the Insugen (I) structure for PheB24, an example of a clear single highly resolved amino acid residue. b Electron density in the Insugen (I) structure for PheB25, an example of very poor density modelled as a doubly disordered acid residue. This result is surprising in view of the high order observed in PheB24 (a) density as are PheD24 and PheD25. What is probably most surprising is that while Insugen (I) PheD25 has strong electron density corresponding to a single ordered conformation, as is also the case for Intergen (II) and porcine insulin [2], the conformation for Insugen (I) PheD25 uniquely corresponds to that of the ordered porcine B conformation, not the porcine D conformation as displayed by the other two PheD conformations (see also Additional file 1: Text S6, Figure S6a-h). It is planned to investigate the situation with respect to PheB24 and PheB25 in Insugen (I) and Intergen (II) using molecular dynamics as described in "Molecular dynamics" for the Sγ6-Sγ11 disulphides. Intergen (II) structure: chain C(3) S-S bridge between Sγ6-Sγ11 The intra-chain S-S bridge in chain C(3) Cys6-Cys11 has been observed in Insugen (I) to exhibit two ordered conformations. Cys6 occupies a single site while Cys11 occupies two sites with relative occupancies of 0.8 and 0.2, respectively. The geometry and all other refinement characteristics of this bifurcated cysteine bridge are of excellent quality as discussed previously. The corresponding S-S bridge in Insugen (I) chain A(1) is completely ordered which again poses a question about the origin of the distinction between the two molecular dynamics simulations. Molecular dynamics studies have provided rationale in answer to this question. 
In fact the difference between the disulphides A6-A11 and C6-C11 may be further explained by the difference in solvent exposure. C6 is less than 4 Å from the nearest solvent water molecule, and there are several waters modelled in that area which may give greater conformational flexibility to the region. A6 is in a hydrophobic pocket, and consequently that disulphide may be more restricted by its environment. This is supported by the fact that the section of chain D near this part of the molecule is also disordered (see above). With reference to Intergen (II), the corresponding S-S bridge in chain A(1) is also completely single and ordered. The S-S bridge in chain C(3), as observed by inspection of PDB 3W7Y, has been modelled as a single ordered conformation. However, as discussed previously, there is evidence in the electron density (Fig. 16) that this S-S bridge is actually bifurcated, as in the corresponding S-S bridge in Insugen (I). S-S bridges with ordered double conformations have been previously reported; for example, Cys14-Cys38 in the ultra-high resolution (0.86 Å), low temperature, synchrotron structure of bovine pancreatic trypsin inhibitor [25] is very similar to Cys6-Cys11 in Insugen (I).

Solvated propanol

The ultra-high resolution Insugen (I) X-ray structure was found to include an ordered solvated propanol molecule which forms H-bonds with the prominent Oγ1A of ThrD27 in chain D(4) and two water molecules. The electron density for this solvate is clear and the geometry of the refined propanol is excellent. There is no solvated propanol in Insugen (I) chain B(2) or in either chain B or D in Intergen (II). These differences again offer a challenge to a rational explanation. The origin of the solvated propanol in Insugen (I) may be questioned; however, it is known that propanol is a minor component used in the manufacturing process and is most likely to have been introduced into the protein at some stage of the synthesis procedure. To the best of our knowledge no other insulin structure has been shown to include structurally ordered propanol.

The Zn sites

Insugen (I) and Intergen (II) have been synthesised to include the essential Zn ions present in naturally occurring insulins. The Zn ions are an essential feature in the formation of the crystal structure and are located on a crystallographic three-fold axis.

The Zn site in Insugen (I) molecule 1

The electron density in the vicinity of Zn2 in molecule 1 revealed an unexpected feature which was shown to be a solvated acetate molecule. Zn2 is coordinated to both His10B Nε in chain B and an oxygen atom of the acetate. It is most likely that the presence of solvated acetate originated during crystallization. There are no other solvated acetate sites in either Insugen (I) or Intergen (II).

Authors' contributions

DRL was responsible for growing the Insugen (I) crystals used for X-ray data collection at Diamond, monitoring the structure determination and refinement, advising on the graphical analysis of the structure, surveying the progress of the manuscript preparation, checking the Figures and Tables and advising on the Supporting Information. RAP monitored the structure determination and refinement, carried out the graphical analysis of the two structures Insugen (I) and Intergen (II), surveyed the locations and involvement of water molecules, initiated and carried out the manuscript preparation including many of the Figures and References, devised the Graphical abstract and prepared most of the Supporting Information.
CMCL selected and mounted the crystal for X-ray diffraction measurements, processed and assessed the data, solved and refined the initial structure and measured fluorescence spectrum to test for the presence of Zn. CEN carried out the further refinement of Insugen (I) including assignment of multiple side chain conformations, conversion to anisotropic thermal parameters for the non-H atoms, modelling H-atoms for inclusion in the refinement, detailed checking and assessment of the structure as it developed, further refinement of the deposited Intergen (II) structure, preparation of some of the Figures and Tables and deposition of the data in the PDB. BZC was responsible (together with RAP and DRL) for the inception of the project, closely followed the progress of manuscript preparation and advised on certain aspects of how to proceed, took responsibilty for the correct useage of scientific units and advised on aspects of submission of the manuscript for publishing including appropriate References, quality of the graphics, Figures and Tables in both the main text and Supporting Information. ZAI-K and AAB were responsible for supplying the original Insugen (I) material for crystallization and advised on its biochemical properties. BJH supervised the MD calculations, interpretation and presentation of the results and implications of the results for the two structures. NCJG undertook the running of the MD calculations, preparation of the graphical outputs and interpretation and implications of the results. JWS undertook the detailed analysis of features of secondary structure in Insugen (I) and advised on the preparation of their graphical images. JNL was largely responsible for characterising the features of the water structure and assisted in devising suitable ways of illustrating and presenting them, he also advised on both special and general aspects of the presentations in the manuscript and in the reading and checking of most of the sections presented. AKB was largely responsible for initiating the further refinement of Insugen (I) and played a significant part in the analysis of the residues with multiple conformations and their refinement, he also carried out a number of checks on the
Estrogen Inhibits Glucocorticoid Action via Protein Phosphatase 5 (PP5)-mediated Glucocorticoid Receptor Dephosphorylation* Although glucocorticoids suppress proliferation of many cell types and are used in the treatment of certain cancers, trials of glucocorticoid therapy in breast cancer have been a disappointment. Another suggestion that estrogens may affect glucocorticoid action is that the course of some inflammatory diseases tends to be more severe and less responsive to corticosteroid treatment in females. To date, the molecular mechanism of cross-talk between estrogens and glucocorticoids is poorly understood. Here we show that, in both MCF-7 and T47D breast cancer cells, estrogen inhibits glucocorticoid induction of the MKP-1 (mitogen-activated protein kinase phosphatase-1) and serum/glucocorticoid-regulated kinase genes. Estrogen did not affect glucocorticoid-induced glucocorticoid receptor (GR) nuclear translocation but reduced ligand-induced GR phosphorylation at Ser-211, which is associated with the active form of GR. We show that estrogen increases expression of protein phosphatase 5 (PP5), which mediates the dephosphorylation of GR at Ser-211. Gene knockdown of PP5 abolished the estrogen-mediated suppression of GR phosphorylation and induction of MKP-1 and serum/glucocorticoid-regulated kinase. More importantly, after PP5 knockdown estrogen-promoted cell proliferation was significantly suppressed by glucocorticoids. This study demonstrates cross-talk between estrogen-induced PP5 and GR action. It also reveals that PP5 inhibition may antagonize estrogen-promoted events in response to corticosteroid therapy. Breast cancer is a leading cause of cancer mortality among women. In 2004, 186,772 women were diagnosed with breast cancer and 40,954 women died from breast cancer in United States (1). The female hormone, estrogen, promotes breast cancer cell growth via the estrogen receptor (ER), 2 which is expressed in ϳ60% of breast cancers (2). Another consequence of estrogen is suggested by observations that the course of some allergic, autoimmune, and malignant diseases is more severe and less responsive to corticosteroid treatment in females (3)(4)(5), implicating a role for estrogen in glucocorticoid resistance. There are two forms of ER, ER␣ and ER␤, that reside in the cell membrane, cytoplasm, and nucleus (6,7). Nuclear ER regulates gene transcription by binding to DNA directly at estrogen-response elements or indirectly through interactions with transcriptional factors (7). Membrane-bound ER participates in cell signal transduction by activating G protein subunits and subsequently augments downstream kinase activities, such as p38 and ERK, in endothelial and breast cancer cells (8,9). Binding to estrogen causes a conformational change in the ER that promotes the assembly of an active transcription complex at estrogen-induced genes such as c-myc and cyclin D1, which mediate the promotion of cell proliferation (10,11). Three pathways have been reported to affect GR phosphorylation and activity. First, MAPK family members p38, JNK, and ERK regulate GR activity differentially. Activation of JNK and ERK inhibits GR transcriptional enhancement, and inhibition of JNK and ERK by inhibitors enhances GR function (19 -21). The role of p38 in modulation of GR activity remains controversial (22,23). Second, cyclin-dependent kinases (CDK) phosphorylate GR and regulate its activity. 
CDK2 phosphorylate rat GR at Ser-224 and Ser-232 (24,25), and CDK5 suppresses GR transcriptional activity by attenuating binding of transcriptional cofactors to glucocorticoid-responsive promoters (26). Third, serine/threonine protein phosphatases (PP) negatively regulate GR phosphorylation. Inhibition of PP1, PP2A, PP2B, and PP5 by protein phosphatase inhibitors okadaic acid and calyculin A potentiates GR activity and increases GR phosphorylation (27,28). Unlike PP1 and PP2A, PP5 acts predominantly in protein complexes because the N-terminal domain of PP5 folds over the catalytic site blocking access to substrates in the absence of other proteins (29,30). PP5 has been identified in complexes containing GR and heat shock protein 90 (hsp90) (31,32), suggesting that PP5 may regulate GR activity. Glucocorticoids have been used in breast cancer therapy to antagonize the growth-promoting effect of estrogen. Nonetheless, clinical trials of glucocorticoid monotherapy in breast cancer have shown only a modest response (33). In advanced breast cancer meta-analyses, the addition of glucocorticoids to either chemotherapy or other endocrine therapy has resulted in increased response rates, but not increased survival (33,34). To date, the mechanism of glucocorticoid resistance in breast cancer has not been elucidated but would be important to understand if estrogen-driven corticosteroid resistance is to be circumvented. In this study, we investigated the three GR-regulating pathways discussed above, and we identified PP5 to be involved in the inhibition of GR activity by estrogen providing a novel mechanism of cross-talk between estrogen and glucocorticoids. EXPERIMENTAL PROCEDURES Materials-17␤-Estradiol (E 2 ), dexamethasone (DEX), ICI 182,780, nonimmune rabbit serum, and monoclonal anti-␤-actin antibody were purchased from Sigma. PD98059 and roscovitine were purchased from Calbiochem. Purified mouse antiglucocorticoid receptor antibody was purchased from BD Biosciences. Rabbit polyclonal antibody to glucocorticoid receptor, rabbit polyclonal antibody to phospho-glucocorticoid receptor (Ser-226) antibody, PP5 antibody, and mouse monoclonal antibody to TATA-binding protein (TBP) were purchased from Abcam Inc. (Cambridge, MA). Phospho-glucocorticoid receptor (Ser-211) antibody was purchased from Cell Signaling (Danvers, MA). Rabbit IgG was purchased from Southern Biotechnology Association, Inc. (Birmingham, AL). Normal mouse IgG1 and protein A/G PLUS-agarose were purchased from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA). FuGENE 6 transfection reagent was purchased from Roche Applied Science. NE-PER nuclear and cytoplasmic extraction reagents were purchased from Pierce. SureSilencing shRNA plasmids were purchased from SuperArray Bioscience Corp. (Frederick, MD). ON-TARGETplus SMARTpool siRNA against PP5 and ON-TARGETplus nontargeting pool siRNA were purchased from Dharmacon (Lafayette, CO). CellQuanti-MTT TM cell viability assay kit was purchased from BioAssay Systems (Hayward, CA). SuperBlock was purchased from Skytec (Logan, UT). Nonimmune donkey serum was purchased from Jackson ImmunoResearch (West Grove, PA). Anti-mouse or anti-rabbit horseradish peroxidase-labeled IgG was purchased from Amersham Biosciences. Chemiluminescent reagent was purchased from PerkinElmer Life Sciences. Cell Culture and Treatment-MCF-7, T47D, and MDA-MB-231 cell lines were purchased from American Type Culture Collection. 
For routine proliferation, MCF-7 and MDA-MB-231 cell lines were cultured in minimum Eagle's medium; T47D was cultured in RPMI 1640 medium, supplemented with 10% fetal calf serum, 50 µg/ml streptomycin, and 50 units/ml penicillin. Cells were cultured in hormone-free medium (phenol red-free minimum Eagle's medium containing 2.5% charcoal-stripped serum) at least 2 days before they were treated with 10 nM E2 and 100 nM DEX for the time lengths indicated below. An equal volume of ethanol was used as vehicle control.

Proliferation Assay-6 × 10^3 MCF-7 cells were plated in flat-bottom 96-well plates and cultured in hormone-free medium. Two days later, 10 nM E2 was added. The following day, 100 nM DEX was added alone or in combination with E2, and the cells were allowed to grow for an additional 2 days. In experiments that examined the effect of PP5 knockdown on MCF-7 cell proliferation, the cells were transfected with 0.05 µg of SureSilencing shRNA plasmid per well 24 h prior to E2 treatment. The number of viable cells was determined with the CellQuanti-MTT cell viability assay kit according to the manufacturer's instructions.

Real Time PCR-10^5 cells per well were cultured in hormone-free medium in 24-well plates and treated with hormones and inhibitors as indicated. Total RNA was prepared using the RNeasy mini kit (Qiagen, Valencia, CA). After reverse transcription, 500 ng of cDNA from each sample were analyzed by real-time PCR using the dual-labeled fluorigenic probe method on an ABI Prism 7300 real-time PCR system (Applied Biosystems). All primers were purchased from Applied Biosystems (Foster City, CA). The ΔΔCt method was utilized to calculate the relative change in target gene expression as an approximation of transcription, based on the change in threshold cycle values for control versus treated cells (the cycle number at which the fluorescent signal crosses the "threshold", i.e. the logarithmic increase in cDNA concentration); a minimal sketch of this calculation is given at the end of this section. This method assumes that both the reference gene (internal control, i.e. β-actin in this study) and the target genes have similar amplification efficiencies.

Immunofluorescence Assay-GR nuclear translocation and its phosphorylation in response to DEX were analyzed according to Ref. 20 with modifications. In brief, 10^5 MCF-7 cells were cultured on 18-mm round coverslips in 12-well plates. Cells were fixed with 4% paraformaldehyde at room temperature for 10 min, permeabilized for 15 min in Permeabilization Buffer (PBS containing 0.1% Tween 20, 0.1% bovine serum albumin, 0.01% saponin), and blocked at 37°C for 1 h in Blocking Buffer (2.25% bovine serum albumin, 45% SuperBlock, 10% nonimmune donkey serum). Total GR and phospho-GR (Ser-211) antibodies were diluted 1:50 in Permeabilization Buffer and incubated with the cells at 4°C overnight. Corresponding amounts of mouse IgG1 and rabbit IgG were used as negative controls, respectively. The cells were washed in PBS containing 0.1% Tween 20 for 20 min, followed by incubation with Cy3-conjugated secondary antibody (donkey anti-mouse or donkey anti-rabbit, diluted 1:500 in Permeabilization Buffer containing 300 nM 4′,6-diamidino-2-phenylindole) at room temperature for 1 h. Cells were washed again and mounted on slides. All slides were analyzed by fluorescence microscopy (Leica Microsystems, Wetzlar, Germany) with the imaging software Slidebook (Intelligent Imaging Innovations, Denver, CO). Mean fluorescence intensity in the cell nuclei, defined by 4′,6-diamidino-2-phenylindole staining, was assessed. Fifty to 100 cells were analyzed per slide.
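The ΔΔCt calculation described under Real Time PCR above reduces to a one-line formula once the threshold cycle (Ct) values are in hand. The Python sketch below is illustrative only: the Ct values are invented placeholders, not data from this study, and it assumes roughly 100% amplification efficiency for both the target gene and the β-actin reference, as the method requires.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the delta-delta-Ct method (assumes similar,
    near-100% amplification efficiencies for target and reference genes)."""
    d_ct_treated = ct_target_treated - ct_ref_treated    # normalize to reference
    d_ct_control = ct_target_control - ct_ref_control
    ddct = d_ct_treated - d_ct_control
    return 2.0 ** (-ddct)

# Illustrative Ct values only: a threshold reached 3 cycles earlier in the
# treated sample corresponds to roughly 8-fold induction of the target.
print(fold_change_ddct(22.0, 16.0, 25.0, 16.0))   # -> 8.0
```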
Western Blot-Protein samples were resolved on 4-12% Bis-Tris gels (Invitrogen) and transferred to polyvinylidene difluoride membranes. The membranes were incubated in PBS containing specific antibodies, 5% dry milk, and 0.05% Tween 20 at 4°C overnight. Subsequently, membranes were washed in PBS, 0.05% Tween 20 and incubated for 1 h at room temperature with anti-mouse or anti-rabbit horseradish peroxidase-labeled IgG (1:10,000), washed, incubated with chemiluminescent reagent, and processed for autoradiography.

Knockdown of PP5-10^5 MCF-7 cells were plated in each well of a 24-well plate and cultured in hormone-free medium. 24 h later, cells were transfected with 100 nM siRNA (or 0.1 µg of shRNA) in 1 ml of medium containing 1 µl of FuGENE 6 transfection reagent. Corresponding amounts of control siRNA or shRNA plasmids were used.

Chromatin Immunoprecipitation Assay-GR binding to the GRE was assessed by chromatin immunoprecipitation assay as described previously (35) with modifications. Briefly, 2.5 × 10^6 cells were used in each precipitation. After sonication, the chromatin solution was pre-cleared with 60 µl of protein A/G PLUS-agarose beads and 20 µl of nonspecific serum, followed by precipitation with 60 µl of protein A/G PLUS-agarose beads and specific antibody. Precipitated chromatin complexes were removed from the beads through incubation at 65°C for 30 min with 550 µl of Elution Buffer (50 mM Tris, pH 8.0, 1 mM EDTA, 1% SDS). 500 µl of eluates were mixed with 25 µl of 5 M NaCl and 1 µl of RNase A (10 mg/ml, DNase-free) and incubated at 65°C overnight. Samples were then digested with proteinase K, and DNA was purified with QIAquick columns (Qiagen, Valencia, CA) as indicated by the manufacturer, except that the sample was first mixed with PBI buffer (supplied by the manufacturer) for 30 min with agitation (36). Precipitated DNA was quantified by quantitative real-time PCR using SYBR Green (Applied Biosystems). Primers used to amplify the GRE in the SGK gene promoter were as follows: 5′-CTTGTTACCTCCTCACGTG-3′ (forward); 5′-GTCGTCTCTGCACTAAAGG-3′ (reverse).

Statistical Analyses-Results are expressed as the mean ± S.E. Statistical analysis was conducted using GraphPad Prism, version 5 (GraphPad Software, La Jolla, CA). Responses within an experiment were expressed as fold change over the control setting. These data were analyzed by the paired Student's t test, pairing by experiment. Before testing, paired difference distributions were examined for outliers, which can indicate violation of the normality assumption of the t test; no outliers were apparent. Tests were performed only for specific pre-planned treatment comparisons. Differences were considered significant at p < 0.05. A minimum of three independent experiments were conducted to allow for statistical comparisons.

Estrogen-promoted MCF-7 Cell Proliferation Is Not Affected by Glucocorticoids-In this study, we chose the MCF-7 breast adenocarcinoma cell line as a model for our experiments. This cell line is known to proliferate in response to estrogen stimulation (37). To study whether glucocorticoids can inhibit estrogen-mediated cell growth, a proliferation assay was carried out. MCF-7 cells were first cultured in hormone-free medium for 2 days to deplete hormones in the cells and then pretreated with 10 nM E2 (the only form of estrogen used in this study) or an equal volume of vehicle (ethanol) for 24 h, followed by 100 nM DEX treatment. The purpose of pretreating the cells with estrogen was to mimic the in vivo state, because breast cancer cells are under estrogen influence before glucocorticoid treatment. Estrogen promoted cell growth by 2.09 ± 0.06-fold as compared with mock control. No change in cell proliferation was noted when cells were cultured in the presence of both estrogen and DEX (1.97 ± 0.04-fold as compared with mock control), indicating that estrogen-promoted cell growth was not inhibited by glucocorticoid treatment.
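The statistical treatment described above (per-experiment fold changes compared with a paired Student's t test at p < 0.05) can be reproduced in a few lines. The sketch below is illustrative only: the fold-change values are invented placeholders rather than data from this study, and scipy's ttest_rel is used in place of GraphPad Prism.

```python
from scipy.stats import ttest_rel

# Hypothetical per-experiment fold changes (fold over vehicle control) from
# three independent experiments: DEX alone vs. E2 pre-treatment plus DEX.
dex_alone = [3.1, 3.4, 3.2]      # e.g. MKP-1 induction with DEX only
e2_plus_dex = [1.4, 1.6, 1.5]    # same experiments with estrogen pre-treatment

# Paired t test, pairing by experiment, as described under Statistical Analyses.
t_stat, p_value = ttest_rel(dex_alone, e2_plus_dex)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # significant if p < 0.05
```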
Estrogen Inhibits Glucocorticoid Action-To further explore how estrogen affects the action of glucocorticoids, we assessed the effect of estrogen on DEX induction of MKP-1 and SGK, two well known glucocorticoid-responsive genes (17, 18). DEX alone induced MKP-1 and SGK by 3.23 ± 0.16- and 108.30 ± 9.69-fold, respectively, after 3 h in the MCF-7 cell line. Preincubating MCF-7 cells with estrogen for 24 h prior to DEX treatment significantly inhibited MKP-1 and SGK induction by a mean of 85% (n = 3) and 74% (n = 3), respectively (Fig. 1). Similar estrogen effects were observed in another breast cancer cell line, T47D (Fig. 1). Significantly lower GR expression was found in the T47D cell line as compared with the MCF-7 cell line (data not shown); 8 h of stimulation with DEX was determined to be the optimal time point for MKP-1 and SGK induction in this cell line.

Estrogen Exerts Its Effect on Glucocorticoids through ER-To investigate whether estrogen inhibited DEX induction of MKP-1 and SGK through the ER, we employed the selective ER inhibitor ICI 182,780 (inhibitory concentration of 50% (IC50) = 0.29 nM) to antagonize ER in MCF-7 cells. The results demonstrated that in the presence of 1 µM ICI 182,780, DEX induced MKP-1 and SGK by 2.37 ± 0.36- and 182.90 ± 38.98-fold, respectively, and the DEX-mediated induction of these genes was no longer inhibited by preincubating the cells with estrogen (Fig. 2A). These data indicate that estrogen exerts its inhibitory effect on glucocorticoid action through the ER. In all experiments using inhibitors, MCF-7 cells were treated in parallel with estrogen and DEX as described above without inhibitors; the results showed that the cells responded to hormones in the same way as described in Fig. 1 (data not shown). In the ERα-negative MDA-MB-231 cell line, which expresses only ERβ (38, 39), DEX induction of MKP-1 and SGK was not affected by estrogen (Fig. 2B). These data indicate that estrogen exerts its inhibitory effect on glucocorticoid action through ERα.

Estrogen Has No Effect on GR Nuclear Translocation but Inhibits GR Phosphorylation at Ser-211-Because DEX induces gene expression through GR, which accumulates in the nucleus after ligand binding, and Ser-211 phosphorylation is associated with the transcriptionally active form of GR (40), we tested GR nuclear translocation and GR phosphorylation at Ser-211 by immunofluorescence (Fig. 3) and Western blot (Fig. 4). DEX alone increased GR nuclear localization with concomitant loss of cytoplasmic GR, and this was not affected by preincubation of the cells with estrogen (Fig. 3, A and C). DEX-induced GR phosphorylation at Ser-211 was observed only in the nucleus and was significantly inhibited by estrogen, by a mean of 39% (n = 3) (Fig. 3, B and D). Consistent with the immunofluorescence results, Western blot of cytoplasmic and nuclear fractions showed that estrogen significantly inhibited DEX-mediated Ser-211 GR phosphorylation by a mean of 55% (n = 3) (Fig. 4, A and C) without affecting GR nuclear localization (Fig. 4, A and B).
(Fig. 4, A and C) without affecting GR nuclear localization (Fig. 4, A and B). We also tested GR phosphorylation at Ser-226. No phosphorylation at this site was observed in either the absence or presence of glucocorticoid and/or estrogen (data not shown). The other GR phosphorylation sites were not tested because no antibodies against them were commercially available. Estrogen Inhibits GR Binding to the GRE of the SGK Promoter-Chromatin immunoprecipitation was then performed to test GR binding to a well-characterized GRE in the promoter region of the SGK gene (18, 41). Within 1 h, DEX induced GR binding to the GRE by 5.49 ± 0.15-fold, and this was significantly inhibited by estrogen, by a mean of 82% (n = 3) (Fig. 5). This result suggests that estrogen inhibits DEX induction of SGK by reducing GR recruitment to the SGK promoter. Estrogen Suppresses GR Activity through PP5-Because MAPKs, CDKs, and protein phosphatases can regulate GR activity, we examined the effect of estrogen on these three pathways (19-32). First, we screened MAPK phosphorylation using a human phospho-MAPK array kit from R & D Systems and found that estrogen treatment increased ERK phosphorylation in MCF-7 cells (data not shown), confirming published data (8, 42). Inhibition of ERK phosphorylation with 20 µM PD98059 (MEK/ERK inhibitor, IC50 = 2 µM), as confirmed by Western blot (data not shown), did not diminish estrogen inhibition of the DEX-mediated induction of either MKP-1 or SGK (Fig. 6A). Similar results were seen when the cells were treated with 20 µM roscovitine (26, 43), a selective CDK2 (IC50 = 700 nM) and CDK5 (IC50 = 200 nM) inhibitor (Fig. 6B), indicating that neither the ERK nor the CDK pathway was involved in the estrogen-mediated inhibition of glucocorticoid action. To examine a possible role of protein phosphatases in ER-GR cross-talk, we assessed PP1, PP2A, and PP5 expression in MCF-7 cells following estrogen stimulation. We found that PP5 expression was significantly induced by estrogen at both the protein level (Fig. 7A) and the mRNA level (Fig. 7B). No difference was observed in the expression of PP1 and PP2A. To determine whether estrogen exerts its inhibitory effect on glucocorticoid action through PP5, we specifically knocked down PP5 by RNA interference and then assessed DEX induction of MKP-1 and SGK. Estrogen-induced PP5 expression was completely abolished by either siRNA or shRNA (Fig. 8, A and B); shRNA more effectively knocked down the base-line expression of PP5 (Fig. 8A). As before, DEX induction of MKP-1 and SGK was inhibited by estrogen when the cells were transfected with control siRNA (Fig. 8C). In contrast, when the cells were transfected with PP5 siRNA, the estrogen-mediated inhibition was almost entirely abrogated (Fig. 8C). These data indicate that estrogen suppresses GR activity through increased expression of PP5. Following PP5 knockdown, DEX-mediated GR phosphorylation at Ser-211 was assessed (Fig. 9). Upon PP5 knockdown, DEX-induced GR phosphorylation at Ser-211 was no longer inhibited by estrogen, as compared with control siRNA treatment. Glucocorticoids Suppress Estrogen-induced Proliferation in the Absence of PP5-As described above, glucocorticoids had no effect on estrogen-promoted cell growth, and the subsequent studies suggested that this may be because GR activity is inhibited by estrogen through PP5. To definitively test our proposed mechanism, we knocked down PP5 and assessed MCF-7 proliferation in response to estrogen or to estrogen plus DEX.
In cells transfected with the control shRNA plasmid, estrogen alone promoted cell growth by 2.02 ± 0.04-fold, and this increase was unaffected when DEX was also added (Fig. 10). Knockdown of PP5 by itself inhibited estrogen-induced cell proliferation by a mean of 65% (n = 3) as compared with estrogen-induced proliferation in cells transfected with the control shRNA (p = 0.0478), possibly because of activation of p53-mediated growth arrest (44). However, the addition of DEX significantly inhibited estrogen-mediated cell proliferation even further, resulting in a mean of 79% inhibition (n = 3) as compared with estrogen-induced proliferation of MCF-7 cells transfected with the PP5 shRNA plasmid (Fig. 10), substantiating the key role of PP5 in the estrogen-mediated antagonism of the anti-proliferative effects of glucocorticoids.
[Figure 4 legend: DEX-induced GR nuclear translocation and its phosphorylation at Ser-211 in MCF-7 cells treated with estrogen, as detected by Western blot. A, cells were treated with hormones as described in Fig. 3. Nuclear and cytoplasmic protein samples were prepared using NE-PER nuclear and cytoplasmic extraction reagents and blotted with antibodies against Ser-211-phosphorylated GR. The membranes were stripped and reprobed with antibodies against total GR. Actin and TBP were used as loading controls for cytoplasmic and nuclear proteins, respectively. Images are representative of three independent experiments. Fold changes in the densitometry readings of nuclear total GR normalized to TBP (B) and of Ser-211-phosphorylated nuclear GR normalized to total GR (C) in cells treated with DEX alone (set as 1) versus cells cultured with E2 and DEX are provided. NS, not significant.]
DISCUSSION
This study demonstrates for the first time that estrogen-induced PP5 in breast cancer cells ablates GR function via a reduction in ligand-mediated Ser-211 GR phosphorylation. Furthermore, inhibition of PP5 induction by estrogen restores DEX-induced GR phosphorylation and allows GR-mediated growth arrest in the presence of estrogen. These findings have important implications for breast cancer therapy because they provide an explanation for the limited benefit observed in clinical trials utilizing corticosteroids as monotherapy (33, 34). Our study also suggests a potential antagonistic role of estrogen-induced PP5 in cellular glucocorticoid-mediated events in females, and it may provide an explanation of why the course of some allergic, autoimmune, and malignant diseases tends to be more severe and less responsive to corticosteroid treatment in females (3-5). MKP-1 and SGK were chosen as well-characterized glucocorticoid-inducible genes to study the inhibitory effects of estrogen on glucocorticoid-regulated targets. MKP-1 was originally identified as an ERK-specific phosphatase (45, 46). However, MKP-1 can also dephosphorylate and inactivate both the stress-activated protein kinase/JNK and p38 (47-49). The rapid induction of MKP-1 by DEX and the presence of at least three putative GREs in its promoter region suggest that it is transcriptionally regulated by the GR (17). It has been reported that induction of MKP-1 in MCF-7 cells results in cell growth suppression (50). SGK was first described in rat mammary epithelia as an immediate early response gene that is rapidly induced in vivo by glucocorticoids (51). The primary role of SGK is thought to involve the regulation of epithelial ion transport (52).
It was also reported that glucocorticoid-induced SGK protected breast cancer cell lines from apoptosis (53). However, only a subset of breast cancer cell lines can be protected from apoptosis by glucocorticoids, and this does not occur in the MCF-7 and T47D cell lines used in this study (53), which makes these cell lines a good model for studying ER-GR cross-talk. Thus, the role of SGK in cell growth regulation may be cell type-specific and deserves further investigation. Furthermore, we find that in estrogen-treated cells the GR fails to load at the SGK GRE upon ligand binding. This provides a mechanistic basis for the estrogen-mediated suppression of GR action. The phosphorylation status of all three major families of MAPKs, p38, JNK, and ERK, is essential to understanding the roles these signaling molecules play in cell function and disease. Our primary screening of MAPK phosphorylation found that only ERK phosphorylation is induced by estrogen. Further study using the MEK/ERK inhibitor PD98059 excluded ERK as the mediator of the effect of estrogen on glucocorticoid function in the MCF-7 cell line. Similarly, we excluded CDK2 and CDK5 using the CDK-specific inhibitor roscovitine. We then tested the protein phosphatase pathway in estrogen-treated MCF-7 cells and found expression of PP5 to be significantly increased by estrogen at both the protein and the mRNA level. This is consistent with previous reports that PP5 can be induced by estrogen (37), but it had not been previously established whether PP5 mediates the cross-talk between estrogen and glucocorticoids. To address this issue, we knocked down PP5 expression in the MCF-7 cell line and found that the inhibitory effect of estrogen on glucocorticoid induction of both MKP-1 and SGK was abolished. This novel finding provides direct proof for the first time that PP5 bridges the cross-talk between estrogen and glucocorticoids. The work by Honkanen and co-workers (37, 54, 55) that first explored a role for PP5 in breast cancer demonstrated the presence of an estrogen-response element in the PP5 promoter and showed that PP5 can be induced by estrogen (37). It was found that PP5 overexpression in MCF-7 cells allows rapid cell proliferation, whereas inhibition of PP5 expression by the synthetic oligonucleotide ISIS 15534 suppressed cell proliferation (37). It was suggested that PP5 provides a growth advantage to estrogen-responsive tumors (54, 55). In an MCF-7 mouse xenograft model of tumor development, constitutive overexpression of PP5 was associated with accelerated tumor growth in a high-estrogen environment. However, PP5 overexpression alone failed to produce spontaneous tumors in a low-estrogen environment (55). PP5 has been shown to associate with estrogen receptors, resulting in suppression of ER-dependent transcription as a feedback control mechanism (56). No experiments had been performed to determine whether inhibition of estrogen-induced PP5 in breast cancer cell lines would allow DEX-mediated growth arrest. Selective estrogen receptor modulators, such as tamoxifen (57), and aromatase inhibitors that block estrogen synthesis (58) are currently used for the clinical management of ER-positive breast cancer. However, there is great variation among patients in both the therapeutic efficacy and the side effects of these drugs (59). It has been reported that long-term selective estrogen receptor modulator therapy leads to the development of acquired resistance (60), and serious systemic side effects have been noted for the aromatase inhibitors (61).
Our study suggests an alternative therapeutic approach for managing such cases. It demonstrates that suppression of estrogen-induced PP5 enhances the efficacy of corticosteroids and allows GR-mediated tumor growth arrest in the presence of estrogen. In A549 cells, a lung epithelial cell line that is responsive to DEX-mediated growth arrest, it was shown that inhibition of endogenous PP5 by ISIS 15534 inhibits cell growth via the p53 pathway and enhances GR transcriptional activity (44). It was determined that PP5 inhibits p53 phosphorylation at Ser-15 and suppresses p53 activity in these cells (44, 62). Conversely, glucocorticoids induce the same Ser-15 p53 phosphorylation, which induces p21 expression and thereby mediates G1 growth arrest (62). In this study we tested whether PP5 suppression would relieve the estrogen-mediated suppression of GR function in an estrogen-responsive breast cancer model. We found that PP5 knockdown enhances GR function in breast cancer cells because of restitution of ligand-mediated Ser-211 GR phosphorylation. GR is well characterized to include three major functional domains: an N-terminal domain, a DNA-binding domain, and a ligand-binding domain. The N-terminal domain contains a transcriptional activation region (AF1) required for maximal transcriptional activity of GR. The AF1 of human GR has three residues (Ser-203, Ser-211, and Ser-226) that can be phosphorylated and affect GR function (for review see Ref. 63). It has been reported that Ser-203-phosphorylated GR is confined to the cytoplasm and the perinuclear region; that Ser-226-phosphorylated GR inhibits transcription; and that Ser-211 phosphorylation of GR is strictly agonist-dependent, localized to the nucleus, and strongly correlated with GR transcriptional activation (for review see Ref. 40). Inhibition of GR phosphorylation at Ser-211 is associated with decreased nuclear retention of GR and decreased gene transcription. Some GR-regulated gene promoters were found to be extremely sensitive to GR phosphorylation at Ser-211, as shown by inhibition of DEX-mediated gene transcription when S211A GR mutants were overexpressed in cells. It was suggested that Ser-211 phosphorylation promotes a GR conformational change that facilitates GR interaction with the coactivator MED14. MED14-dependent GR-regulated targets were found to be the most reliant on GR phosphorylation at Ser-211. Overexpression of the S226A mutant mainly enhanced DEX-mediated gene transcription as compared with wild-type GR (64). In our study, we observed that glucocorticoids strongly induced GR phosphorylation at Ser-211 within 1 h in MCF-7 cells, and this phosphorylation was significantly inhibited by estrogen. This is a critical result that supports our observation that estrogen inhibits glucocorticoid-induced GR binding to the GRE and induction of MKP-1 and SGK. Furthermore, our PP5 knockdown experiments showed that estrogen-induced PP5 dephosphorylates the glucocorticoid-induced Ser-211 phosphorylation of GR, resulting in reduced DEX-induced Ser-211 GR phosphorylation in estrogen-treated cells. Data from the osteosarcoma cell line U2OS provided evidence that glucocorticoids induce both Ser-211 and Ser-226 phosphorylation, with higher phosphorylation at Ser-211 relative to Ser-226; this correlated with GR nuclear localization and greater transcriptional activity (64).
However, we did not see any Ser-226 phosphorylation in the MCF-7 cell line, which indicates that GR phosphorylation may be cell type-specific.
[Figure 11 legend: Proposed mechanism of cross-talk between estrogen and glucocorticoids. Upon ligand binding, GR accumulates in the cell nucleus and is highly phosphorylated at Ser-211. The phosphorylated GR is transcriptionally active and binds as a homodimer to a specific palindromic DNA sequence, termed a GRE, located in the regulatory regions of target genes such as MKP-1 and SGK. The induction of the target genes mediates cell growth arrest. Estrogen induces the expression of PP5, which binds to Ser-211-phosphorylated GR and dephosphorylates it, dampening its ability to bind the GRE. Thus, the expression of glucocorticoid-inducible genes is inhibited. This supports estrogen-mediated cell proliferation.]
Because of the lack of a commercially available chromatin immunoprecipitation-grade antibody against Ser-211-phosphorylated GR, we were unable to estimate Ser-211-phosphorylated GR binding to the SGK promoter. The data presented here with respect to Ser-211 phosphorylation mainly serve as an indicator that changes in GR phosphorylation are important in estrogen-glucocorticoid cross-talk. However, the data do not exclude a possible contribution of other GR phosphorylation sites to this process. In addition to the breast cancer literature that described the role of PP5 in estrogen-mediated cell growth, basic molecular biology studies demonstrated the association of PP5 with the ligand-binding domain of GR (an interaction with hsp90 via the tetratricopeptide repeat domain of PP5) (65). Furthermore, it was demonstrated that upon ligand binding PP5 dissociates from the GR (65). Garabedian and co-workers (27) have shown that, in the absence of ligand, PP5 dephosphorylates GR in a U2OS osteosarcoma cell line that was designed to overexpress wild-type human GR. PP5 siRNA experiments in these cells demonstrated somewhat enhanced DEX-induced Ser-203, Ser-211, and Ser-226 GR phosphorylation (27). The induction of several GR targets via transactivation (IRF8, ladinin, and IGFBP-1, but not GILZ) was inhibited upon PP5 silencing, suggesting that PP5 modification of GR phosphorylation has selective effects on GR target gene induction (27). In this study we demonstrate that estrogen-induced PP5 dephosphorylates DEX-induced GR phosphorylation at Ser-211 and reduces GR transcriptional activity, and that inhibition of estrogen-induced PP5 restores GR function. Previous publications indicate that estrogen can also inhibit glucocorticoid action by lowering the GR level (66, 67). In our study we did not see a change in GR level over 24 h (data not shown), although GR phosphorylation was affected. This suggests that estrogen inhibits GR action through more than one pathway. The accumulated body of literature from several fields suggests that PP5 and GR actions are naturally in fine balance, and different scenarios can unfold when this balance is disturbed. This study demonstrates for the first time the following: 1) when PP5 is overproduced as a result of estrogen stimulation in breast cancer cells, it decreases DEX-induced Ser-211 phosphorylation of the endogenous GR, which inhibits DEX-mediated induction of MKP-1 and SGK; 2) inhibition of PP5 induction by estrogen restores DEX-induced Ser-211 GR phosphorylation and MKP-1/SGK induction; and 3) suppression of estrogen-induced PP5 in breast cancer cell lines restores DEX-mediated growth arrest.
Our study therefore demonstrates a cross-talk between estrogen-induced PP5 and GR action (depicted in Fig. 11) that is potentially relevant to human disease, not only for the treatment of breast cancer but also for opening new directions in exploring gender differences in the response to corticosteroid therapy.
Branes are Waves and Monopoles
In a recent paper it was shown that fundamental strings are null waves in Double Field Theory. Similarly, membranes are waves in exceptional extended geometry. Here the story is continued by showing how various branes are Kaluza-Klein monopoles of these higher-dimensional theories. Examining the specific case of the E7 exceptional extended geometry, we see that all branes are both waves and monopoles. Along the way we discuss the O(d,d) transformation of localized brane solutions not associated to an isometry, and how true T-duality emerges in Double Field Theory when the background possesses isometries. We will now adopt a rather simplistic approach which begins with the question: is there a lift of supergravity to a higher-dimensional theory where the p-form potentials are "geometric", just as the graviphoton is in conventional Kaluza-Klein theory? If one only considers the NS-NS sector of ten-dimensional supergravity, where there is only the Kalb-Ramond two-form potential, then the answer to this question is Double Field Theory. Finally, we can also have the fivebrane as a null wave. Thus in the exceptional geometry case, the membrane and fivebrane solutions of eleven-dimensional supergravity may be identified as either a wave- or a monopole-like solution of the extended theory. On further reflection, this had to be the case, since the whole point of the exceptional extended geometry is to make U-duality a manifest symmetry of the theory. S-duality is clearly a part of the U-duality group. S-duality swaps "electric" and "magnetic" solutions, which in terms of geometry means exchanging null wave solutions with those of non-vanishing first Chern class. This is a non-trivial duality, since it relates solutions with different topology. The story of this paper is similar to what happens in the six-dimensional (0,2) theory associated with the M-theory fivebrane. The (0,2) theory is self-dual in six dimensions, and under dimensional reduction on a torus this self-duality results in the hidden duality symmetry of the lower-dimensional theory, such as the S-duality of four-dimensional N = 4 super Yang-Mills [57,58]. The relevant solution of the six-dimensional theory is the self-dual string. It is only in how one identifies the wrapped self-dual string with states in the four-dimensional theory that the hidden duality symmetry emerges. Just like the (0,2) theory, the exceptional extended geometry describes a theory in which the duality group is a manifest symmetry. As such, it is only through the reduction to the lower-dimensional theory that one actually produces a hidden duality. What is novel is that this is a gravitational theory, as opposed to the field theory examples that have been studied so far, and the duality group goes beyond the SL(2) corresponding to large diffeomorphisms of the torus. Yet the principle is the same. In general we expect all solutions related under U-duality to be a single solution in the extended geometry. Let us start by describing the monopole in DFT and using this to extract the NS5-brane. We will then show how the M-theory fivebrane may be described in the exceptional extended geometry associated to E7, first as a null wave and then as a monopole solution. We will then also show how the membrane can be produced as both wave and monopole solutions. Finally we comment further on the implications. The Monopole in DFT
In what follows it will be useful to introduce coordinates (x^µ, x̃_µ) for Double Field Theory.
We will call the coordinates associated to our usual notion of spacetime x^µ and the winding or dual coordinates x̃_µ. It is the presence of the O(d,d) structure η that allows this split into (x^µ, x̃_µ) coordinates, since η produces a natural pairing between coordinates. (For the reader familiar with the symplectic geometry of classical mechanics, η is very much like a symplectic form and may be used to define a polarization, which is essentially what one does when applying the section condition or, equivalently, picking a duality frame.) The action and equations of motion of DFT are concisely written in Appendix A for easy referral. In [51] a null wave in the doubled space of DFT was shown to reduce to a pp-wave or a fundamental string when viewed from the ordinary supergravity point of view. The interpretation of the solution in terms of the normal supergravity theory associated to the reduction of DFT was determined by the direction the null wave was travelling in. If the DFT solution carries momentum in a spacetime direction x, it reduces to a wave. But if it carries momentum in a dual (winding) direction x̃, it gives the string, whose mass and charge are determined by the momentum in that dual direction. Instead of the wave we will now consider the Kaluza-Klein monopole solution. In general relativity this monopole solution is based on the four-dimensional Euclidean Taub-NUT geometry [48,49], where H is a harmonic function and A_i a vector potential with i = 1, 2, 3. If this solution is supplemented by some trivial worldvolume directions, it can be turned into something known as a KK-brane, the KK-monopole being a KK0-brane. The low-energy limit of M-theory is eleven-dimensional supergravity. Thus, embedding the (four-dimensional) Taub-NUT solution requires adding seven trivial dimensions (one of which is timelike), which produces a KK6-brane solution with H and A_i as above. (From the point of view of Type IIA supergravity, which is the theory that emerges upon Kaluza-Klein reduction in the z direction, this is the Type IIA D6-brane.) All of this is part of the usual supergravity story relating solutions of eleven-dimensional supergravity to those of the Type IIA theory [50]. Now let us consider a Taub-NUT-type solution in Double Field Theory, which we also call the DFT monopole. Appendix A shows that this solution satisfies the DFT equations of motion. The solution is described by the generalized metric H_MN, which may be given in terms of a line element, and by the rescaled dilaton of DFT (defined as e^{-2d} = g^{1/2} e^{-2φ}), with φ_0 a constant entering its expression. The generalized coordinates, with M = 1, . . . , 20, comprise (z, y^i, x^a) together with their duals (z̃, ỹ_i, x̃_a), where i = 1, 2, 3 and a = 1, . . . , 6. The last part of the line element uses the Minkowski metric η_ab, i.e. x^1 = t and x̃^1 = t̃ are timelike; our signature is mostly plus. Here H is a harmonic function of the y^i only; it is annihilated (up to delta-function sources) by the Laplacian in the y-directions and takes the form H = 1 + h/r with r² = δ_ij y^i y^j, where h is an arbitrary constant related to the NUT charge. The vector A_i also obeys the Laplace equation, is divergence-free, and its curl is given by the gradient of H, i.e. ∇ × A = ∇H. This doubled solution is to be interpreted as a KK-brane of DFT. It can be rewritten to extract the spacetime metric g_µν and the Kalb-Ramond two-form B_µν in ordinary spacetime with coordinates x^µ = (z, y^i, x^a). We will show explicitly that the "reduced" solution is in fact an infinite periodic array of NS5-branes smeared along the z direction.
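The displayed Taub-NUT and KK6-brane metrics referred to above did not survive extraction; their standard forms, consistent with the surrounding description but reconstructed rather than quoted, are
\[
ds^2_{\mathrm{TN}} = H\,\delta_{ij}\,dy^i dy^j + H^{-1}\left(dz + A_i\,dy^i\right)^2,
\qquad
\vec{\nabla}\times\vec{A} = \vec{\nabla}H,
\]
with the KK6-brane obtained by adding seven flat worldvolume directions,
\[
ds^2_{11} = \eta_{ab}\,dx^a dx^b + H\,\delta_{ij}\,dy^i dy^j + H^{-1}\left(dz + A_i\,dy^i\right)^2 .
\]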
One can also show that if z̃ is treated as a normal coordinate and z as a dual coordinate, the reduced solution is the string theory monopole introduced above. This means the (smeared) NS5-brane is the same as a KK-monopole with the KK-circle in a dual (winding) direction. Rewriting the Solution
We will now use the form of the doubled metric H_MN in terms of g_µν and B_µν to rewrite the solution (2.3) in terms of ten-dimensional non-doubled quantities. This is as in Kaluza-Klein theory, where one writes a solution of the full theory in terms of the reduced metric and vector potential (2.8). By comparing (2.8) with (2.3), the reduced fields can be computed: the spacetime metric g_µν and the non-vanishing components of the B-field B_µν follow directly. The determinant of this metric is −H^4, and the string theory dilaton becomes e^{2φ} = H e^{2φ_0}. This is the NS5-brane solution of string theory [59]; more precisely, it is the NS5-brane smeared along the z direction. Usually the harmonic function of the NS5-brane depends on all four transverse directions, that is, y^i and z. By smearing it over the z direction the brane is no longer localized in z, and so the z-dependence is removed from the harmonic function. Smearing the solution along z also has consequences for the field strength H_µνρ. The NS5-brane comes with an H-flux whose only non-zero components lie in the transverse directions y^i and z = y^4, labelled by m = 1, . . . , 4. We then note that the non-trivial part of the metric is g_mn = H δ_mn, so that g = det g_mn = H^4. This allows the field strength to be written as H_{mnp} = ε̃_{mnpq} ∂_q H, where the epsilon tensor has been converted to the permutation symbol ε̃ (a tensor density) in order to make contact with the epsilon in a lower dimension. If the solution is smeared along z, H no longer depends on this coordinate. Therefore H_{ijk} = 0, and the surviving components reduce to H_{ijz} = ε̃_{ijk} ∂_k H = ∂_i A_j − ∂_j A_i. Thus the only non-zero component of the B-field (up to a gauge choice) of the smeared NS5-brane is B_{iz} = A_i. This shows that the flux of the smeared NS5-brane is just the same as the usual magnetic two-form flux of a magnetic monopole in electromagnetism. In conclusion, the smeared NS5-brane solution (2.9) can be extracted from the DFT monopole (2.3) using (2.8). If z and z̃ are exchanged, the same procedure recovers the KK-monopole of string theory. Since the monopole and the NS5-brane are T-dual to each other in string theory, and DFT makes T-duality manifest, this should not come as a surprise. In order to identify the NS5-brane with the KK-monopole, it needed to be smeared along the z direction. Any monopole-type solution is expected to need more than a single patch to describe it (in fact, the topological charge may be viewed as the obstruction to a global description). In [60] the problems of constructing a full global solution containing NS-NS magnetic flux, with patching between different local descriptions in DFT, are discussed in detail. So have we resolved those issues here? Not really: in the case described above, because of the additional isometry in the transverse directions, the three-form flux is completely encoded in a two-form flux. (This is nontrivial and can be constructed in the usual way, à la Dirac.) In other words, because of the additional isometry H^(3) = F^(2) ∧ dz, so that although the H^(3) flux is an element of the third cohomology, it is really completely determined by the second cohomology, of which F^(2) is a nontrivial representative.
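For reference, the KK-type ansatz (2.8) and the reduced NS5-brane fields (2.9) discussed above take the following standard form; this is a reconstruction consistent with the surrounding text (determinant −H^4, dilaton e^{2φ} = H e^{2φ_0}), not necessarily the paper's exact display:
\[
\mathcal{H}_{MN} =
\begin{pmatrix}
g_{\mu\nu} - B_{\mu\rho}\,g^{\rho\sigma}B_{\sigma\nu} & B_{\mu\rho}\,g^{\rho\nu} \\
-\,g^{\mu\rho}B_{\rho\nu} & g^{\mu\nu}
\end{pmatrix},
\qquad e^{-2d} = g^{1/2}\, e^{-2\phi},
\]
and, for the smeared NS5-brane,
\[
ds^2 = \eta_{ab}\,dx^a dx^b + H\left(\delta_{ij}\,dy^i dy^j + dz^2\right),
\qquad B_{iz} = A_i, \qquad e^{2\phi} = H\,e^{2\phi_0}.
\]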
One can now ask whether it is possible to localize the monopole and remove this additional smearing. We will look at this next. The Localized Monopole Solution
One can construct a solution which is not smeared but localized in the z direction. The harmonic function H then depends explicitly on z,
H(r, z) = 1 + h/(r² + z²),   (2.14)
and the field strength H_µνρ of the NS5-brane in (2.11) has two kinds of non-zero components. The first can be expressed in terms of the magnetic potential A_i as before in the smeared case. The second is new, as the ∂_z derivative now no longer vanishes. The localized monopole solution of DFT then takes the form (2.16), in which extra terms for dy² and dy^i dỹ_j involving B_ij arise as compared to (2.3). Upon rewriting this solution using the ansatz (2.8), one obtains the localized NS5-brane with its full field strength. If we instead carry out the simple operation of swapping the roles of z and z̃ in the reduction, the result is the KK-monopole. The spacetime coordinates in this duality frame now include z̃; crucially, though, the harmonic function H still depends on z, which is a dual coordinate in this frame. One thus concludes that this is the monopole localized in the dual winding space. This property is discussed in detail in [52]. This is exactly the same result as blindly applying the Buscher rules to the localized NS5-brane along the z direction: it produces the monopole (which is indeed the T-dual of the NS5-brane), but the solution is localized in the dual winding directions. The alert reader will be aware that one should not be allowed to use the Buscher rules to carry out a T-duality in the z direction when the NS5-brane is localized, since z is then not an isometry of the solution. Here we have a very clear example of how Double Field Theory differs from a theory with merely manifest T-duality. Double Field Theory makes no assumptions about the existence of isometries. The O(d,d) symmetry in DFT is a local continuous symmetry that is applicable to any background. This perspective is discussed in [61] amongst other places, most recently in [62]. The usual spacetime manifold is defined by picking out a maximally isotropic subspace of the doubled space. Normally this is done by solving the section condition or strong constraint, which removes the dependence of the fields on half of the coordinates; we then identify the remaining coordinates with the coordinates of spacetime. The DFT monopole is a single DFT solution which obeys the section condition; how we identify spacetime is essentially a choice of duality frame. When the half-dimensional subspace which we call spacetime matches that of the reduction through the section condition, we have a normal supergravity solution, which in the case described above is the NS5-brane. Alternatively, one can pick the identification of spacetime not to be determined by the section condition; this gives an alternative duality frame. Generically this will not have a supergravity description, even though it is part of a good DFT solution. This is precisely the case described in this section. There is a localization in winding space, and so this solution cannot be described through supergravity alone, even though it may be a good string background. In DFT it is described simply by picking a spacetime submanifold that is not determined by the solution of the section condition. With this in mind, we come to the following conclusion.
There are two different DFT solutions of the form (2.16), one with H(r, z) and the other with H(r, z̃) as harmonic function.

Table 1: Both DFT solutions are of the form (2.16) but with different coordinate dependencies in the harmonic function. Each solution can be viewed in two different duality frames. In frame A the z coordinate is a spacetime coordinate while z̃ is a dual winding coordinate. In frame B it is the other way round: z is a dual winding coordinate while z̃ is a spacetime coordinate. The solutions extracted from the DFT solutions that are localized in spacetime have good supergravity descriptions, while those that are localized in winding space do not.

duality frame | DFT solution with H = H(r, z)          | DFT solution with H = H(r, z̃)
A             | NS5-brane localized in spacetime       | NS5-brane localized in winding space
B             | KK-monopole localized in winding space | KK-monopole localized in spacetime

Here by z and z̃ we do not mean spacetime and winding coordinates a priori, but simply the coordinates as they appear in (2.16). For each of these two DFT solutions there is a choice of duality frames, which are of course related by O(d,d) rotations. In one frame, for clarity call it frame A, z is a spacetime coordinate and z̃ is a dual winding coordinate. In another frame, say frame B, the roles of z and z̃ are exchanged, i.e. z̃ is a spacetime coordinate and z is dual. See Table 1 for an overview. In the case where H is a function of z, the DFT solution rewritten in duality frame A is the NS5-brane localized in spacetime. Its T-dual, found by going to frame B, is the KK-monopole localized in winding space, which has no supergravity description, as explained above. In the other case, where H is a function of z̃, the DFT solution rewritten in frame B gives the KK-monopole localized in spacetime, while frame A gives the NS5-brane localized in winding space. Again this is a solution with no supergravity description, but valid from a string theory point of view. What then is T-duality? When there is a spacetime isometry, there is indeed an ambiguity in how one identifies the spacetime within the doubled space. The presence of the isometry means there are no unwanted dependences on dual coordinates when picking different duality frames, and so supergravity is a good description for both choices. Thus, from the DFT perspective, traditional T-duality arises from an ambiguity in how one defines the half-dimensional subspace corresponding to a good supergravity solution. In [63] and more recently in related works by Harvey and Jensen [52,64] and Kimura [53-56,65,66], a gauged linear sigma model was used to describe the NS5-brane and related solutions. By "related solutions" we mean the KK-monopole and in fact also the exotic 5^2_2 brane [67,68]. These are all solutions in the same O(d,d) duality orbit. The advantage of the gauged linear sigma model description is that one may examine the inclusion of worldsheet instanton effects. As first shown in [63], the inclusion of such worldsheet instantons gives rise exactly to the localization in dual winding space described above. Thus, in some sense, DFT knows about worldsheet instantons. In terms of the topological questions raised by [60], the localized solution (which does not have the additional isometry) requires an appropriate patching to form a globally defined solution. For this paper we therefore restrict ourselves to giving only descriptions in a local patch.
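The Buscher rules invoked above, for a T-duality along z, take the standard form (quoted from the general literature rather than from this paper's display):
\[
\tilde g_{zz} = \frac{1}{g_{zz}},\qquad
\tilde g_{z\mu} = \frac{B_{z\mu}}{g_{zz}},\qquad
\tilde B_{z\mu} = \frac{g_{z\mu}}{g_{zz}},\qquad
\tilde \phi = \phi - \tfrac{1}{2}\ln g_{zz},
\]
\[
\tilde g_{\mu\nu} = g_{\mu\nu} - \frac{g_{z\mu}g_{z\nu} - B_{z\mu}B_{z\nu}}{g_{zz}},\qquad
\tilde B_{\mu\nu} = B_{\mu\nu} - \frac{g_{z\mu}B_{z\nu} - B_{z\mu}g_{z\nu}}{g_{zz}} .
\]
Applied blindly to the localized NS5-brane, these rules leave the z-dependence of H(r, z) untouched, which is precisely why the resulting monopole retains a dependence on what has become a winding coordinate.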
What is hopeful is that the solution described here has a very specific topology of the dual space, since it is itself a monopole. It is hoped to carry out a detailed analysis of the global properties in the future. The Exceptional Case E7
There are similar constructions to DFT for the U-duality groups of M-theory. In this paper we will work with the E7 group. For more on this, see [33]. The approach described in [33] is in fact a truncated version of the full theory. Recently, through an excellent series of works, the full non-truncated theory, which goes by the name Exceptional Field Theory, has been developed by Hohm and Samtleben [69-73]. We will not deal with this non-truncated version of the theory in this paper, but we hope to investigate properties of solutions of the Hohm-Samtleben theory in the future [74]. The E7 Exceptional Extended Geometry
We consider the case where the eleven-dimensional theory is a direct product M_4 × M_7; the U-duality group acting on the seven-dimensional space M_7 is E7. We will truncate the theory to ignore all dependence on the M_4 directions and will not allow any excitations of fields with mixed indices, such as the graviphoton. The exceptional extended geometry is constructed by combining the seven spacetime dimensions with the wrapping directions of the M2-brane, M5-brane and D6-brane to form a 56-dimensional extended space (its tangent space is displayed below). Details of this construction and the resulting theory are described in [26] and [24, 25, 27-29, 33, 34]. The algebra is E7 ⊗ GL(4), with the E7 acting along the seven spacetime dimensions of the extended space. The generators of the associated motion group carry indices µ = 1, . . . , 7 and α = 1, . . . , 4. The first four generate the 56 representation of E7, and the last one generates translations in the remaining four directions, the GL(4). For convenience, a dualization of the generators is used. For the E7 generators we can now introduce generalized coordinates to form the extended 56-dimensional space. Note that an index pair µν is antisymmetric, and we thus indeed have 7 + 21 + 21 + 7 = 56 coordinates. The generalized metric M_MN of this extended space can be constructed from the vielbein given in [24-26, 28, 29, 33]. The full expression is quite unwieldy, so we will introduce it in several steps. The underlying structure of M_MN can be seen clearly if the M-theory potentials C_3 and C_6 are turned off. Then the only field present is the spacetime metric g_µν, and the line element of the extended space takes the form (3.5). Here the determinant of the spacetime metric is denoted by g = det g_µν, and the four-index objects are defined by g_{µν,ρσ} = ½(g_µρ g_νσ − g_µσ g_νρ), and similarly for the inverse. The generalized metric has a scaling symmetry and can be rescaled by a power of its determinant, which in turn is just a power of g. The bare metric, i.e. without the factor of g^{-1/2} up front, has det M_MN = g^{-28}. One could choose to rescale by including a factor of g^{1/2}, which would lead to det M_MN = 1, an often useful and desirable property. Here the factor g^{-1/2} is included. It arises completely naturally from the E11 programme, see [33], and interestingly gives solutions in the Einstein frame when rewritten by a KK-ansatz (i.e. no further rescaling is necessary). If the gauge potentials are non-zero, there are additional terms for the "diagonal" entries of (3.5) and also "cross-terms" mixing the different types of coordinates.
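The displayed tangent space and coordinate decomposition referred to above were lost in extraction; under GL(7) the 56 of E7 decomposes in the standard way, consistent with the 7 + 21 + 21 + 7 counting in the text (a reconstruction, not the paper's exact display):
\[
T M_7 \,\oplus\, \Lambda^2 T^* M_7 \,\oplus\, \Lambda^5 T^* M_7 \,\oplus\, \left(T^* M_7 \otimes \Lambda^7 T^* M_7\right),
\qquad
\mathbf{56} \to \mathbf{7} + \mathbf{21} + \mathbf{21}' + \mathbf{7}',
\]
with generalized coordinates
\[
X^M = \left(X^\mu,\; Y^{\mu\nu},\; Z^{\mu\nu},\; W^\mu\right),
\]
corresponding to momentum, M2-wrapping, M5-wrapping and KK6/D6-wrapping directions, respectively.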
For what follows we will not need the full generalized metric with both potentials present at the same time. We will just need the two special cases where either the C_3 potential or the C_6 potential vanishes. In the first case, with no three-form, the six-form is dualized and encoded in a vector U^µ (3.6), which allows the line element to be written in the form (3.7). In the second case, with no six-form, the three-form components are encoded in objects C, V and X (see [33]); we will concentrate on the special case (3.8), for which the line element of the generalized metric takes the form (3.9). The action for the E7 theory can be constructed as in [24-26, 28, 29, 33]. One should remember, though, that when deriving the equations of motion through the variation of the action, it is necessary that the generalized metric remains in the E7/SU(8) coset. The variation is thus subject to a constraint, which has the effect of introducing a projector on the naive equations of motion. This set of projected equations of motion was first worked out for DFT in [11] and for the SL(5) exceptional case in [51], along with the general formula for the exceptional cases. A solution of the exceptional extended geometry thus has to satisfy the projected equation (3.10), where P is the projector of the E7 theory and K is the variation of the action with respect to the generalized metric M. (The indices run from 1 to 56 and appear in symmetric pairs.) Before we go on to construct and discuss specific solutions of the E7 theory, let us briefly recall some classic M-theory solutions. This allows us to present our conventions and clarify the notation. Classic Supergravity Solutions
In eleven-dimensional supergravity there are four classic solutions: the wave, the membrane, the fivebrane and the monopole. They are all related by T- and S-duality, and upon reduction on a circle they give rise to the spectrum of string theory solutions in ten dimensions. Here we will briefly present these four solutions in terms of the bosonic fields C_3, C_6 and g, which in turn are given in terms of a harmonic function H. To allow for easy comparison, the solutions are all expressed in the same coordinate system, even if it is not the most natural one for each solution. The coordinates we choose comprise one time direction t, one "special" direction z, six directions x_(6) = x^a and three directions y_(3) = y^i, for a total of eleven dimensions. The reason for this notation will become apparent soon. The order of these coordinates is important for the extended coordinates with an antisymmetric pair of indices, since for example Y^{tz} = −Y^{zt}. It is fixed by defining the permutation symbol ε_{t x¹ x² x³ x⁴ x⁵ x⁶ y¹ y² y³ z} = +1. This order will be kept also after reductions, when some of the coordinates drop out. Let us start with the "pure gravity" solutions, the pp-wave and the KK-monopole. They do not come with a gauge potential and are given purely in terms of the metric. The pp-wave consists of parallel rays carrying momentum in the z direction, with transverse plane wavefronts spanned by x^a and y^i in the above coordinates. The wave solution then takes the form (3.11), with h a constant proportional to the momentum carried. The KK-monopole or KK6-brane solution was already introduced in Section 2. Whereas the momentum of the wave solution can be seen as gravito-static charge, the monopole carries topological or gravito-magnetic charge, hence the name "monopole".
This solution is expressed in terms of a vector potential A_i, which is related to the harmonic function as before, see equation (2.7). For the monopole, the z direction needs to be compact and will be referred to as the "KK-circle". The x^a form the worldvolume of the KK6-brane, leaving the y^i transverse. For completeness, the monopole solution is restated in full in (3.12); again h is a constant, here proportional to the first Chern class. Now turn to the extended solutions, the M2-brane and the M5-brane. These branes naturally couple to the C_3 and C_6 gauge potentials, respectively; this can be seen as the natural electric coupling. For both branes the worldvolume is spanned by t and some of the x^a, while the remaining x's, y^i and z are transverse to it. The harmonic function H is in each case a function of the transverse directions. The membrane solution is given in (3.13) and the fivebrane solution in (3.14); in both cases both the electric and magnetic potentials are shown (their standard harmonic-function forms are recalled below). The latter can be found by dualizing the corresponding field strengths: the field strength of the electric potential is proportional to F ∼ ∂H^{-1} ∼ ∂H, which is dualized into F̃ ∼ ε ∂H ∼ ∂A, where we use (2.7) to relate H and A. Therefore the vector potential A_i appears in the components of the magnetic potentials. The four solutions recapped above are all related to each other by M-theory dualities. The wave and the membrane are T-dual to each other, in the same way that the wave and the fundamental string are related by T-duality in string theory. Similarly the monopole and the fivebrane are T-duals, again as for the monopole and the NS5-brane in string theory (cf. Section 2). Furthermore, the membrane and fivebrane are related by S-duality: they are electromagnetic duals of each other. To complete the picture, there is an S-duality relation between the wave and the monopole. We will discuss this further towards the end of this paper. Table 2 illustrates the character of each of the eleven dimensions for each of the four solutions. If these classic solutions are carried over from eleven-dimensional supergravity to the extended E7 theory, the underlying spacetime has to be reduced from eleven to seven dimensions in order to build the 56-dimensional extended space. There are various ways of picking the seven and the four out of the eleven, as will be explained below. Note that in order to keep the notation simple we will use the following convention: if the directions x³, x⁴ and x⁵ are reduced, we still use x^a with a = 1, 2 for the first two x's, or alternatively label them as x¹ = u and x² = v. Similarly we use x⁶ = w where necessary. The M2- and M5-brane as a Wave in Exceptional Extended Geometry
In [51] it was not only shown how the wave in DFT gives rise to the fundamental string, but also that a null wave in the SL(5) extended theory reduces to the membrane in ordinary spacetime. The same is true for the E7 extended theory: a null wave propagating along a membrane wrapping direction gives rise to the M2-brane. Furthermore, due to the larger extended space, it is now also possible to consider a wave travelling in a fivebrane wrapping direction. Unsurprisingly, this reduces to the M5-brane in ordinary spacetime. We will demonstrate this explicitly and for completeness reproduce the membrane result. In DFT, the section condition is easily solved by reducing the coordinate dependence to half the doubled space.
Thus each pair of solutions related by an O(d,d) transformation, such as the wave and the string or the monopole and the fivebrane, can be presented in a straightforward fashion. In contrast, in the exceptional extended geometry the solutions to the section condition are more complex, since a much larger extended space has to be dealt with. In the case of E7, the section condition takes one from 56 down to seven dimensions. We thus present the solutions step by step and relate them "by hand", rather than constructing the different solutions to the section condition explicitly. Consider the following solution of the extended E7 theory, built from a seven-dimensional spacetime with coordinates X^µ = (t, x^m, z) → X^M with m = 1, . . . , 5; i.e. in the coordinate system above we reduce on x³, x⁴, x⁵ and x⁶ and collect the remaining transverse directions x¹, x² and y^i into x^m. The generalized metric is given by the line element (3.15), whose transverse part reads
δ_mn dX^m dX^n + δ_{mn,kl} dY^{mn} dY^{kl} − δ_{mn,kl} dZ^{mn} dZ^{kl} − δ_mn dW^m dW^n .   (3.15)
This is a massless, uncharged null wave carrying momentum in the X^z = z direction. If the wave is rotated to travel in a different direction, the momentum it carries becomes the mass and charge of an extended object in the reduced picture. The different M-theory solutions obtained upon a KK-reduction of the extended wave solution pointing in various directions are summarized in Table 3, which lists the direction of propagation against the resulting solution: the wave in the exceptional extended geometry can propagate along any of the extended directions, giving the various classic solutions when seen from a supergravity perspective. The rotation that points the wave in the Z^{tz} direction is achieved by the swap of coordinate pairs (3.16) in the above solution. The rotated wave solution can now be rewritten using a KK-ansatz based on the line element given in (3.7) to remove the extra dimensions. This gives the M5-brane solution (3.14) reduced to seven dimensions and smeared over the reduced directions, (3.17). The details of this calculation can be found in Appendix B.1. It can also be shown that the wave in the E7 extended theory pointing along one of the Y-directions gives the membrane from the reduced point of view. The key steps of this calculation are as follows. Start by splitting the transverse coordinates x^m into x^a and y^i with a = 1, 2 and i = 1, 2, 3 as before, so that the extended space is given by X^µ = (t, x^a, y^i, z) → X^M. Then the wave can be rotated to point in the Y^{x¹x²} direction. This is achieved by a mapping of coordinate pairs, leaving the remaining coordinates unaltered; the extended solution (3.15) then takes the correspondingly rotated form (recall that x¹ = u and x² = v). The KK-ansatz used to reduce this metric is based on the line element given in (3.9); it will be used again later in the monopole section, equation (B.16). The procedure is the same as in the reduction calculation that yielded the fivebrane, and it gives the M2-brane solution reduced to seven dimensions (with the harmonic function smeared accordingly). Hence both the M2 and the M5 can be obtained from the same wave solution in the exceptional extended geometry, and all branes in M-theory are just momentum modes of a null wave in the extended theory. The direction of the wave determines the type of brane (from the reduced perspective) or indeed gives a normal spacetime wave solution. From this point of view, the duality transformations between the various solutions are just rotations in the extended space.
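For orientation, the classic membrane and fivebrane backgrounds (3.13) and (3.14) referred to above, whose displayed forms were lost, read in standard harmonic-function form (up to the smearing conventions of the text):
\[
ds^2_{M2} = H^{-2/3}\left(-dt^2 + dx_1^2 + dx_2^2\right) + H^{1/3}\,ds^2(\mathbb{E}^8),
\qquad C_{t x^1 x^2} = H^{-1},
\]
\[
ds^2_{M5} = H^{-1/3}\left(-dt^2 + \sum_{a=1}^{5} dx_a^2\right) + H^{2/3}\,ds^2(\mathbb{E}^5),
\qquad C_{t x^1 \cdots x^5} = H^{-1},
\]
with H harmonic in the respective transverse directions, and the magnetic potentials obtained by dualizing as described above.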
The M5-brane as a Monopole in Exceptional Extended Geometry
In Section 2 we showed that the NS5-brane of string theory is the monopole solution of DFT. In this section we show something similar for the M5-brane in the exceptional extended geometry: if the KK-circle of the monopole in the E7 extended theory is not along a usual spacetime direction but instead along one of the novel Y-directions, the solution reduces to a smeared fivebrane. First, a slightly different extended space has to be constructed. Starting from eleven dimensions and reducing on x³, x⁴, x⁵ and t allows for a construction of the monopole solution in the extended space with coordinates X^µ = (x^a, w, y^i, z) → X^M (where w = x⁶) and potential A_i. The generalized metric is given by (3.21), where the ellipsis denotes the same terms as in the line above, with the obvious cycling through the i index. The harmonic function H depends on the three y's, and the relation between the harmonic function and the vector potential is as given in (2.7). This is a monopole with the KK-circle in the X^z = z direction. As before, the solution may be rotated such that this "special" direction is of a different kind. If the KK-circle is along Y^{wz}, a membrane wrapping direction, the solution reduces to an M5-brane smeared along z. This rotation is achieved by the map (3.22) (recall that x¹ = u and x² = v). Using (3.7) to read off the fields, the exceptional extended geometry monopole reduces to the M5-brane solution, with the fivebrane given in terms of its magnetic potential, i.e. the magnetic C_3 given in (3.14). The full calculation is shown explicitly in Appendix B.2. We have thus demonstrated how a monopole with its KK-circle along a membrane wrapping direction is identified with a (smeared) fivebrane. This is the analogue of the KK-monopole/NS5-brane identification in DFT shown in Section 2. The Situation for the Membrane
In principle the same story should hold for the membrane. In the previous sections the wave was shown to give not only the membrane but also the fivebrane; by the same reasoning, the monopole should give not only the fivebrane but also the membrane. The problem is that this cannot be shown as simply within the truncated E7 theory. To obtain the membrane from the monopole, one has to consider its magnetic potential C_6, given in (3.13). But this six-form has non-zero components with indices C_{izx³x⁴x⁵x⁶}, i.e. in directions which are truncated in order to construct the exceptional extended geometry. More technically, if the electric C_3 of the membrane is dualized in seven dimensions, it gives a two-form. This means that only part of the above six-form lives in the seven-space that gets extended; the remainder lives in the other four directions. Thus it is not possible to describe the membrane this way while staying in the truncated space. This is simply a limitation of the tools at our disposal, i.e. the truncated version of the E7 exceptional field theory. Given all the relations we have established between the solutions in the extended space, it seems natural that a monopole with its KK-circle in a fivebrane wrapping direction gives a membrane. Settling this demands the full non-truncated EFT [71], and we hope to report on it in future work [74]. Discussion and Outlook
This paper has explored the role played by monopole-type solutions in Double Field Theory and its M-theory version, exceptional extended geometry.
We have seen how the KK-monopole in both the doubled and the exceptional extended geometry can be identified with a fivebrane solution (NS5 and M5, respectively) in supergravity. For the DFT monopole, we also examined the localized solutions. The key here is seeing that the O(d,d) symmetry in DFT is not T-duality. T-duality in DFT emerges only when one has sufficient isometries in the solution, something that is certainly in tune with our intuition. Without the additional isometries, the O(d,d)-related solutions do not all have supergravity descriptions, because they have a localization in the dual space. How can we understand the localization in the dual space? It has no supergravity description. From gauged linear sigma models, this has been shown to be the result of worldsheet instanton effects. Rather speculatively, this may indicate that DFT has some knowledge of worldsheet instantons. For the wave- and monopole-like solutions in the exceptional extended geometry, there are numerous directions of further investigation that one may consider. The most pressing is the need to study these solutions in the full non-truncated version of the theory, so-called exceptional field theory, developed by Hohm and Samtleben. This will then allow us to see the relation between the wave- and monopole-like solutions, which are obviously duals of each other. We need to do this in the full theory because the duality requires the Hodge star operation of the full eleven-dimensional spacetime. In other words, the truncated E7 theory uses both C_3 and C_6 and treats them as independent. We know from eleven-dimensional supergravity, though, that there is a duality relation between these potentials, i.e. F_4 = ⋆F_7. This is a crucial aspect of the story and is part of exceptional field theory, but it is not seen in the truncated E7 theory. A further direction building on this work is to examine how black branes fit into the picture in exceptional extended geometries. In particular, it would be good to know how the presence of the additional dimensions of the extended solutions affects the singularity structure and the origin of the black brane. This is reserved for future work. We have seen how a single extended geometry solution may give rise to the membrane and fivebrane of M-theory: the orientation of the extended geometry solution determines the M-theory brane type. One may ask what happens if the orientation of the solution is directed along a linear combination of exceptional directions. It is clear that this may be used to describe M-theory brane bound states or, equivalently, branes with non-trivial background potentials. Such solutions were explored in detail in [75], where they were constructed through a U-duality technique. The NS5-brane in Type IIA has an interesting two-dimensional CFT description in the near-horizon region [76]. It would be interesting to examine this DFT description of the fivebrane from such a two-dimensional CFT point of view (note that the shift in the dilaton in the DFT description allows for different regions of validity as compared to the usual description). Finally, in [51] the dynamics of the Goldstone modes of the DFT wave solution were calculated to give the Tseytlin string. A similar Goldstone mode analysis for these exceptional extended geometry solutions would produce a U-duality covariant worldvolume description of the membrane/fivebrane. The analysis cannot work for the membrane or the fivebrane alone, since they transform into each other under U-duality.
It would be interesting to see exactly what the Goldstone modes are and to describe their dynamics, in order to understand how the extended geometry solutions relate to normal M-theory brane actions. Acknowledgement
DSB is partially supported by the STFC consolidated grant ST/J000469/1 "String Theory, Gauge Theory and Duality" and FJR is supported by an STFC studentship. We wish to thank Paul Townsend for questions that inspired this paper and Martin Cederwall, Jeong-Hyuck Park, Malcolm Perry, Henning Samtleben and David Tong for discussions on various related topics. A Reduction of the DFT Monopole
In this appendix it is demonstrated that the monopole solution of DFT presented in (2.3) satisfies the equations of motion, which can be derived from the action S = ∫ dX e^{-2d} R, where the generalized scalar R takes the standard form
R = 4 H^{MN} ∂_M ∂_N d − ∂_M ∂_N H^{MN} − 4 H^{MN} ∂_M d ∂_N d + 4 ∂_M H^{MN} ∂_N d + (1/8) H^{MN} ∂_M H^{KL} ∂_N H_{KL} − (1/2) H^{MN} ∂_M H^{KL} ∂_K H_{NL} ,
possibly supplemented by terms that vanish upon imposing the section condition; for a detailed presentation of this action and of the scalar R, see [51]. The full equations of motion are given in terms of a projector, to take into account the fact that the generalized metric is constrained to parametrize a coset structure. The equations for H_MN and d are the projected equation P(K) = 0 together with R = 0, where K_MN is the variation of the action with respect to the generalized metric and η is the invariant O(d,d) metric of DFT. Thus one has to compute R and K_MN for the solution (2.3). It is interesting to note the action of the projector here: whereas the general significance of the projector in the equations of motion was pointed out in [51], its presence was not strictly needed to show that the DFT wave is a solution, as all the components of K_MN vanished for it independently (see Appendix A of [51]). In contrast, for the DFT monopole not all components of K_MN are zero, and only once the projector acts are the equations of motion satisfied. This might be due to different properties of the wave and monopole solutions, the former being conformally invariant while the latter is not. B Reduction of the Exceptional Extended Wave and Monopole
In this appendix we fill in the details of how the extended solutions of the E7 duality-invariant theory can be rewritten using a Kaluza-Klein ansatz to obtain solutions in ordinary spacetime. B.1 From Wave to Fivebrane
In Section 3.3 it is explained how the extended wave solution can be rotated to carry momentum along a fivebrane wrapping direction; from an ordinary spacetime point of view this is then the M5-brane solution of supergravity. Here this calculation is presented in detail. After the rotation (3.16), the wave solution (3.15) takes the form (B.1), whose transverse part reads
δ_mn dZ^{tm} dZ^{tn} + δ_{mn,kl} dY^{mn} dY^{kl} − δ_{mn,kl} dZ^{mn} dZ^{kl} − δ_mn dY^{tm} dY^{tn} .   (B.1)
The KK-reduction ansatz (B.2) used to remove the extended dimensions is based on the line element given in (3.7), with scale factors e^{2α}, e^{2β} and e^{2γ} that are initially undetermined. They arise naturally in such a reduction ansatz, which attempts to reduce 49 dimensions at once, and will be fixed by consistency. By comparing (B.2) with (B.1) term by term, one can work out the fields of the reduced solution step by step. The term with dW² gives
e^{2α_3} g^{-3/2} g_zz = −1 ,  e^{2α_3} g^{-3/2} g_tt = H ,  e^{2α_3} g^{-3/2} g_mn = −H δ_mn ,   (B.3)
while the dZ² term gives
e^{2α_2} g^{-3/2} g_{tz,tz} = H ,  e^{2α_2} g^{-3/2} g_{zm,zn} = −H δ_mn ,  e^{2α_2} g^{-3/2} g_{tm,tn} = δ_mn ,  e^{2α_2} g^{-3/2} g_{mn,kl} = −δ_{mn,kl}
Using (B.3), the cross-term dY dW gives an expression for U_µ which encodes the six-form potential. Next consider the dY² term, which gives
e^{2α_1} g^{−1/2} g_{mz,nz} + e^{2γ_2} g^{−1/2} g_{mn} U_z U_z = (2 − H) δ_{mn} ,  e^{2α_1} g^{−1/2} g_{mn,kl} = δ_{mn,kl} ,
e^{2α_1} g^{−1/2} g_{tz,tz} + e^{2γ_1} g^{−1/2} g_{tt} U_z U_z = −(2 − H) ,  e^{2α_1} g^{−1/2} g_{tm,tn} = −δ_{mn} ,
provided the factor e^{2γ_2+2α_3−4β_2} is equal to 1. The penultimate step is to look at the dX dZ term,
e^{2β_1} g^{−1} g_{tt} U_z = (H − 1) ,  e^{2β_1} g^{−1} g_{mn} U_z = −(H − 1) δ_{mn} ,   (B.8)
and the dX² term. These relations can all be combined to determine the two remaining components of the metric, provided that e^{2γ_1+2α_3−2β_1−2β_2} = 1. Collecting all the above results gives (B.11). From its first line the determinant of the spacetime metric can be computed as g = −H^{12/5}, and thus g_{µν} is finally determined. The three objects in the other lines, the inverse metric g^{µν}, g_{µν,ρσ} and g^{µν,ρσ}, are all related to the metric. For this to be consistent, and for the constraints mentioned above to be satisfied, the scale factors e^{2α}, e^{2β} and e^{2γ} are thereby fixed. With this, the factor in front of U_z in (B.5) now also vanishes, and the six-form potential can be worked out from (3.6). Thus the result of reducing the full solution (B.1) down to seven dimensions is given in (B.14), where the harmonic function has to be smeared over the reduced directions. This is precisely the fivebrane solution in seven dimensions, obtained from reducing (3.14) on x^3, x^4, x^5 and x^6 (and smearing H).
B.2 From Monopole to Fivebrane
In Section 3.4 the extended monopole solution with its KK-circle in a membrane wrapping direction was shown to give the fivebrane coupled to its magnetic potential in ordinary spacetime. The details of this calculation are given here. The monopole solution (3.21) is transformed by (3.22) to have its KK-circle along Y^{wz}. The extended line element then reads
10,927
2014-09-22T00:00:00.000
[ "Physics" ]
Machine Learning Classification of Patients with Amnestic Mild Cognitive Impairment and Non-Amnestic Mild Cognitive Impairment from Written Picture Description Tasks Individuals with Mild Cognitive Impairment (MCI), a transitional stage between cognitively healthy aging and dementia, are characterized by subtle neurocognitive changes. Clinically, they can be grouped into two main variants, namely patients with amnestic MCI (aMCI) and non-amnestic MCI (naMCI). The distinction of the two variants is known to be clinically significant as they exhibit different progression rates to dementia. However, it has been particularly challenging to classify the two variants robustly. Recent research indicates that linguistic changes may manifest as one of the early indicators of pathology. Therefore, we focused on MCI's discourse-level writing samples in this study. We hypothesized that a written picture description task can provide information that can be used as an ecological, cost-effective classification system between the two variants. We included one hundred sixty-nine individuals diagnosed with either aMCI or naMCI who received neuropsychological evaluations in addition to a short, written picture description task. Natural Language Processing (NLP) and a BERT pre-trained language model were utilized to analyze the writing samples. We showed that the written picture description task provided 90% overall classification accuracy for the best classification models, which performed better than cognitive measures. Written discourses analyzed by AI models can automatically assess individuals with aMCI and naMCI and facilitate diagnosis, prognosis, therapy planning, and evaluation.
Background
With the growth in the number of older adults, age-related neurodegenerative diseases such as Alzheimer's disease (AD) have dramatically increased. These diseases cause a great deal of financial and emotional burden for patients, their caregivers, and society. In the United States, the cost of dementia care was estimated to exceed USD 500 billion [1], and it is expected to rise to USD 2 trillion by 2030 [2]. Research has suggested that the preclinical phase of dementia may start well before diagnosis. Detecting dementia at this preclinical stage and intervening early could delay the onset of AD, which would significantly minimize the socio-economic burden and is expected to reduce societal costs by 40% [3].
Mild cognitive impairment (MCI) is an intermediate stage between cognitively healthy aging and dementia [4]. It represents a critical preclinical stage of AD [5][6][7]. MCI includes four different clinical subtypes. The two main subtypes are amnestic MCI (aMCI) and non-amnestic MCI (naMCI); this subtyping is determined based on the impairment in memory. Individuals with aMCI are characterized by memory loss, while individuals with naMCI demonstrate impairment in domains such as executive functions, attention, and language [8,9]. Also, depending on the number of cognitive domains impaired, individuals can be categorized into single-domain and multi-domain MCI. Although a higher risk of developing dementia characterizes individuals with MCI, not all individuals with MCI will progress to dementia; some may remain stable, and others even regress to a condition of healthy aging [10][11][12]. Therefore, it is essential to identify those who are more likely to progress to dementia so they can receive early intervention, since most treatment strategies are more effective in the presymptomatic stage of dementia [13].
Depending on the two main subtypes of MCI, differences in the progression from MCI to dementia have been reported. In general, it has been suggested that aMCI represents the earliest symptomatic manifestation of AD pathophysiology, while naMCI is likely to progress to non-Alzheimer's dementia [14][15][16]. A recent 20-year retrospective study supports this and adds more information with a large dataset (N = 1188). The authors demonstrated that aMCI represents a greater risk for progressing to dementia (not only AD) compared to naMCI; the odds of progression to dementia differed significantly between aMCI and naMCI [17]. This highlights the clinical need for a robust, reliable system for classifying aMCI and naMCI [18].
There have been several approaches to MCI diagnosis. Behaviorally, a brief cognitive screening test can assist in identifying whether an individual has an apparent cognitive impairment [9]. Neuropsychological tests can be administered, depending on the need for further assessments, to determine the presence or degree of impairment in cognitive functions. The tests for MCI biomarkers require magnetic resonance imaging (MRI) or lumbar puncture for cerebrospinal fluid (CSF). An increased amyloid burden was found to be specific to aMCI, while naMCI does not exhibit a specific abnormality in neuroimaging (see Yeung et al. [19] for a review). Blood biomarkers, which are considered a comparatively more straightforward means of testing, have also been investigated [20]. Unfortunately, such tests for MCI biomarkers are not routine care in clinical settings [21][22][23][24]. Moreover, the cost and availability of the testing technique (e.g., MRI) may limit its impact on individuals' care [25].
Linguistic changes are considered to manifest as one of the earlier indicators of pathology in cognitive impairment. It has been reported that they emerge years before deficits in other cognitive systems become apparent [26]. In particular, writing is a cognitively and linguistically complicated activity. Writing consists of distinct phases: planning, generating, and revising [27]. Writers initially set a goal for organizing their knowledge and executing the plan in response to the topic of the writing activity. Then, writers revisit and revise their output. All phases must be well orchestrated by cognitive systems such as executive functions, attention, and working memory for writing to succeed.
Research directly focusing on writing abilities in patients with amnestic and non-amnestic MCI is very limited [28,29]. As mentioned above, the cognitive model of writing proposes that several cognitive functions are involved in picture descriptions, but it does not specify which cognitive functions are required at each phase of writing [27]. Considering the model, memory, the domain in which aMCI demonstrates impairment, is likely involved in writing complete, cohesive, and coherent passages while keeping track of previously written sentences. In contrast, other cognitive systems like language, executive functions, and attention, where patients with naMCI have difficulty, can be associated with the overall process of writing. Due to the lack of research on writing abilities in MCI, research explicitly examining writing samples from individuals with aMCI and naMCI is needed to show any characteristic patterns of writing impairment associated with each type.
A recent review article highlighted the diagnostic value of writing tests, especially at the discourse level [29]. Discourse is any language beyond the sentence level [30,31]. Kim and colleagues [28] investigated the prognostic value of discourse-level writing tests. They conducted a chart review of individuals diagnosed with MCI who visited a neurology outpatient clinic more than once (N = 71). They classified the study participants into a stable MCI group and a converter group. The authors examined whether a written discourse task using the Cookie Theft picture [32] predicts the clinical course in the MCI group. They found that the stable MCI group produced more core words than the converter group at their baseline assessment. This underscores the potential clinical utility of discourse-level writing tasks for early detection of those who are likely to progress to dementia from MCI.
However, the manual analysis of language production (in both spoken and written modalities) is time-consuming and labor-intensive [33]. In recent years, computational methods such as Natural Language Processing (NLP) have been used to analyze discourse samples in individuals with neurodegenerative diseases [34][35][36][37][38][39] and improve screening methods [34,38]. Computational methods offer two advantages. First, they allow the elicitation and combination of measures from different linguistic domains. A decisive property of machine learning (ML) models is their ability to find patterns between features associated with a specific group of individuals, i.e., patients with aMCI and naMCI. The Cookie Theft picture description task is typically an oral task, used for eliciting connected speech, and was shown in several studies to distinguish between individuals with MCI and healthy controls (HCs) [34,35,[37][38][39]. Second, ML and NLP analysis can be employed to analyze and interpret subtle linguistic patterns in language that might not be readily apparent to human observation. Using ML, earlier studies successfully distinguished healthy adults from individuals with MCI [35], MCI from dementia [40][41][42][43], and the subtypes of primary progressive aphasia [44,45]. These findings highlight the use of ML as an important tool that can contribute to existing approaches [38] and inform clinical assessment and therapy.
In this study, we leverage the written form of the Cookie Theft picture description task to explore its potential for classifying the two subtypes of MCI: aMCI and naMCI. This study marks the first attempt to utilize discourse-level writing samples for MCI subtyping. By focusing on the written form, we aim to develop a more accessible assessment tool for individuals with speech difficulties, potentially expanding the reach of early MCI detection. If writing samples differentiate individuals with the two MCI subtypes, they could allow the assessment of individuals with speaking disorders unrelated to MCI. Also, since writing involves several cognitive functions (especially language, vision, and motor control), we hypothesized that a written picture description task could distinguish individuals with aMCI and naMCI. For example, Yan et al. [46] found that patients with AD and MCI "demonstrated slower, less smooth, less coordinated, and less consistent handwriting movements than their healthy counterparts". This work could provide a quick and easy tool to facilitate the subtyping of patients with MCI and demonstrate the potential contribution of written language tasks in the automatic assessment of patients with cognitive impairments.
Participants
Our participants comprised 169 individuals diagnosed with either aMCI or naMCI (Table 1). All individuals were recruited through the Johns Hopkins Hospital and were diagnosed by an experienced neurologist (AEH). All of the cognitive and linguistic tests were part of routine care in the outpatient clinic. Participants were seated in a quiet room for testing with examiners. All procedures took approximately 45 to 60 min. The diagnosis was based on history, neuroimaging, neurological examination, and neuropsychological testing, and all individuals met the current criteria for MCI (Table 2). The exclusion criteria for the study included individuals who (1) were younger than 18 years old, (2) lacked English competence, (3) had a significant psychiatric illness or alcohol and drug use, (4) had significant neurological problems affecting the brain (e.g., stroke, multiple sclerosis, and Parkinson's disease), or (5) had uncorrected visual or hearing loss. All individuals with MCI fulfilled the recent criteria of the 2018 National Institute on Aging-Alzheimer's Association (NIA-AA) research framework [47]. Demographic information for individuals with MCI can be found in Table 1. Specifically, participants underwent a battery of standardized neuropsychological tests to assess their cognitive and linguistic abilities. These tests comprehensively evaluated various aspects of language and cognitive functioning, offering a detailed assessment of their cognitive strengths and weaknesses. The neurocognitive tests include the Mini-Mental State Examination (MMSE, Folstein et al. [48]), the Orientation and Information subset from the Wechsler Memory Scale-Third Edition (WMS-III; Wechsler [49]), the Digit Span subtests of the WMS-III [49], the Rey Auditory Verbal Learning Test (RAVLT; Rey, 1941), the Rey Complex Figure (RCF; Rey [50]), the Boston Naming Test [32], the Verbal Fluency Task (FAS), the free narrative writing section from the BDAE [32], the Trail Making Test (TMT; Reitan and Wolfson [51]), and the Stroop test [52]. The tests were carefully selected to provide a sensitive measure of abnormalities compared to individuals with normal cognitive functioning. Table 2 includes neurocognitive test results for all individuals with MCI. The study protocol underwent rigorous review and received approval from the Johns Hopkins Institutional Review Board.
Written Picture Description Task
Writing samples were collected using the Cookie Theft picture from the Boston Diagnostic Aphasia Examination-3 (BDAE-3; Goodglass et al. [32]). Participants were seated with the picture stimulus and a piece of paper. The clinicians used the prompt to encourage the participants to provide a written description: "Write as much as you can about what you see going on in this picture." Once the participants completed the task, their writing samples were transcribed into a text document by experienced researchers.
Machine Learning Process
The analysis involved the preprocessing of the data (Figure 1), the extraction of significant features from the written picture description task, and the study of those measures.
Analysis of Narrative Speech
The texts were automatically processed and multiple measurements were exported to an Excel file (participant ID included) using Open Brain AI's (http://openbrainai.com, accessed on 5 September 2023) clinical computational toolkit [53]. We analyzed the written transcripts from the text documents using two NLP tools, covering the tokenization of the text, the tagging of morphological categories, and the parsing of the syntactic constituents. Specifically, each word in the text was labeled using Open Brain AI's Part of Speech (POS) tagger and syntactic dependency parser, which uses a variety of linguistic information to determine the dependency structure of a sentence [53]. Open Brain AI provided automatic measures that included counts and the ratio of each word/total count of words that appeared in the text for each participant.
Specifically, the automatically elicited morphosyntactic measures shown in Table 3 include POS categories (i.e., adjective, adposition, adverb, auxiliary verb, coordinating conjunction, determiner, interjection, noun, numeral, particle, pronoun, proper noun, subordinating conjunction, symbol, and verb), the number of words and characters and their character/word ratio, and syntactic dependency measures indicating the grammatical relationships between words in a sentence and their count to total word ratio [54].
Semantic Measures from BERT
To depict semantic relationships, we included word and sentence embeddings from BERT-large-uncased, a BERT (Bidirectional Encoder Representations from Transformers) pre-trained language model [55]. Word embeddings were integrated with linguistic measures to form the final database. This inclusion was motivated by the importance of semantic measures in differentiating between individuals with aMCI and naMCI. Specifically, BERT-large-uncased is a deep neural network trained on a large corpus of text and can be used for various NLP tasks, such as question answering, text summarization, and sentiment analysis. It has been shown to achieve state-of-the-art performance on various NLP tasks. It consists of 24 encoder layers, each containing a self-attention mechanism and a feed-forward network. The self-attention mechanism allows the model to learn long-range dependencies between words in a sentence, while the feed-forward network adds non-linearity.
Addressing Imbalance and Cross-Validation
We employed Random Over-Sampling (ROS) to balance the class distribution and address the limitations of the relatively small dataset [56]. This technique alleviates the models' tendency to favor the majority class, a common challenge in imbalanced datasets. Additionally, we implemented grouped 5-fold cross-validation. This approach minimized data leakage and provided a more reliable evaluation of model performance. Furthermore, we standardized the non-BERT features to ensure uniformity in scale.
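To make the embedding step concrete, the following is a minimal sketch of how mean-pooled text embeddings can be extracted from bert-large-uncased with the Hugging Face transformers library. The paper's own pipeline ran through Open Brain AI, so the helper name `embed_text` and the exact pooling details here are illustrative assumptions rather than the authors' code; the only grounded choice is averaging last-layer token embeddings, which the Discussion describes.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
model = AutoModel.from_pretrained("bert-large-uncased")
model.eval()

def embed_text(text: str) -> torch.Tensor:
    """Mean-pool last-layer token embeddings into one vector per text.

    Hypothetical helper: mirrors the paper's description of averaging
    token-level embeddings from the model's final layer.
    """
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state          # (1, seq_len, 1024)
    mask = enc["attention_mask"].unsqueeze(-1)           # ignore padding tokens
    return ((hidden * mask).sum(1) / mask.sum(1)).squeeze(0)  # (1024,)

vec = embed_text("The boy is taking a cookie from the cookie jar.")
print(vec.shape)  # torch.Size([1024])
```

A vector like this would then be concatenated with the standardized morphosyntactic counts and ratios to form each participant's feature row.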
Model Evaluation and Selection
We selected ML models that do not require massive amounts of training data. To choose the best model for our data, we trained ML models from the ensemble learning family: Random Forest (RF), Gradient Boosting (GB), XGBoost (XGB), LightGBM (LGBM), and Hist Gradient Boosting (HGB). RF is an ML method combining several decision trees to enhance prediction accuracy. This approach can manage high-dimensional data and is resilient to overfitting. GB sequentially combines weak ML learners, each correcting the predecessor's errors. GB is used in classification and regression tasks for large, complex datasets. XGB and LGBM implement gradient boosting with speed and accuracy. They are employed in scenarios requiring rapid processing of large datasets. HGB, a gradient boosting variant, uses histograms for feature representation, enhancing efficiency with large-scale, high-dimensional data structures. Each ML algorithm has unique strengths, making these models suitable for specific data types and prediction tasks. Comparing and selecting among ML models provides versatility, adaptability, and improved performance in the ML process, enabling the final model to tackle the various underlying characteristics of the data. The selected ML models (RF, GB, XGB, LGBM, and HGB) were trained on the training data.
Hyperparameter Tuning and Model Comparison
A grid search with cross-validation was employed to evaluate and compare the performance of the different machine learning models. The hyperparameter tuning involved finding the optimal hyperparameters for each model using grid search and calculating the evaluation metrics. Grid search is a method for hyperparameter tuning that evaluates different combinations of predefined hyperparameter values to determine the combination that produces the best performance for a given model. In this case, a grid search was performed for each of the ML models included in the study.
We evaluated each model using five-fold cross-validation, which assesses the performance of a model by splitting the data into multiple folds. Each fold is used once as a validation set, while the remaining folds are used as the training set. The model is trained on the training set and evaluated on the validation set. This process is repeated for each fold, and the average performance across all folds is used as the final performance estimate.
Various evaluation metrics were used to assess the performance of the different machine learning models. These metrics included accuracy, F1 score, precision, recall, ROC AUC, and Cohen's kappa score. Accuracy is the proportion of correct predictions. The F1 score measures a model's ability to correctly classify positive and negative cases. Precision is the proportion of positive predictions that are truly positive. Recall is the proportion of positive cases correctly classified as positive. ROC AUC (Receiver Operating Characteristic Area Under the Curve) measures a model's ability to distinguish between positive and negative cases.
Results
Written picture description tasks were processed using combined NLP analysis and BERT models to elicit measures representing the embeddings. We implemented two supervised ML classification tasks.
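As a rough illustration of the training protocol described above (over-sampling, grouped five-fold cross-validation, and grid search), here is a sketch using scikit-learn and imbalanced-learn. The variable names `X`, `y`, and `groups`, the parameter grid, and the choice to scale every feature (the paper standardized only the non-BERT features) are simplifying assumptions.

```python
from imblearn.over_sampling import RandomOverSampler
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, GroupKFold
from sklearn.preprocessing import StandardScaler

# X: feature matrix (morphosyntactic measures + BERT embeddings)
# y: labels (0 = naMCI, 1 = aMCI); groups: participant IDs so that no
# participant's writing appears in both a training and a validation fold.
pipe = Pipeline([
    ("scale", StandardScaler()),                   # put features on one scale
    ("ros", RandomOverSampler(random_state=42)),   # balance classes per fold
    ("clf", RandomForestClassifier(random_state=42)),
])
param_grid = {"clf__n_estimators": [200, 500], "clf__max_depth": [None, 10, 20]}
search = GridSearchCV(pipe, param_grid, scoring="roc_auc",
                      cv=GroupKFold(n_splits=5))
search.fit(X, y, groups=groups)
print(search.best_params_, round(search.best_score_, 3))
```

Using the imbalanced-learn Pipeline keeps the over-sampling inside the training folds only, so each validation fold retains its natural class distribution and the cross-validated estimate is not inflated.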
A classification model was designed to distinguish individuals with aMCI from individuals with naMCI. The model included only information from the Cookie Theft picture description task, and it successfully distinguished the two groups. These results suggest that the written discourse from a picture description task provides sufficient information to identify individuals with the two variants of MCI.
In the ML models, the areas under the ROC curves were nearly 0.98 for classifying individuals with aMCI and naMCI (Figure 2). This suggests that written discourse productions, as manifested in a picture description task, can distinguish the two groups of individuals from language measures. Regarding accuracy, the ensemble models with boosting had the best performance (Table 4). The consistency in the output of those models further demonstrates their effectiveness for real-world applications.
As indicated by the outcomes (Table 4), the utilization of machine learning models shows the potential of ML in diagnosing and differentiating the two MCI subtypes. The reported standardized metrics (accuracy, F1 score, precision, recall, and ROC/AUC) indicate the effectiveness of these models, with one (1) being the best value.
• Accuracy (0.90 for most models) reflects the ML model's overall correctness in classifying the MCI type.
• F1 score balances precision and recall, with values around 0.70-0.72, indicating a good balance between false positives and false negatives.
• Precision (0.74-0.75) measures the proportion of correctly identified positive cases among all positive calls made by the model.
• Recall (ranging from 0.66 to 0.70) indicates the model's ability to identify all actual positive cases.
• ROC/AUC (between 0.97 and 0.98) reflects the model's ability to distinguish between the two classes across various thresholds, with values close to 1 indicating excellent performance.
We evaluated the feature importance and found that BERT features dominate the rankings of the 15 top contributing factors for the RF classification. The following morphosyntactic features also contribute to the RF classification, ordered from more important to less important: prepositional object, adposition, dependent, particle, auxiliary, root (verb), adjective, and subordinating conjunction.
These results suggest a reliable performance in distinguishing patients with naMCI vs. aMCI, highlighting the potential of advanced ML techniques in medical diagnostics, especially for complex conditions like MCI. The high performance of these models suggests that they could be valuable tools in clinical practice for early and accurate identification of MCI types, thereby enabling more tailored and effective treatment strategies.
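For completeness, the metrics reported above can be reproduced with scikit-learn as sketched below; `y_true`, `y_pred`, and `y_score` stand for a held-out fold's labels, a fitted model's predicted labels, and its predicted scores, and are assumptions for illustration.

```python
from sklearn.metrics import (accuracy_score, cohen_kappa_score, f1_score,
                             precision_score, recall_score, roc_auc_score)

def report(y_true, y_pred, y_score) -> dict:
    """Compute the evaluation metrics named in the Methods for one fold."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "roc_auc": roc_auc_score(y_true, y_score),  # needs scores, not labels
        "kappa": cohen_kappa_score(y_true, y_pred),
    }

# Feature-importance ranking for the RF model (hypothetical, reusing the
# fitted `search` object from the earlier pipeline sketch):
# importances = search.best_estimator_.named_steps["clf"].feature_importances_
```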
Discussion
MCI is an early stage of cognitive decline due to pathological causes [4]. Individuals with aMCI are characterized primarily by memory deficits, while individuals with naMCI are impaired in other cognitive functions, such as language, attention, and executive functions. Identifying the type of MCI is important for predicting the progression of the condition, as individuals with aMCI are more prone to progress to Alzheimer's disease [57,58] or other types of dementia (Glynn et al., 2021). This study assessed the potential diagnostic utility of computational methods in classifying the two subtypes of MCI from writing. We found that a written picture description task can distinguish individuals with aMCI and naMCI at approximately 90% accuracy. This finding confirms that written discourse analysis, which is infrequently carried out in clinical settings, provides clinically essential information [28] and can be a powerful approach for better characterizing the subtypes of MCI.
Importantly, our study shows that a single behavioral task (i.e., a picture description task) can provide substantial information about domains that would otherwise require multiple separate tasks. As mentioned earlier, either multiple paper-and-pencil tasks or neuroimaging techniques need to be conducted clinically to classify MCI. Previous studies using ML algorithms and neuroimaging data demonstrated an accurate classification of MCI subtypes [59,60]. However, such data can be obtained only with advanced techniques, which are often not feasible for individual patients [61]. Behaviorally, multiple tasks that evaluate different cognitive components, such as memory and executive functions, need to be administered, which is a time-intensive process. From a clinical perspective, computational assessment of language with ML and NLP opens the door for exciting opportunities to expand the analysis to both longer and more complex test productions.
Besides being a cost-effective assessment, it is also significant that the current study used written discourse samples, which have received little attention in research [29] and are not often collected and evaluated in clinical settings [62]. He et al. [63] used a spoken discourse task to investigate the classification among healthy adults, subtypes of MCI, and dementia. In that study, the researchers used both linguistic and acoustic features, but the classification accuracy (aMCI vs. naMCI) was 88%. Our findings shed light on the clinical value of written discourse, as the linguistic features in writing led to higher classification accuracy. This also indicates that linguistic features in writing can be potential markers of memory deficits and may provide enough information for the classification.
Written discourse offers a plethora of information about individuals' linguistic functioning, including textual macrostructure and microstructure. However, it is not clear which components of written discourse in this population are more influenced by the cognitive impairment in MCI. This is evidenced by the 102 different measures that have been used to quantify writing behaviors in research, with little repetition of the same measure (Kim et al., 2024). In the current study, using written discourse samples, we calculated the POS of each word and the syntactic relationships [64] that appear in the written picture description task [54]. Together, this can be an optimal approach for analyzing such language samples in that it adds to the efficiency of written picture description analysis. It also provides a comprehensive and detailed grammatical analysis in a standardized and less subjective manner.
Moreover, we found that the BERT semantic features dominated the hierarchy of analytical constructs that we used. This finding is consistent with the consensus that impairments in the semantic domains of language are a key manifestation of disease progression in neurodegenerative disorders [28,65,66]. These features can be seen in the literature to be associated with one or more elements of the writing skills of individuals with MCI, as they interface the linguistic and semantic memory domains.
Specifically, context-sensitive embeddings from BERT [55] played a critical role in the high accuracy of the classification. These result from averaging the token-level embeddings from the last layer of a BERT model for each input text, which creates a single, comprehensive vector representation for the entire text, capturing its overall contextual meaning. Traditional word embedding techniques, such as Word2Vec [67] and GloVe [68], generate a single embedding for each word in the vocabulary. These embeddings are decontextualized, which fails to capture the meanings of polysemous words. For instance, the word bank can mean a financial institution that accepts deposits and makes loans or the sloping edge of a river or other body of water. On the other hand, BERT uses a technique known as contextual embedding. This means that the representation of a word is based on sentence context. So, the word bank would have different representations in the sentences "I went to the bank to retrieve money" and "the little house next to the river bank", which offers a better representation of ambiguous meanings, improving the accuracy of text classification. The contextual embeddings utilized in this study thus capture the syntactic and semantic relationships between words in a sentence. This is crucial for quantifying the overall thematic content of the written picture descriptions. Additionally, since individuals with amnestic and non-amnestic MCI differ in their semantic memory [69,70], the contextual sensitivity of BERT's embeddings helps the model adapt to differences in vocabulary and jargon.
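The "bank" example can be checked directly. The sketch below is an illustration, assuming the transformers API and that "bank" maps to a single WordPiece token in the uncased vocabulary; it extracts the contextual vector of "bank" from each sentence and compares them. A static embedding such as Word2Vec would give a cosine similarity of exactly 1 across the two contexts.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-large-uncased")
model = AutoModel.from_pretrained("bert-large-uncased").eval()

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Last-layer embedding of the first occurrence of `word` in `sentence`."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]   # (seq_len, 1024)
    word_id = tok.encode(word, add_special_tokens=False)[0]
    position = enc["input_ids"][0].tolist().index(word_id)
    return hidden[position]

v_money = word_vector("I went to the bank to retrieve money", "bank")
v_river = word_vector("the little house next to the river bank", "bank")
sim = torch.cosine_similarity(v_money, v_river, dim=0)
print(f"cosine similarity across contexts: {sim:.3f}")  # < 1: context-dependent
```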
Although it is well known that picture description tasks are valuable for eliciting connected language samples in individuals with MCI [71], the Cookie Theft picture offers a less organic method of personal expression through writing. Such productions are substantially constrained in their context and are limited in effectively identifying differences in pragmatic language usage and speech and voice parameters. Also, the task does not allow the assessment of non-epistemic domains, such as deontic modality expressions of wish and hope and non-present verb-tense semantics, as it does not provide opportunities to discuss past or future events. Additionally, picture description tasks do not offer opportunities for expressing emotional and other affective content, which might be necessary for assessing the interface of language, emotion, and pragmatics. An open-ended essay-writing task could have offered the potential to assess more stylistic, linguistic, and communicative speech characteristics. Nevertheless, written picture descriptions demonstrate the potential to detect speech and language characteristics in neurodegenerative diseases such as MCI and dementia, as suggested by a recent review [28]. Considering the brief time needed to elicit writing samples, NLP combined with discourse-level writing samples will enable more efficient methods for analyzing these linguistic and communicative features, further enhancing the diagnostic accuracy and the clinical utility of written discourse analysis.
Conclusions
The results of the current study suggest that written discourse samples can offer a quick and efficient means of gaining valuable insights into linguistic abilities while minimizing the burden placed on individuals with MCI. Future research is necessary to verify this finding with a balanced sample size between aMCI and naMCI. For a better diagnostic tool, future studies including MCI-dementia conversion are needed to test the predictive value of the automatic classification of MCI.
Institutional Review Board Statement: This study was conducted in accordance with the Declaration of Helsinki and approved by the Johns Hopkins University School of Medicine Institutional Review Board (IRB00266221, 11 October 2020).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Figure 1. The machine learning (ML) classification process used to differentiate between amnestic MCI (aMCI) and non-amnestic MCI (naMCI). Linguistic measures extracted from written picture descriptions were combined with contextualized word embeddings generated by BERT. The features were used to train and evaluate five ML models: Random Forest (RF), Gradient Boosting (GB), Histogram-based Gradient Boosting (HGB), XGBoost (XGB), and LightGBM (LGBM). The optimal model was selected after training and through hyperparameter tuning and comparative performance analysis.
Figure 2. ML performance on the classification task: individuals with aMCI vs. individuals with naMCI from language measures.
Table 1. Participants' age and education across variants (amnestic and non-amnestic) and gender.
Table 2. Performance in neurocognitive tests in individuals with MCI. MMSE = Mini-Mental State Examination; WMS = Wechsler Memory Scale; RAVLT (total) = total score of the Rey Auditory Verbal Learning Test; RAVLT (delayed) = score for the delayed recall of the Rey Auditory Verbal Learning Test; RCF (immediate) = score for the immediate recall of the Rey Complex Figure; RCF (delayed) = score for the delayed recall of the Rey Complex Figure; BNT = Boston Naming Test; BDAE writing = free narrative writing from the Boston Diagnostic Aphasia Examination; TMT A = Trail Making Test Part A; TMT A error = errors made in the Trail Making Test Part A; TMT B = Trail Making Test Part B; TMT B error = errors made in the Trail Making Test Part B; Color = Color Stroop test; Color(Word) = Color Stroop and Word Tests; SD = single-domain MCI.
Table 3. Means and standard deviations of features in individuals with non-amnestic and amnestic MCI. Note: all measures indicate the count/total word ratio; features marked with the index (1) are counts.
Table 4. Model performance in the classification task: individuals with aMCI vs. individuals with naMCI from language measures.
6,876.8
2024-06-27T00:00:00.000
[ "Medicine", "Computer Science" ]
Big Data Driven Agriculture: Big Data Analytics in Plant Breeding, Genomics, and the Use of Remote Sensing Technologies to Advance Crop Productivity
Interdisciplinary efforts in high-throughput field phenotyping
Linking proximal and remote field phenotyping
Cyberinfrastructure for high-throughput field phenotyping
security and aims. This modeling effort represents one opportunity to leverage the nation's cyberinfrastructure and government investments in planetary science to advance agriculture. To provide insight into these various initiatives and agencies, two Big Data Driven Agriculture workshops, focused on big data analytics in plant breeding and genomics, satellite data and modeling, and the use of machine learning and other remote sensing technologies to advance crop productivity, were organized by the Donald Danforth Plant Science Center. The first workshop began with opening remarks by the Director of USDA-NIFA, Dr. Sonny Ramaswamy, and the second workshop was opened by the Administrator of the USDA-ARS, Dr. Chavonda Jacobs-Young. The opening remarks were followed by a series of presentations designed to provide the workshop participants with a status report on state-of-the-art research and applied work in related disciplines. Each group of presentations was followed by a facilitated panel discussion allowing the opportunity for questions, answers, and productive discussion. In the afternoon of each workshop day, participants interacted in smaller group discussions and "hackathons." Key outputs of the presentations and discussion sessions are presented here. To encourage graduate student and postdoctoral interactions with geneticists, breeders, and remote sensing experts and to promote interdisciplinary career perspectives in agriculture, abstracts were invited from graduate students and postdoctoral scholars. Ten travel awards to the workshop were given to selected student and postdoctoral applicants. Four students and postdoctoral researchers were selected from the abstract submissions to give 15-min presentations on their research. This meeting brought together participants from USDA, the National Science Foundation (NSF), the Advanced Research Projects Agency-Energy (ARPA-E), DARPA, and NASA with a community of scientists and engineers to develop a road map for the delivery of immediately applicable algorithms and best practices and a strategic plan for future success in this domain through managed standards, data repositories, and interdisciplinary engagement. The meeting participants were solicited by a diverse organizing committee (see Supplement 1), and are hereafter referred to as the "Big Data Driven Agriculture community" (Fig. 1).
Background and Significance
Interdisciplinary Efforts in High-Throughput Field Phenotyping
High-throughput field phenotyping is a relatively new but rapidly growing research area, and it will remain a top agricultural research priority in the next decade. The NIFA FACT Initiative provides a timely opportunity to develop a cross-disciplinary research agenda, bringing together plant breeding and analytics with phenotyping data and modeling. Remote sensing technologies, proximal sensors, deployment platforms such as unmanned aerial vehicles (UAVs) and ground vehicles, and statistical analytics are being rapidly customized and deployed for high-throughput phenotyping and use as plant performance measurement tools for crop improvement and breeding and as precision agriculture platforms for agronomy, soil science, and farm management.
Currently, the most important challenge is to ensure that the plant science and data analytics communities know how to use these data to deliver actionable results for scientists and farmers. Coordination and interaction among the key disciplines of breeding, agronomy, computer science, data science, engineering, and genomics is needed so that these high-throughput phenotyping tools are accessible for broad and applied agricultural use.
Linking Proximal and Remote Field Phenotyping
The coordinated collection of high-resolution proximal in-field datasets and satellite monitoring has rarely been done, and it may be possible to create even greater insights by fusing these methodologies. The collection of high-resolution proximal data is critical for many applications, but in many cases insight and recommendations will be acceptable at far lower resolution and can be generated from high-data-volume satellite resources. For example, weather patterns, environmental determinations, and assessments of target crop populations may be achieved with very-high-throughput satellite platforms. In many remote-sensing applications, high temporal resolution is the most important feature, and the goal is to have more "revisits," ultimately reaching the potential for daily imaging. However, a breeder may be willing to sacrifice temporal resolution for higher image resolution from proximal sensors to ensure data collection at the plot, plant, or leaf level. This is a nascent and rapidly developing field, and research is needed to understand the inherent tradeoffs between temporal and spatial resolutions. The optimal solution depends on the crop and the research objectives. Multiple US agencies and other government services are using satellite data for estimating crop area, yield, and condition, including the USDA National Agricultural Statistics Service (NASS), the USDA Foreign Agricultural Service (FAS), the Group on Earth Observations-Global Agricultural Monitoring Implementation Team (GEOGLAM), the Famine Early Warning Systems Network (FEWS NET), and weather-modeling efforts (NOAA and others). In the private sector, many firms are deploying low-cost, high-return-rate satellite fleets to develop decision support systems for crop producers. These firms are leveraging models developed by plant physiologists and are investing heavily to deliver actionable data for farmers and plant breeders.
Cyberinfrastructure for High-Throughput Field Phenotyping
The Big Data Driven Agriculture community needs additional tools from the national cyberinfrastructure toolbox, including knowledge frameworks that organize the work of a diverse community in a coherent manner. A highly promising method is to develop models that provide recommendations for plant breeding, management, or policy decisions. Gramene's Plant Reactome (plantreactome.gramene.org) is an example repository where researchers contribute findings to maps of protein networks and pathways. This site presents large amounts of work in a digestible way and guides new research predicting emergent cellular behaviors. Likewise, an effort funded by the Foundation for Food and Agriculture Research (FFAR), Crops In Silico, is creating models that link genomic insights, molecular networks, and plant phenotypes, providing a method for researchers working at any scale to contribute to shared results.
At a macro scale, a modeling community organized by DARPA, through the program World Modelers, is charged with using similar methods to combine crop yield predictions, weather, trade, and immigration to predict regional food security challenges. These large-scale models represent another opportunity to leverage the nation's cyberinfrastructure and government investments to advance agriculture. Once the methods are developed for food security prediction, they will be broadly applicable to other complex modeling exercises that require multiple data streams, such as farm management and crop improvement. The Big Data Driven Agriculture meeting, as summarized here, aimed to link diverse fields and to create a research community that can develop and deploy a national cyberinfrastructure network that supports plant breeding, food security, and other USDA missions.
Key Takeaways
Specific Recommendations for NIFA FACT and Related Initiatives and Agencies
1. The Big Data Driven Agriculture community gives a strong recommendation for longer-term funding or formal grant extension and "plus up" opportunities to support breeding and genomic selection projects. The breeding cycle for most annual crops can take 7 to 10 yr from an initial cross to commercialization, with perennial and tree crops taking longer. Incorporating high-throughput field phenotyping data will probably increase the number of genotypes that can be screened and improve selection accuracy once technologies and tools for breeders are available and accessible.
2. The Big Data Driven Agriculture community requests funding opportunities or a specific initiative that supports a sustainable data repository system with tools for analysis (Fig. 2). This is a vital undertaking for continued success in this interdisciplinary effort, and long-term federal agency support is necessary for it to succeed. While some infrastructure and tool development can be conducted through competitive grants, ultimately a permanent repository (e.g., the National Institutes of Health's National Center for Biotechnology Information [NCBI], USDA-NASS) is needed for long-term stability and to ensure sufficient maintenance. An alternative model would be a centralized clearinghouse that provides a seamless interface to permanent institutional repositories such as libraries. This approach could serve a similar purpose while being more robust and inclusive, but it will require more advanced technologies and coordination.
3. Many members of the community recommend that NIFA create funding opportunities to invest in existing infrastructure rather than initiatives to develop new infrastructure for high-throughput field phenotyping technologies. Interdisciplinary grants related to data-driven agriculture generally have two main components: (i) development of the infrastructure or core technology, followed by (ii) hypothesis-driven experimentation using the newly developed infrastructure or technology. There is a sense that the current timeline of a typical USDA-NIFA grant (3 yr) realistically results in achieving only the first component of the project, which is successful infrastructure development. Within a typical funding period, there is not sufficient time to implement key improvements to the infrastructure and/or technology or to demonstrate application of a developed system.
Either grant terms need to be extended to allow time for implementation, or funding should be made available specifically for productive and impactful applications of scientific research.
4. To facilitate sustained interdisciplinary interaction over the next decade, the Big Data Driven Agriculture community recommends that agencies such as NIFA fund interdisciplinary training programs for principal investigators as well as students and postdoctoral researchers. For example, can NIFA package fellowships in which graduate students from multiple disciplines work in multiple labs? Similar to the NSF-funded Predictive Plant Phenomics program at Iowa State University or the NIFA National Needs scholarships, a program that cross-trains students should be prioritized. The community strongly believes that the next generation of grant applicants, Ph.D. students, and postdoctoral researchers will benefit from training and exposure to diverse fields including data science, biology, engineering, and math (Fig. 3).
5. The Big Data Driven Agriculture community requests funding opportunities that are problem oriented without being overly narrow. Many constituents are interested in developing and using phenology-focused tools that are specifically designed to solve problems for agricultural stakeholders. As an example from the private sector, Planet Labs is working with Farmers Edge to analyze data to determine crop-cycle changes. These funding opportunities could initiate research with a small amount of money, with the opportunity for larger amounts with progress, as suggested in Recommendation 1 above. Further, research problems should be generated from an agricultural perspective to focus on a current and impactful problem, and other disciplines should then be brought in to figure out how to solve the problem. With each proposal call, we recommend an explicit statement from both the agency and the applicant addressing the applied objectives of big data collection and analysis.
6. The collective Big Data Driven Agriculture community requests that funding agencies promote and support initiative-wide standard operating practices (SOPs), data standards, and data formats. With research grants heavy in experimentation and development, scientists generally do not take the time to learn and teach SOPs. Standardization of phenomics-related variables is long overdue; however, without incentive or repercussion, funded proposals can often be biased toward individual research objectives over transdisciplinary research, and meaningful translation across phenomics platforms and transdisciplinary research efforts is difficult, if not impossible. Standard procedures and formats would allow transformative research and accelerated discovery through the integration of data across species, time, and location.
Concerns and Additional Recommendations from the Big Data Driven Agriculture Community Facing this Interdisciplinary Effort
Real-World Metrics for Success
The key goals for plant breeders are to predict phenotype (preferably in untested genotypes grown in untested environments) and increase genetic gain; however, deep data collection and analysis are needed to support this objective. The concern with precision agriculture is the lack of rapid data analysis that provides actionable guidance that farmers can trust. Farmers require near-real-time data to effectively adjust management strategies to optimize yield. However, the extent to which sensing has improved crop production remains unclear.
Further, of the many technologies currently available, it is unclear to the community which tools are being effectively used for crop improvement. How can funding agencies work to measure the impact of diverse research groups? Real-world metrics and milestones for cross-disciplinary projects are underdeveloped and should be formally considered by NIFA and other funding agencies as Requests for Applications (RFAs) are drafted.
Trade-offs of High-Throughput Sensing
Another concern about the use of current high-throughput sensing and machine-learning-based prediction efforts is that it is unlikely that the rare "unicorn" genotype, one with the potential to make large step-function crop improvement advances (an outlier almost by definition), will be detected. The current model of driving steady but incremental genetic gain favors exclusion of possible outliers. To partially address this concern, integration of multiple data layers, both remote and proximal, is needed to dramatically improve the phenotype prediction equation.
Emphasis on Data Quality and Standards
E.O. Wilson's comment, "We are drowning in information, while starving for wisdom," was quoted by Dr. Sonny Ramaswamy in his opening remarks, and this statement captures one of the biggest challenges facing the Big Data Driven Agriculture community. In the race to publish and be awarded grants, there is a concern about the lack of emphasis placed on data quality by principal investigators. There is a consensus that "garbage in" in terms of primary data quality results in "garbage out" of final data quality (Fig. 4) and that collecting and quality checking data can take a long time, longer than the current 3-yr lifespan of a typical grant allows. Insights from current machine learning models are only as good as the data used. Further, there are no real standards or protocols for sensor precision and calibration of instruments. There are individual efforts to address this from universities and large research programs; however, there is a need for one or more organizing bodies or programs to oversee standard or protocol implementation, evaluate the quality of research, and develop standards for future research programs.
Fig. 4. Data quality and standards: "garbage in" data, "garbage out" results.
Diversity in Grant Applicant Groups
Due to a number of varied reasons, investigators generally apply for grants within their networks of institutional and first-degree colleagues. These circles of established applicant groups can be hard to join, particularly for new faculty. To address this concern and benefit the interdisciplinary FACT initiative, we suggest that NIFA and other agencies consider "playing matchmaker" and pairing grant proposals toward a common goal. We recognize this is not a traditional role for NIFA, but it could be a very impactful one. A pilot project might be a beneficial first step.
Q&A with the Big Data Driven Agriculture Community
Question 1: How can large and comprehensive datasets on plant breeding, genomics, remote sensing, and analytics benefit agriculture (Fig. 5)?
1. These large datasets can significantly contribute to cultivar development. Data fusion from multiple sensors can be used to make cultivar selections, as breeding programs often deploy multiple sensors to measure unique physiological or architectural attributes to make informed breeding decisions.
2. Sensor datasets that have relationships with target traits (e.g., yield, drought tolerance) can be effectively used during the breeding season to assist selection decisions.
3. These large datasets can inform genomic selection and machine learning models for breeding and crop modeling.
4. Results, knowledge, and ideas from big data initiatives in agriculture need to formally integrate university extension services. Extension bridges basic and applied research, and extension scientists are uniquely positioned and skilled to translate knowledge and technology applications and deliver them to the farmer or producer.
Question 2: What methods could be used to create a successful field phenotyping campaign?
1. Well-tested and documented sensor calibration is important for collecting reproducible and biologically relevant data. Protocols for sensor calibration should be published with the research outcomes.
2. Appropriate adjustment of sensor data resolution to the field campaign and experimental design is necessary. Phenotyping speed is generally inversely correlated with sensor spatial resolution, and the right balance should be struck to achieve the field campaign and project goals.
3. Phenotyping campaigns need to have clearly defined strategies to prevent unnecessary and time-consuming data collection. Data collection for field campaigns should measure what is important to the project goals, not what is simply easy to measure.
4. Precision and accuracy are often unknown in a field phenotyping effort. Measures of the environment need to be standardized to account for variation in the sensor phenotypes that are observed.
Question 3: How can we determine protocols for the collection and analysis of agricultural big data?
1. Newly established data collection and analysis protocols to be used in phenotyping should garner the input and support of professional societies.
2. The Big Data Driven Agriculture community is international. Where appropriate, US-based research efforts should implement and apply standards commonly used and established in international programs.
3. Agencies like NIFA and the FACT initiative can support the development and implementation of protocol standards.
Question 4: How can we most effectively address the need for a sustainable means of data storage and access?
1. The solution to this question needs to include discussion and buy-in from public and private industry, universities, and the government.
2. Financial support for a long-term data repository that maintains original copies is required, but uncertain, and should be addressed immediately.
3. The Big Data Driven Agriculture community proposes the development of a federated data storage system as a collaboration between private, public, and government agencies. Business models for this concept will be needed at multiple levels to support collection and maintenance costs. The demand for data storage is growing at a rate faster than storage costs are decreasing, and long-term sustainability of the shared system is critical.
4. The recommended centralized platform is likely to attract other researchers who will bring even more data, thus increasing the storage demand.
5. Data collection often evolves over the course of a project and usually over-delivers types and amounts of data.
6. To ensure use of a federated data storage system, funding agencies might consider withholding funding until data are deposited into a central repository.
Question 5: What research engagement opportunities might cut across the represented disciplines of plant breeding, machine learning, remote sensing, and big data infrastructure and analytics?
1.
Research challenges could initially be generated from the perspective of agricultural stakeholders (e.g., farmers, nongovernmental organizations [NGOs], extension services), and subsequently bring in researchers in additional disciplines to address specific research challenges. Core disciplines of crop physiology, pathology, entomology, soil science, and in silico biology should not be overlooked. 2. Funding agency awards should support multiple, interdisciplinary principal investigators. More interdisciplinary teams of engineers, data specialists, and plant breeders are needed. The ARPA-E TERRA and ROOTS programs are potential models. It is difficult to coordinate these efforts without shared program planning, and greater results can be achieved through planned coordination. 3. The Big Data Driven Agricultural community is a highly interdisciplinary community, and few institutions have a full team to put all the pieces together. Funding agencies should take on "matchmaking" for specific initiatives, bringing together research groups and institutions that might not normally interact. This can include matching smaller, less well-funded research groups with larger institutions that may have greater resources. Question 6: What cross-cutting short-and long-term funding needs can you identify for continued success in these domains? 1. Resources are needed for developing standards and best practices prior to the completion of the grant. There can be great value in the generation of template data sets for training and other learning opportunities. 2. When funding is granted, agencies should consider offering additional resources earmarked for curation, publishing, and promoting data. For example, in some cases NSF provides additional funding for computing resources for groups with NSF-funded grants, and the National Institutes of Health give credits for certain computational services and applications. 3. Principal investigator training across disciplines is needed to communicate current capabilities and state-of-the-art methodologies. This could be short courses, online modules, and webinars. Moreover, principal investigators should be exposed to stakeholders in agriculture (e.g., farmers, NGOs, extension specialists) to understand real world needs and challenges. 4. Interdisciplinary training opportunities for students and postdoctoral researchers are needed. Similar to the NSF Predictive Plant Phenomics program at Iowa State University or NIFA's National Needs scholarship, funding agencies should consider a package of fellowships in which graduate students from different disciplines work in multiple labs. Students and postdoctoral researchers should be cross-trained in the areas of data science, bioinformatics, engineering, and statistics. Question 7: How can we incentivize cross-disciplinary and transdisciplinary work when discipline-specific discoveries are rewarded? 1. Agencies should consider direct funding support for student exchanges and support for multiple faculty across disciplines. 2. Agency RFAs should explicitly require a cross-discipline approach instead of making an implicit recommendation in the proposal guidelines. 3. Funding milestones and follow-up funding could reward crossdiscipline discoveries. 4. Requests for Applications could incentivize research approaches that come from other disciplines for application to agricultural problems. 5. 
Mechanisms to highlight data, research, and code by individuals within a larger interdisciplinary program should be created so that individual contributions to overall projects are clear. Question 8: What measurements are feasible with remote sensing, and when is in-field monitoring needed? How might you design experiments that incorporate ground sensing and remote sensing to leverage the capacity of both? 1. Traits like crop fraction cover, hyperspectral reflectance, leaf area index, and disease resistance are traits of interest. Radiometrically corrected data and surface reflectance and bidirectional reflectance distribution function are all feasible with remote sensing; however, noise in growth curves can be attributed to the plant or crop and also the atmosphere. 2. Enviro-typing campaigns would benefit from the assistance of remote sensing where several different types of data are needed. 3. Virtual constellations comprising different modes and scales of data collection is a challenging area (e.g., UAV to satellite data). 4. Calibration protocols for aerial platforms are different than for the satellite platforms. With surface reflectance, the atmosphere is modeled with the sun angle and reflectance. MODIS, Landsat, etc., all have been calibrated using surface reflectance, and satellites have extra bands delegated to this correction. With the advantage of higher resolution, UAV systems do not have these standardized protocols for correction. Question 9: Some groups are considering having shared UAV user facilities. How feasible would it be for a university to set up a core facility on analysis of geospatial data? 1. A shared research facility would be highly useful to bring in sufficient resources for all groups, and universities may have the infrastructure and resources to maintain such a facility. 2. Centralized locations have the potential to facilitate training in standardized operation and deployment of pheno typing technologies. 3. Data sharing policies have to be in place, as concerns related to proprietary data and licensing are likely. 4. Centralized facilities enable standardization of data products and methodologies as well as adoption and development of best practices. Conclusions The main outcomes of the Big Data Driven Agriculture workshops were (i) the current white paper with suggestions to NIFA and other interested funding agencies for future RFAs and (ii) connecting researchers from the various disciplines with each other and with the Departments of Agriculture, Defense, Energy, and other governmental departments for the discussion of adopting technologies and creating opportunities for agricultural research. New funding methods are needed to support innovation, and the Big Data Driven Agriculture community has six core recommendations for building a vibrant phenotyping community in the United States: 1. Provide phased, stage-gate funding up to a complete crop cycle (7-10 yr): Success of multifaceted systems projects require longer setup than traditional research programs. Funders should explore phased funding structures with stage gates and incremental increases in funding to allow successful teams the continuity to achieve large impacts. 2. Build a centralized data repository: Researchers need a centralized data repository to store, compare, and repurpose data. This resource could support a new team of data analysts who are available to researchers to assist in data preservation and reuse. 3. 
Invest in existing infrastructure and tools: The community needs opportunities to continue use of de-risked, existing phenotyping methods and equipment to achieve breeding or agronomic outcomes. 4. Provide interdisciplinary training opportunities for students: Expanded funding efforts are needed to train students and postdoctoral scientists in multiple disciplines, providing infrastructure and tools to support the next generation of agricultural researchers who are equally comfortable on a keyboard and a combine. 5. Provide problem-focused funding: Phenotyping efforts can be accelerated by focusing teams on specific agricultural problems that allow comparison of algorithms and can serve to coordinate efforts at a program level. 6. Develop data standards and standard operating practices: Collaboration will be greatly enhanced by the development of standard, intercomparable data and software. This should include protocols for standardized data collection and calibration, gold standard datasets for algorithm validation, and common data exchange formats for interoperability. Coordination with organizations such as the National Institute of Standards and Technology (NIST) would be beneficial. Acknowledgments This workshop was sponsored by the USDA-NIFA FACT program via Grants no. 2018-67021-27483 and 2018-67013-27427. Any opinions, findings, conclusions, or recommendations expressed here are those of the workshop participants and do not necessarily represent the official views, opinions, or policy of the funding agency. The recommendations put forth in this report also do not necessarily reflect the opinions of all attendees of the workshop. We have summarized general consensus topics and suggestions that were documented by several note-takers and the authors during the meeting breakout sessions, panel discussions, and presentations. The names and affiliations of participants mentioned here were current at the time of the workshop and may have changed. The organizing committee would like to thank the speakers, moderators, and student note-takers. We also thank Kathleen Mackey and Bill Stutz from the Donald Danforth Plant Science Center for their assistance in organizing the workshop. Graphic design in this white paper is credited to Bill Kezele. Finally, we like to thank Dr. Stephen Thomson and Dr. Ed Kaleikau, national program leaders at USDA-NIFA for their perspective, support, and input.
5,965.4
2019-01-01T00:00:00.000
[ "Biology" ]
Heat pipe long term performance using water based nanofluid The heat pipe is a passive cooling device that transfers heat from a hot source to a heat sink using fluids as a working medium. Working medium evaporation and condensation are key factors for designing an efficient heat pipe. Many researchers highlight nanofluids, a mixture of base fluid and nanoparticles, as a new working medium for more efficient heat pipes. The present research aimed to investigate heat pipe long-term performance using water-based nanofluids as working medium. Nanofluids with 1 and 3 vol% Al2O3 of 20–70 nm particle diameter in water were prepared and characterized. It has been seen in our previous study that the heat pipe performance is enhanced by an average of 26%; however, this enhancement was not sustained over long use and raised a concern about the long-life homogeneity of the nanofluid due to the liquid evaporation. Therefore, we investigated used nanofluid characteristics to determine whether it stays suspended in the base fluid as dispersed particles, or it agglomerates, then aggregates in bigger sizes and then precipitates. The dried heat pipe’s porous medium is cut-out after several uses and is scanned by electron microscope (SEM) at different operation heat loads. Some aggregated nanoparticles have been seen on the wick surface, which caused a capillary and thermal resistance. Also, a sample of the used nanofluid is dried and *Corresponding author: Mohamed I. Hassan, Department of Mechanical Engineering, Masdar Institute, Khalifa University of Science and Technology, Masdar City, P.O. Box 54224, Abu Dhabi, United Arab Emirates E-mail<EMAIL_ADDRESS>Reviewing editor: Duc Pham, University of Birmingham, UK Additional information is available at the end of the article ABOUT THE AUTHORS The corresponding author has multidisciplinary experimental and computational experience in a broad area of thermal and material science. His research team developed several computational fluid dynamics and finite element models for a variety of industrial and energy sustainability applications. The team research projects are covering: reverberatory furnaces design, waste energy recovering, nanofluids preparation and characterization, water desalination, voltage drop reduction in aluminum smelter anode assembly, smelter potline efficient cooling, MEMS energy harvester, industrial furnaces burner's design, electronics devices cooling, heat engines efficiency improvement, cooling systems using variable refrigerant flow (VRF), variable air volume (VAV) and district cooling chillers with thermal storage. Our research studies are sponsored by grants from Masdar Institute, Global Foundation, and Emiratis Global Aluminum (EGA), Masdar Corporation, Strata, and Masdar/MIT flagship. The present paper shows the outcomes of implementing in-house engineered nanofluids in electronic device cooling. PUBLIC INTEREST STATEMENT The heat pipe is a passive cooling device that transfers heat from a hot source to a heat sink using fluids as a working medium. The fluid phase-change increases the heat pipe thermal conductivity by more than ten times compared to solids. The present research investigated heat pipe using water-based nanofluids as working medium. Nanofluid is a homogeneous mixture of basefluid and nanoparticles. The nanoparticles sizes are in the range of 20-70 nm diameter. In this study, water is used as a base-fluid, and alumina nanoparticles are dispersed in water with 1 and 3 vol%. 
Previous studies indicated that the heat pipe performance is enhanced by an average of 26%; however, this enhancement did not persist for long time use because of nanofluids structure deteriorating. Scanning Electron Microscope results for Heat pipe cutout showed aggregated nanoparticles on the wick surface, which increased the capillary and the thermal resistance. Background and introduction Heat pipe is a passive heat transfer device that works based on working medium (fluid) phase change to increase the device thermal conductivity compared to solids. Working fluid properties are of primary concern when designing a heat pipe (HP) device to dissipate heat with minor temperature drop. Working medium phase change behavior varies from one liquid to another, and its saturation temperature is altered by adding particles or solutes. Therefore, the working fluid's thermophysical properties influence its thermal performance (Shafahi, Bianco, Vafai, & Manca, 2010). Unusual heat transfer improvement methods are required to cope with the required cooling demand in high-energy compact devices, such as computer processors and MEMS devices. All traditional liquids have low heat transfer properties with respect to the thermally conductive solids (Kang, Wei, Tsai, & Yang, 2006). A nanofluid is a mixture of base-liquid and solid particles made from highly thermal-conductive metals in nano-scale; nanoparticles must be dispersed in base-liquid in a homogeneous suspension pattern (Choi, 1995;Kakaç & Pramuanjaroenkij, 2009). Several studies have experimentally facilitated nanofluids such as HP working-medium incorporating solid metal nanoparticles such as silver, copper oxide, diamond, alumina, titanium, nickel oxide and gold (Kang, Wei, Tsai, & Huang, 2009;KyuHyung, HyoJun, & Seok, 2010;Lin, Kang, & Chen, 2008;Yang, Liu, & Zhao, 2008). The fabricated nanofluids acquired noticeably higher thermal conductivity and high heat transfer properties compared to the conventional pure liquid (Kang et al., 2006). Nanofluids' thermophysical and rheology properties, phase saturation, specific heat, thermal conductivity, density, and viscosity are based on the nanoparticle's types and concentrations (Kakaç & Pramuanjaroenkij, 2009). The HP size and design at different operation conditions using different nanoparticles metals in various base-liquids is investigated by other researchers (Asirvatham, Nimmagadda, & Wongwises, 2012;Kang et al., 2006) in parametric study. Alumina (Al 2 O 3 ), silver (Ag) and copper oxide (CuO) are among the most utilized nanoparticles that are well investigated (Asirvatham et al., 2012;Hassan, Singh et al. 2015;Hung, Teng, & Lin, 2013;Kakaç & Pramuanjaroenkij, 2009;Kumar, Sridhar, & Narasimha, 2014;Lai, Phelan, Vinod, & Prasher, 2008;Lin et al., 2008;Moraveji & Razvarz, 2012;Nguyen, Roy, & Gauthier, 2007;Noie, Heris, Kahani, & Nowee, 2009). As proposed by Minkowycz, Sparrow, and Abraham (2012), the thermal conductivity of base-liquid could increase by adding solid particles of diameters less than 100 nm indicating the importance of the nanoparticles size. The influence of the nanoparticles concentration on the heat pipe performance is considered by Shafahi et al. (2010) using nanofluids that constituted the most common nanoparticles, and were able to optimize the mass concentration for nanoparticles to maximize the HP heat transfer. Water-gold nanofluids showed a significant reduction in the HP thermal resistance as investigated by Tsai et al. (2004). 
50-80% reduction in the thermal resistance is reported by Zhou (2004), using particle sizes of 10 and 35 nm in water-copper nanofluids in a grooved HP evaporator. Another significant reduction in the HP thermal resistance, 76.2%, is reported by Asirvatham et al. (2012) incorporating a water-silver nanofluid. Water-alumina nanofluids are included in HP by Hung et al. (2013), KyuHyung et al. (2010) and Noie et al. (2009), and all showed significant improvement in the HP heat transfer rate, across a range of methodologies. However, the studies did not mention whether these improvements are repeatable or not. The watersilver nanofluid-filling ratio in the thermosyphon study by Paramatthanuwat, Boothaisong, Rittidech, and Booddachan (2010) did not show an effect on the heat transfer properties; however, it affects the heat transfer rate. As concluded from the literature survey, heat pipe incorporating nanofluids with different particle materials, size and concentration are of interest to many researchers. However, it is a challenging device because of the complex physics of the working medium. There is a gap of published work concerning the stability of nanofluids after base-liquid separation due to phase change in heat pipe evaporator. Questions regarding the HP effectiveness reliability are still being answered unconvincingly, to the extent that it is not certain whether nanoparticles' suspension stability will be sustainable in a given liquid medium over the device usage period. Do the particles stick to a given HP wall surface? Do particles agglomerate and/or aggregate due to base-liquid evaporation? Do particles block the porous medium and affect the wick capillarity (wettability)? To our knowledge, a gap of relevant studies explains the transient changes occurring between the heat pipe and nanofluid need to be fulfilled. Therefore, further experimental studies are required to investigate the effectiveness of the HP incorporating nanofluids' temporal performance sustainability. Different nanoparticles concentrations with and without surfactants are critical factors to support the current understanding. The main objectives of this study are to investigate the heat pipe's nanofluid-enhanced performance stability after one year of usage under the same operation conditions and to justify the redundant performance with the HP life performance reliability. Particle size distribution Water-alumina nanofluids with a volume concentration of 1 and 3% nanoparticles, 20-70 nm in diameters, are manufactured as explained in our previous publication (Hassan, Singh et al., 2015). Alumina nanoparticles are dispersed in deionized water, and the suspensions were sonicated in room temperature control facility, 25 ± 5°C using a high-performance dispersing instrument, IKA T25 digital ULTRA-TURRAX, for 6 h. The suspension was stabilized by acidifying with hydrochloric acid to a pH of 5.4, which is far from the isoelectric point (IEP) of alumina nanofluids. The particle diameter distribution and suspensions zeta potential were characterized by an acoustic and electroacoustic spectrometer from Dispersion Technology, DT-1201. Figure 1 shows the particle size distribution (PSD) of alumina particles in the tested nanofluids, and it shows a binomial distribution of 20-70 nm with 65 nm peak value. This distribution is in reasonable agreement with the dry alumina particles manufacturer's manual, although some deviation because of the wet environment versus the dry one is noticed. 
Some aggregates are represented by the presence of a small binomial-peak which is not in the original manufacturer's particle size range; they are relatively very few in population to affect the fresh nanofluid thermophysical properties. This distribution infers that the nanoparticles are dominating the suspensions alone with few aggregates. However, it gives an indication of the possible aggregation if the particles are getting closer to each other due to liquid separation. Nanofluids viscosity characterization Rheology characterization of water-alumina nanofluids is measured from a stress-controlled ARES-G2 Rheometer using a 30 mm diameter bob setting for the concentric cylinder. Advanced Peltier Systems (APS) is used to control the system's temperature with an accuracy of ± 0.1°C within the range of 20-60°C. The experiment was conducted at 5.485 mm of a gap between bob and cylinder. Effective-viscosity (μ eff ), the viscosity of the nanofluids to that of the base fluid at same measuring conditions, versus shear rate measurements is carried over a wide range of 0.1 to 1,000 s −1 as shown in Figure 2. The shear rate is varied logarithmically to cover a broad range of viscosity measurements. The examined nanofluids showed a Newtonian behavior in the experimental range. Figure 3(a) shows the rheology measurements for the nanofluids viscosity versus temperature at different alumina particles concentrations. The effective viscosity behavior of the nanofluids against temperature is shown in Figure 3(b). As indicated in Figure 3(a), the viscosity is exponentially decreased with temperature increase indicating an Arrhenius type trend for the nanofluids, which is in agreement with what had been seen by Zhou, Ni, and Funfschilling (2010). Further, the effective viscosity's temperature dependence can be seen to be more significant at 3 vol% as shown in Figure 3(b). This effect might infer that the contribution of intermolecular forces between particles to increase the nanofluid viscosity at a lower temperature and decays with increasing the energy of Brownian motions because of the temperature. Therefore, the rheology effect will be reduced by increasing the temperature. The lower particle concentration, 1%, comes closer to the base fluid viscosity; however, the higher particles concentration, 3%, still showing a significant difference in viscosity at high temperature. These results are indicating that the higher particle concentrations will be even getting worse. Thermal conductivity measurements The methodology and technique that have been used for thermal conductivity measurement are discussed and detailed in Hassan, Singh et al. (2015). Figure 4 illustrates the thermal conductivity enhancement for the alumina particles nanofluids in the temperature range of 20 to 60°C. The percentage enhancement is calculated as the percentage change of the nanofluid thermal conductivity with respect to the base fluid. In this case, water was the base fluid with a thermal conductivity variation from 0.615 to 0.651 W/m K in the measured temperature range. The thermal conductivity improvement is much higher than that of the Hamilton Crosser (HC) model. In addition, nanofluids thermal conductivity improvement is strongly temperature dependent. Boiling and specific heat measurements The aim of this test is to measure the thermal diffusion rate of the base fluid and the nanofluids under investigation as well as their specific heat and find out if there is any correlation between these two quantities. 
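Before describing the boiling test, the thermal-conductivity comparison above can be made concrete with a short sketch of the Hamilton–Crosser estimate for a dilute spherical-particle suspension, together with an Arrhenius-type viscosity fit of the kind suggested by the temperature trend in Figure 3(a). This is an illustrative reimplementation, not the authors' code; the alumina conductivity and the Arrhenius fit constants are placeholder assumptions rather than measured values from this work.

```python
import numpy as np

def hamilton_crosser(k_f, k_p, phi, n=3.0):
    """Hamilton-Crosser effective thermal conductivity of a dilute suspension;
    n = 3 corresponds to spherical particles."""
    num = k_p + (n - 1.0) * k_f + (n - 1.0) * phi * (k_p - k_f)
    den = k_p + (n - 1.0) * k_f - phi * (k_p - k_f)
    return k_f * num / den

def arrhenius_viscosity(T_kelvin, mu_inf, E_a):
    """Arrhenius-type viscosity-temperature trend: mu = mu_inf * exp(E_a / (R*T))."""
    R = 8.314  # J/(mol K)
    return mu_inf * np.exp(E_a / (R * T_kelvin))

if __name__ == "__main__":
    k_water = 0.615           # W/(m K) near 20 C (value quoted above)
    k_alumina = 36.0          # W/(m K), assumed bulk alumina conductivity
    for phi in (0.01, 0.03):  # 1 and 3 vol%, as studied above
        k_nf = hamilton_crosser(k_water, k_alumina, phi)
        print(f"phi = {phi:.2f}: HC enhancement = {100 * (k_nf / k_water - 1):.1f} %")
    # Placeholder Arrhenius constants, chosen only to illustrate the trend.
    T = np.linspace(293.0, 333.0, 5)
    print(arrhenius_viscosity(T, mu_inf=1.4e-6, E_a=16.0e3))
```

As noted above, the measured conductivity enhancement exceeds the Hamilton–Crosser estimate, so the model serves here only as a baseline for comparison.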
An experimental investigation of pool boiling heat transfer in γ-alumina nanofluids was performed by Wen and Ding (2005). Their measurements showed a significant enhancement in the boiling heat transfer, reaching 40% at an alumina particle (size 10–50 nm) loading of 1.25% by weight. A similar experiment for our in-house prepared nanofluids is performed at atmospheric pressure; however, the boiling heat transfer coefficient is out of the scope of this paper. A temperature-controlled hot plate is used to warm up the nanofluid to its boiling onset. A linear-sensitivity E-type thermocouple is centered in the nanofluid, and a data-acquisition device records the temporal temperature readings, similar to T 6 in Wen and Ding (2005). An error analysis showed ±1.1°C uncertainty for this experimental setup with 95% confidence. Temperature versus time measurements for the base fluid as well as the nanofluids at different particle concentrations are shown in Figure 5. The main idea here is to fix the heat source rate, Q, and record the nanofluid's temperature response versus time. As can be seen from the plots, the slope of the temperature versus time curve, ΔT/Δt, is elevated by introducing nanoparticles, and the elevation increases with nanoparticle concentration. These results indicate an enhancement in the boiling heat transfer coefficient, as previously observed by Wen and Ding (2005); a higher ΔT/Δt corresponds to a higher heat transfer rate according to Equation (4). The nanofluid specific heat varies with the nanoparticle concentration in the base fluid according to Equation (1), as reported by Buongiorno (2005). Figure 6 shows the measured base fluid specific heat, Cp, as well as the calculated nanofluid specific heat, (Cp) nf , versus temperature. The particle concentration, φ, and the nanofluid density, ρ nf , are calculated from Equation (3) (Pak & Cho, 1998). In contrast to the base fluid, the specific heat of the nanofluid, (Cp) nf , decreases with increasing temperature as well as with increasing nanoparticle concentration, as illustrated in Figure 6. According to the heat diffusion rate equation, Equation (2), at constant heat rate Q and fixed fluid mass, the higher ΔT/Δt slope must be accompanied by a specific heat reduction. Therefore, the results of Figures 5 and 6 are in good agreement: the heating curve slope (ΔT/Δt) in Figure 5 increases with increasing nanoparticle concentration, which in turn reduces the nanofluid specific heat. The outcome of this observation is the thermal diffusion enhancement of the nanofluids compared to the base fluid. The measured average specific heats of these fluids are 4.14 ± 0.06 kJ/kg K for water, 4.04 ± 0.05 kJ/kg K for the 1% nanofluid, and 3.8 ± 0.04 kJ/kg K for the 3% nanofluid. The predicted values are higher than the measured ones; however, both show the same trend for the three concentrations. In other words, the thermal conductivity enhancement shown in Figure 4 is accompanied by a specific heat reduction, which further increases the nanofluids' thermal diffusivity, α = k/(ρ·cp). As illustrated by the nanofluid characterization, a significant enhancement of the heat transfer properties is observed, in agreement with previously published literature. The question here is whether these enhancements persist over time, when phase change of the nanofluid is mandatory in the implemented application.
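For reference, a minimal sketch of the mixing rules cited above (a Pak & Cho–style rule for density and a Buongiorno-style rule for heat capacity) and of the resulting thermal diffusivity α = k/(ρ·cp) is given below. The alumina property values and the assumed conductivity enhancement are illustrative placeholders, not measurements from this work, and the exact equation forms used in the paper are those of the cited references.

```python
import numpy as np

def density_nf(phi, rho_f, rho_p):
    """Volume-weighted nanofluid density (Pak & Cho style mixing rule)."""
    return (1.0 - phi) * rho_f + phi * rho_p

def specific_heat_nf(phi, rho_f, cp_f, rho_p, cp_p):
    """Heat-capacity mixing rule (Buongiorno style):
    (rho*cp)_nf = (1-phi)*(rho*cp)_f + phi*(rho*cp)_p."""
    rho_nf = density_nf(phi, rho_f, rho_p)
    return ((1.0 - phi) * rho_f * cp_f + phi * rho_p * cp_p) / rho_nf

def thermal_diffusivity(k, rho, cp):
    """alpha = k / (rho * cp), in m^2/s."""
    return k / (rho * cp)

if __name__ == "__main__":
    # Base fluid (water) properties near room temperature.
    rho_w, cp_w, k_w = 998.0, 4140.0, 0.615        # kg/m^3, J/(kg K), W/(m K)
    # Assumed alumina particle properties (illustrative values).
    rho_al, cp_al = 3970.0, 880.0
    for phi in (0.01, 0.03):
        rho_nf = density_nf(phi, rho_w, rho_al)
        cp_nf = specific_heat_nf(phi, rho_w, cp_w, rho_al, cp_al)
        k_nf = k_w * (1.0 + 5.0 * phi)             # assumed enhancement, for illustration
        print(f"phi={phi:.2f}: rho={rho_nf:.0f} kg/m^3, cp={cp_nf:.0f} J/(kg K), "
              f"alpha={thermal_diffusivity(k_nf, rho_nf, cp_nf):.3e} m^2/s")
```

With typical alumina property values these rules give roughly 4.0 and 3.8 kJ/kg K at 1 and 3 vol%, consistent with the measured averages quoted above.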
The heat pipe is one of these applications that have been developed based on working medium phase change phenomena. When nanofluid was applied in the heat pipe, it showed a significant conductivity enhancement as reported by Hassan, Singh et al. (2015) in agreement with a number of publications. Later, on focusing on the phase change it is found that the heat pipe does not result in the same performance over a period of just several months. The study was repeated over a period of one year, and a comprehensive structural investigation is performed on the heat pipe internal surface as well as the used nanofluid. The following section will describe these study results. Experimental apparatus The apparatus is a modification of the design previously built by Hassan, Singh et al. (2015) to facilitate the working medium recharging as illustrated in Figures 7 and 8, wherein the HP internal diameter is 10 mm, and the total length is 200 mm. A 1 mm thick porous mesh is lining the pipe's internal wall, 90% porosity porous medium. The evaporator section is surrounded by an electrical heater with adjustable power supply. AC and DC electric heaters are alternatively used as a heat source. ACheater supplies 80 W of heat rate and the DC heater supplies 50 W of heat rate. A cold water jacket is used to dissipate the condenser heat. Vacuum-pressure (VP) sensors and E-Type thermocouples (TC) are inserted in the adiabatic and the condenser sections as shown in Figure 7. TC1 and TC2 for readings T 1 and T 2 are inserted 1 mm near the inside wall surface and 100 mm a part as demonstrated in Figure 7. Vacuum pump with rigid houses and needle valves are facilitated to affirm the vacuum tightness in the HP and to control its fluid recharging process. The heater power rate is used to control the evaporator temperature, T 2 . The vacuum pressure gauges are used to measure the vacuum level and to record the pressure development in the pipe due to the evaporator heat load. Experimental analysis Following the nanofluid preparation and characterization, it has been used to recharge the HP for a series of experimental runs for performance measurements. Fresh HP are used to comply with the fresh working medium. The vacuum pressure before every run is 750 mm Hg and the nanofluid charge is 10 ml. The evaporator temperature, T 2 , is a controlled variable, condenser temperature, T 1 , is a dependable variable. The first subsection of the experimental results will explain the heat pipe performance versus the evaporator temperature over one year of operation, and the second subsection will present and discuss the ANOVA SEM microscopy imaging for the nanoparticles structure in the heat pipe porous medium. Heat pipe performance results and discussion Heat flows in the HP through its metal skin and working medium phase change. Temperatures are measured across the adiabatic section of the HP metal skin using TC1 and TC2. The main idea of these measurements is to monitor the portion of heat transfers by each medium. If the temperature difference across the metal skin is increased, it indicates more heat is carried on by the phase change of the fluid medium, which can be used as an indicator for the HP thermal performance enhancement. The opposite, if the temperature difference is decreased, indicating that more heat is carried on by the HP metal skin, which indicates lower HP performance. Temperatures plots, T 1 versus T 2 , for the three investigated working mediums, water and nanofluids 1 and 3 vol% are illustrated in Figure 9. 
There are two sets of plots in this figure, set (a) shows the three working medium plots results for the fresh HP (AC heater is used), and set (b) shows the corresponding results for the reused HP (DC heater is used). It can be seen from Figure 9 that the fresh HP plots depict a significant effect for using nanofluids as a working medium compared to the reused HP. The heaters power rate is controlled by TC2, the heat rate in the condenser is controlled by varying the condenser cooling water flow rate. Therefore, as T 2 varies, T 1 is responding to this variation according to the HP thermal performance. According to Fourier equation, Equation (4), if the pipe cross-section area, A, the distance between the T 2 and T 1 thermocouples, dx and the HP metal thermal conductivity, k are constants, then the heat transfer rate, Q , will be directly proportional to the temperature difference as shown in Equation (5). Figure 9. T 1 versus T 2 readings; (a) Fresh HP and (b) Reused HP. Therefore, the increase of the temperature difference between the two measured temperatures indicates more heat absorption by the working fluid rather than the skin thermal conductivity and vice versa, as is going to be explained by the HP pressure measurements in Figure 12 later in this section. The temperature difference (T 2 − T 1 ) is calculated from temperature results in Figure 9 and is plotted versus the control temperature, T 2 , as shown in Figure 10. As it can be seen in the figure, the temperature difference increases in the direction of the base fluid to the higher particles concentration nanofluids as an indication of enhancement in the HP thermal performance. It indicates the effectiveness of the nanofluids in increasing the heat pipe heat-transfer rate through the working medium thermal diffusion and evaporation as seen earlier in Hassan, Alzarooni et al. (2015). In order to get a better understanding of the effect of T 2 and T 1 readings, an increase in heat transfer rate relationship is derived from Equation (5) as shown in Equation (6) below. The enhancement in the heat pipe heat transfer, on the left-hand side of Equation (6), is calculated from the temperature difference results in Figure 10, on the right-hand side of the equation, for both fresh and long term used HP. The results are plotted in Figure 11 versus the control temperature T 2 . (4) It can be seen that the change is much higher than the uncertainty in these experiments, which was less than 5% with a confidence level of 95%. The thermal diffusivity improvement can also be seen from the vacuum pressure (VP) sensor readings, Figure 12. Initially, pressure sensor readings indicated higher-pressure for 3 vol% nanofluids, then later, and after several runs, no significant change has been noticed. Figure 11 illustrates the HP performance changes due to the long-term usage as represented by the significant reduction of the heat transfer rate, ΔQ. In addition, it shows the linear behavior of the HP performance reduction with increased evaporator temperature, T 2 for the fresh HP versus the nonlinear behavior for the long-term used HP. The nonlinearity could be an indication for the particles' accumulation on the HP porous medium. Figure 12 shows the HP pressure measurement versus the control temperature, T 2 for the fresh HP with the tested nanofluids. The working medium evaporation enhancement can be seen in this figure as the pressure in the HP is increased by increasing the particles concentration in the nanofluid as well as T 2 . 
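Because the cross-section A, the thermocouple spacing dx, and the skin conductivity k are held fixed, Equations (4)–(6) above reduce to simple bookkeeping: the conducted heat rate is proportional to (T2 − T1), so the relative enhancement follows directly from the measured temperature differences. A minimal sketch is given below; the temperature differences are placeholders for illustration, not the measured data.

```python
def heat_rate(k, area, dT, dx):
    """Fourier conduction rate through the pipe skin: Q = k * A * dT / dx (Eqs. 4-5)."""
    return k * area * dT / dx

def enhancement(dT_nanofluid, dT_base):
    """Relative change in heat transfer rate, dQ/Q = (dT_nf - dT_base) / dT_base,
    valid when k, A and dx are identical between the two runs (Eq. 6)."""
    return (dT_nanofluid - dT_base) / dT_base

if __name__ == "__main__":
    # Placeholder adiabatic-section temperature differences (T2 - T1), in K.
    dT_water, dT_1pct, dT_3pct = 4.0, 6.0, 8.0
    for label, dT in (("1 vol%", dT_1pct), ("3 vol%", dT_3pct)):
        print(f"{label}: dQ/Q = {100 * enhancement(dT, dT_water):.0f} %")
```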
To sustain the pressure in the heat pipe at its initial value, more heat should be removed by the condenser cooling, indicating an enhancement in the heat transfer load that can be removed by the HP with nanofluids. This observation promotes the nanofluid to be an efficient working medium for HP cooling equipment. Therefore, with these results, nanofluids indicated a positive impact on the HP efficiency as it increases the temperature difference along the adiabatic section as shown in Figure 10 for the fresh HP, and therefore increases the heat dissipation from the evaporator. To draw a robust conclusion for the observed deficiency of the HP after long-term of usage and eliminate the doubt of the heater type effect (e.g. AC versus DC), the experiment has been repeated three months later for the HP with the DC heater. An unpredictable performance is observed as seen in Figure 13 for the high nanoparticles' concentration fluid. As shown, with the 3 vol% working medium deteriorating, it has a significant temperature difference drop compared to the less used HP, with T 2 between 45 and 55°C. The reason is mainly because of the base fluid separation which occurs while the working medium is evaporated in the evaporator. In order to explain this observation, the case of water desalination process is a good example. In this process, pure water is separated from saline water (water + salt) leaving more salt concentrated brine with different thermal properties. Similarly, in the nanofluid, water will be separated in the evaporator leaving higher nanoparticles concentration fluid behind. When the condensate is returned to the evaporator, it will not be mixed perfectly as it was sonicated in the preparation process. Therefore, nanofluid homogeneity and stability are altered and two new separate working mediums are generated in the HP evaporator: pure water and highly concentrated nanofluid. This will definitely impact the forces and the bridging balance mechanisms between the particles and liquid. Particles agglomeration followed by aggregation would be the results of this mechanism imbalance. Microscopic imaging is presented in Section 4.2 for the HP wick as well as the nanoparticles after several uses are investigated to prove the above discussion. ANOVA scan electron microscopy (SEM) results and discussion SEM imaging is performed for the HP internal mesh and for the dried nanoparticles that are collected from fresh and reused nanofluids. Figures 14 shows and SEM images for the HP internal mesh for the 3 vol% nanofluids when it is fresh (a) and after one month of use (b). The images indicate a significant increase in the nanoparticles coat layer on the mesh surface after one month of use in the HP. Dried particles SEM images are shown in Figure 15 for the same nanofluid particles. By investigating the particle size range, it can be seen that the particle size is increased due to the particles agglomeration and aggregation as discussed in the previous section. These images indicate a significant impact on the base fluid separation process that will not only increase the nanofluid particles concentration but also will increase the particle size. Accumulating the particles on the mesh may increase its wettability as mentioned in KyuHyung et al. (2010); however, this may block the wick porosity if the wick's mesh is not sufficiently coarse. Therefore, the HP performance enhancement that has been seen experimentally by KyuHyung et al. 
(2010) and computationally by Mashaei, Shahryari, Fazeli, and Hosseinalipour (2016) may not remain if the experiment is repeated several times and if the model implements the attractive forces between the particles when liquid is evaporated leaving the particles closer to each other. Conclusions This research aimed to investigate the long-term performance of heat pipe devices facilitated by nanofluids as working medium. Water-based nanofluids with 1 and 3% alumina particles are prepared using sonication process and are well characterized for particle distribution stability as well as for thermophysical properties. The heat pipe study was accomplished in 10 mm diameter copper HP lined by 1 mm brass mesh. The HP performance is measured using vacuum pressure sensors and thermocouples that are installed in the evaporator and condenser, separated by an adiabatic section. Pressure results indicated the vacuum pressure level in the pipe drops, as does the instantaneous pressure. The temperature measurements indicate a change in the heat transfer rate across the heat pipe at different operation conditions and different nanofluids particle concentration. Fresh and reused heat pipes are used for performance comparison. Heat pipe heat transfer rate is improved, with up to 100% change in the heat transfer rate. However, the improvement depreciates after reusing for several trials in a one year period. This decay raises a concern about the particles distribution and suspension sustainability after the base-fluid separation due to phase change. In order to determine whether it is particles agglomeration or aggregation, another set of experimental characterization has been done on the used nanofluid comparing with the fresh one. Scan electron microscope (SEM) imaging analyses are performed on the cutout internal surface of the heat pipe porous medium as well as the used nanofluid particles to investigate the nanoparticles structure. The SEM results revealed a noticeable increase in the nanoparticles size, with nanoparticles coating on the wick-mesh. Particle sizes elevation indicated particles agglomeration and aggregation, that definitively accounts for the development of a skin layer and porosity reduction leading to the heat pipe's performance depreciation. Supplemental data Supplemental data for this article can be accessed at http://dx.doi.org/10.1080/23311916.2017.1336070.
6,601
2017-01-01T00:00:00.000
[ "Engineering", "Physics" ]
Analysis of the directional pattern of a non-equidistant antenna array with a random arrangement of radiating elements in three-dimensional space A three-dimensional antenna array with a spherical surface was investigated. The channels were randomly placed at array points with a vertical and horizontal step equal to 0.45 wavelengths. For uniform filling of the sphere, the minimum allowable distance between the channels was introduced, depending on the number of channels and the sphere size. The maximum side lobe level depends on the number of channels. To calculate the path difference in a given direction, the coordinate system was rotated so that one of the coordinate axes coincided with the selected direction. To find the radiation pattern, the scalar product of the amplitude-phase distribution vector corresponding to the signal emission direction was calculated by the complex-conjugate vector of the amplitude-phase distribution corresponding to all possible directions of signal arrival from an external source. Introduction Currently, two-dimensional (flat) equidistant antenna arrays are widely used in aircraft and ship radars, as well as in missile attack warning radars. Such arrays make it possible to perform electronic scanning in two planes: in azimuth and elevation, to form the radiation pattern zeros for active jammers, thereby suppressing interference, to obtain a small level of the radiation pattern side lobes using the weight processing of the receiving channels with the Hamming window, and provide high directional action and small angular resolution element. Unfortunately, all these wonderful possibilities are limited in the direction of scanning to some sectors and cannot be realized at full 360 degrees in azimuth. In a surveillance radar station, such an array has to be rotated in azimuth, which is very difficult due to the need to transmit information from a large number of channels through a rotating joint and a large array mass for the decimeter and meter ranges. Therefore, a flat array immediately turns into a linear (one-dimensional) vertical array, in which, to further reduce the number of information transmission channels, the directional patterns of 3 or 4 receiving channels in elevation angle are formed directly at ultrahigh frequency using digitally controlled phase shifters. A radical solution to the problem of transmitting information from the antenna array to the processing device is the use of a three-dimensional (volumetric) non-equidistant antenna array, which electronically scans the entire space by rotating the coordinate system in azimuth and elevation. Such an array is called the "Crow's Nest" [1,2,3]. The antenna's name was derived from the Crow's Nest designation for a platform at the top of a sailing ship's mast, which is used as an observation deck for observation in all 1. Amplitude-phase signal distribution and the directional pattern of a three-dimensional antenna array The amplitude-phase signal distribution over the antenna array elements is determined by the path difference, that is, the distance between the array elements in the direction of signal arrival. Distance is a dimensionless quantity and is specified in wavelengths. To calculate the path difference in a given direction, it is necessary to rotate the coordinate system so that one of the coordinate axes coincides with the selected direction. The difference in the path of each element is calculated relative to the array center and is its coordinate along this axis. 
To calculate the path difference, we graphically depict the signal arrival direction and one array element; the result is shown in figure 2. In this figure, A is the wave front arrival direction and E is the location of the signal receiving element. To find the path difference for the A direction, we rotate the coordinate system so that the AP line becomes the vertical axis of the new coordinate system. For this purpose, the Cartesian coordinates of the point E are recalculated: the coordinate system rotation matrix about the Z axis, through the azimuth angle of the A direction, is applied first [4,5]. In the resulting vector, the first coordinate is x, the second is y, and the third is z. The path difference relative to the origin in the A direction is therefore the element's coordinate along the new vertical axis (Equation (2)). We transform the obtained path difference into a signal phase, with the help of which we calculate the complex amplitude of the signal arriving from the A direction at the point E after the coordinate system rotation. The result is a vector of the phase distribution over the antenna array elements, which depends on the chosen direction; since the path differences are expressed in wavelengths, the k-th component of this vector is exp(j·2π·rk), where r is the vector formed from the path differences of each element of the antenna array and n is the number of array elements. To find the directional pattern, it is necessary to calculate the scalar product of the amplitude-phase distribution vector corresponding to the main lobe of the directional pattern (the signal emission direction), Bg, with the complex-conjugate amplitude-phase distribution vector corresponding to each possible direction of arrival of the received signal from an external source, Bs. If the received signal is the reflection of the radiated signal, that is, if the two vectors differ only by complex conjugation, we obtain the maximum value of the directional pattern. To calculate the directional pattern, we use simulation modeling. Influence of the three-dimensional antenna array parameters on the directional pattern The width of the main lobe of the directional pattern (DP) emanating from the array center depends on the array size, that is, on the diameter of the sphere in which the transmitting and receiving elements are randomly placed. Since the sphere is symmetrical (its vertical size equals its horizontal size), after the coordinate system rotation the DP main lobe width is the same in the two mutually perpendicular planes whose intersection line coincides with the direction of the DP maximum. The DP main lobe width is measured at the half-power level and equals 2 degrees for a sphere radius of 12.5 wavelengths [6,7]. In the original coordinate system, the azimuth width of the DP main lobe depends on the elevation angle. When calculating the DP, the azimuth is varied from 0 to 359 degrees and the elevation angle from −90° to +90°. As a result, with an angular sampling step of 0.5 degrees, the directional pattern is an array of 361 rows and 720 columns for elevation and azimuth, respectively. When assessing the effect of the array step on the directional pattern, the same number of array elements, equal to 128, was assumed to be randomly located inside the sphere. During placement, the minimum distance between elements was chosen as large as possible while still allowing the specified number of elements to fit inside the selected sphere volume. For 128 array elements and a sphere radius of 12.5 wavelengths, this distance is 3.3 wavelengths. The array step is varied from 0.4 to 0.55 wavelengths in increments of 0.05 wavelengths.
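The directional-pattern computation described above (phase distribution from path differences, followed by a scalar product with the complex-conjugate steering vector) can be condensed into a short sketch. This is an illustrative reimplementation under stated assumptions (unit element amplitudes, element positions and path differences expressed in wavelengths, uniform random element positions), not the authors' simulation code.

```python
import numpy as np

def steering_vector(positions_wl, az_deg, el_deg):
    """Phase distribution over the array elements for a plane wave from the given
    (azimuth, elevation) direction; element positions are given in wavelengths."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    u = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
    path_diff = positions_wl @ u   # path difference of each element, in wavelengths
    return np.exp(1j * 2.0 * np.pi * path_diff)

def directional_pattern(positions_wl, beam_az, beam_el, az_grid, el_grid):
    """Normalized |<conj(B_s), B_g>| over a grid of arrival directions; equals 1
    when the arrival direction coincides with the emission (beam) direction."""
    n = len(positions_wl)
    b_g = steering_vector(positions_wl, beam_az, beam_el)
    dp = np.empty((len(el_grid), len(az_grid)))
    for i, el in enumerate(el_grid):
        for j, az in enumerate(az_grid):
            b_s = steering_vector(positions_wl, az, el)
            dp[i, j] = np.abs(np.vdot(b_s, b_g)) / n
    return dp

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy geometry: 128 elements drawn uniformly inside a sphere of radius
    # 12.5 wavelengths (the minimum-spacing constraint is sketched separately).
    pts = rng.uniform(-12.5, 12.5, size=(2000, 3))
    pts = pts[np.linalg.norm(pts, axis=1) <= 12.5][:128]
    # The paper samples azimuth/elevation every 0.5 deg (720 x 361 points);
    # a coarser 2 deg grid is used here to keep the demo fast.
    az = np.arange(0.0, 360.0, 2.0)
    el = np.arange(-90.0, 91.0, 2.0)
    dp = directional_pattern(pts, beam_az=180.0, beam_el=0.0, az_grid=az, el_grid=el)
    print(dp.shape, round(float(dp.max()), 3))   # main-lobe value is 1.0
```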
The normalized DP, obtained as a result of experiments under the indicated conditions, are shown in figures 3, 4, 5 and 6, where the target azimuth is chosen equal to 180°, and the elevation angle is 0°. In the foreground of each of the figures there is a line corresponding to an elevation angle of + 90°, in the back --90°. From the analysis of the above figures, it follows that changing the array step does not affect the directional pattern width, which turned out to be equal to 2º for the chosen size of the array aperture of 12.5 wavelengths. An exception is the array step equal to 0.5 wavelengths, which is used as the maximum possible in linear and flat arrays. With such an array spacing, shown in figure 5, the radiation pattern has two identical main lobes, one is in the selected direction, the other is in the opposite azimuth, the latter being cut into two parts for 0 ° and 359°. It should be noted that the width of the side lobes in azimuth increases with an increase in the elevation angle magnitude, however, the height of the side lobes remains, on average, the same as at small elevation angles. Since the randomness of the array elements' arrangement should affect the DP maximum side lobe magnitude, it is of great interest to calculate its mean value and the standard deviation value (SDV). Knowing the SDV will make it possible to assess the possibility of reducing the maximum side lobe by 3 levels by enumerating all possible transceiver locations and finding the best one. For further research, we will choose an array step equal to 0.45. Without changing the DP width in azimuth and elevation at the half power level and the array step, we will restrict ourselves to changing the number of transceivers from 64 to 1024, doubling this number each time. Directional patterns obtained as a result of experiments with a different number of transceivers located at random with given placement parameters are shown in figures 7, 8, 9, 10 and 11. The average value of the normalized directional pattern maximum side lobe and its standard deviation for a different number of transceivers are given in table 2. It follows from the table that for each double increase in the number of transceivers placed randomly inside a sphere of constant radius equal to 12.5  , which provides a beam width of 2º, the level of the directional pattern side lobes decreases by 1.4 times. The value of the standard deviation of the directional pattern maximum side lobe changes differently with each increase in the number of array elements by a factor of 2, in contrast to its mean value, for which all changes are the same. So, with an increase in the number of elements from 64 to 128, the maximum side lobe SDV decreases by 1.2 times, from 128 to 256 and from 256 to 512 -by 1.45 times, and from 512 to 1024, the SDV decreases by 1.5 times. ... It follows that, starting with 256 array elements, the SDV decreases by about 1.5 times and, therefore, the maximum side lobe average level for 2048 transceivers will be 0.076, and the SDV is 0.004. Comparing the DP of arrays with the same parameters, shown in figures 4 and 8, we note that the existing difference in the shape and size of the DP side lobes is due to the difference in the placement of the transceivers inside the sphere. Let us now consider the effect of changes in the elevation level on the directional pattern. 
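Before turning to the elevation dependence, the random element placement used above (grid points inside a sphere with a minimum allowable spacing that still lets the requested number of channels fit) can be generated, for example, by simple rejection sampling over the grid nodes. The paper does not spell out its placement algorithm, so the procedure below is an assumption for illustration; the grid step, sphere radius, and minimum spacing are the values quoted above.

```python
import numpy as np

def sphere_grid(radius_wl, step_wl):
    """Candidate array points: a cubic grid with the given step, kept inside the sphere."""
    axis = np.arange(-radius_wl, radius_wl + step_wl, step_wl)
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    pts = np.stack([x.ravel(), y.ravel(), z.ravel()], axis=1)
    return pts[np.linalg.norm(pts, axis=1) <= radius_wl]

def place_channels(grid_pts, n_channels, d_min_wl, rng):
    """Randomly pick grid nodes one at a time, rejecting any candidate that lies
    closer than d_min to an already accepted element (illustrative procedure)."""
    chosen = []
    for idx in rng.permutation(len(grid_pts)):
        p = grid_pts[idx]
        if chosen and np.min(np.linalg.norm(np.asarray(chosen) - p, axis=1)) < d_min_wl:
            continue
        chosen.append(p)
        if len(chosen) == n_channels:
            return np.asarray(chosen)
    raise RuntimeError("could not place the requested number of channels; lower d_min")

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    grid = sphere_grid(radius_wl=12.5, step_wl=0.45)   # sphere radius and array step from above
    elements = place_channels(grid, n_channels=128, d_min_wl=3.3, rng=rng)
    print(len(grid), "candidate nodes ->", elements.shape[0], "placed elements")
```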
Since the sphere is symmetric, the directional pattern width in the two planes whose intersection line coincides with the direction of the DP maximum is the same; therefore, for any angular position of the beam in the initial coordinate system, whatever the azimuth and elevation angle, the DP width in both planes of the local coordinate system (one axis of which coincides with the signal arrival direction and, accordingly, with the directional pattern maximum) does not change, since the spherical antenna aperture is the same in these planes. In the original coordinate system, when the elevation angle changes, the end of a vector of the same length lies on a circle with a smaller radius than when the elevation angle is 0. Thus, at an elevation angle of 60 degrees the circle radius is half of the maximum. In the limit, at an elevation angle of 90 degrees, the circle radius is 0 and the azimuth is not defined, since the projection of the vector onto the horizontal plane is a point. When the circle radius is halved, the angle at which an arc of the same length is seen doubles. Hence it follows that the directional pattern width in azimuth at an elevation angle of 60 degrees should be twice that at an elevation angle of zero. These considerations are illustrated in figure 12. (Figure 11 shows the 3D DP of a 1024-element array with an array step of 0.45 and an elevation angle of 0°.) Since the radius of the circle described by the end of the fixed-length vector remains constant when the elevation angle changes, the width of the directional pattern in elevation does not change. The study of the dependence of the directional pattern on the elevation angle is carried out with the DP width in azimuth and elevation at the half-power level equal to 2 degrees. The array step takes the previous value of 0.45 wavelengths, and the number of transmit-receive elements remains equal to 128. The azimuth is fixed at 180 degrees, and the elevation angle takes the values 0, 60, 85, and 89 degrees. The results of the DP calculation are shown in figures 13, 14, 15 and 16. From the analysis of the directional pattern cross-sections shown in figures 17 and 18, which correspond to the 3D diagrams shown in figures 13 and 14, respectively, it follows that as the elevation angle increases from 0 to 60 degrees, when the projection of the DP maximum direction vector onto the horizontal plane shortens by a factor of 2, the DP width in azimuth increases from 2.5 to 5.5 degrees, which essentially confirms the situation shown in figure 12. The pattern width is calculated at the 0.707 voltage level, which corresponds to the 0.5 power level. The slight broadening of the experimental pattern compared to the theoretical one is due to the fact that the effective aperture of the non-equidistant array is somewhat less than 12.5 wavelengths because of the random arrangement of the elements. With an increase in the number of elements, the discrepancy between the theoretical and experimental directional pattern widths will decrease. As for the further change of the elevation angle that leads to the next expected two-fold reduction of the projection of the DP maximum direction vector onto the horizontal plane, it corresponds to an elevation angle of 85 degrees. The DP for this elevation angle is shown in figure 18, and its azimuth cross-section in the main lobe area is shown in figure 19.
From the last figure it follows that the directional pattern width with the indicated parameters at an elevation angle of 85 degrees is 33 degrees, i.e. when the elevation angle is changed from 60 to 85 degrees, the DP width increases from 5.5 to 33 degrees, which is 3 times more than expected. From this it follows that the considerations for the DP broadening with a change in the elevation angle shown in figure 12 are valid only for elevation angles less than 60 degrees. At large angles, the great-circle arc from figure 12 becomes a chord for the small-circle arc. The greater the arc length is greater than the chord length, the greater the DP azimuth expansion will be greater than expected. At an elevation angle of 89 °, two intersecting circles of the same radius are formed on the sphere surface, which are visible from the sphere center at the same angle of 2 °. The circle center formed by the main lobe of the pattern relative to the center of the array is located on the azimuthal circle and vice versa. The points of intersection of these circles cut off the arc from the azimuthal circle, which corresponds to the DP main lobe width in azimuth, equal to 120 °. Starting from an elevation angle of 89.5 °, the 10 azimuth circle is completely inside the circle formed by the DP main lobe, which width in azimuth in the original coordinate system becomes 360 °. The average value of the normalized DP maximum side lobe and its standard deviation for different elevation angle values are given in table 3. With an increase in the elevation angle of more than 60 °, the DP main lobe begins to expand sharply in azimuth, however, the average level of the maximum side lobe and its SDV, as shown in table 3, do not change up to an angle of 89 °. As shown in figure 16, the DP at this elevation angle has a regular side lobe with a width of 360 ° in azimuth, the level of which is significantly higher than the level of the other random side lobes. With an increase in the elevation angle to 90 °, this side lobe grows and turns into the main one. Conclusion The presence of sufficiently large side lobes in the directional pattern of a three-dimensional nonequidistant antenna array requires the obligatory use of adaptive compensators for active interference [8,9,10]. The absence of movement of the three-dimensional array elements' movement facilitates the formation of the directional pattern zeros in the direction of the jammers. Unfortunately, bringing the level of the side lobes of the three-dimensional array DP to acceptable values requires a sufficiently large number of transmitting and receiving array elements and a long search for their best placement. This disadvantage is compensated to a certain extent by the possibility of circular electronic scanning of space, which can be either sequential (single-beam) scanning in elevation and azimuth, and parallel (multi-beam) in azimuth at fixed elevation angles.
3,991.4
2021-01-01T00:00:00.000
[ "Physics" ]
Tailored voltage waveform capacitively coupled plasmas in electronegative gases: frequency dependence of asymmetry effects Capacitively coupled radio frequency plasmas operated in an electronegative gas (CF4) and driven by voltage waveforms composed of four consecutive harmonics are investigated for different fundamental driving frequencies using PIC/MCC simulations and an analytical model. As has been observed previously for electropositive gases, the application of peak-shaped waveforms (that are characterized by a strong amplitude asymmetry) results in the development of a DC self-bias due to the electrical asymmetry effect (EAE), which increases the energy of ions arriving at the powered electrode. In contrast to the electropositive case (Korolov et al 2012 J. Phys. D: Appl. Phys. 45 465202) the absolute value of the DC self-bias is found to increase as the fundamental frequency is reduced in this electronegative discharge, providing an increased range over which the DC self-bias can be controlled. The analytical model reveals that this increased DC self-bias is caused by changes in the spatial profile and the mean value of the net charge density in the grounded electrode sheath. The spatio-temporally resolved simulation data show that as the frequency is reduced the grounded electrode sheath region becomes electronegative. The presence of negative ions in this sheath leads to very different dynamics of the power absorption of electrons, which in turn enhances the local electronegativity and plasma density via ionization and attachment processes. The ion flux to the grounded electrode (where the ion energy is lowest) can be up to twice that to the powered electrode. At the same time, while the mean ion energies at both electrodes are quite different, their ratio remains approximately constant for all base frequencies studied here.
Keywords: electrical asymmetry effect, electronegative plasmas, multi-frequency capacitive discharges, capacitively coupled radio-frequency plasmas, voltage waveform tailoring

Introduction

Plasmas have been used for various surface processing applications for many decades [1,2]. In particular, the selective and anisotropic etching of semiconductors as well as the deposition of functional coatings on large area substrates are performed in capacitively coupled radio frequency (CCRF) plasmas. These plasma systems are the subject of continuous research, as the technological demands are rising [2]. In most cases, a combination of many feed gases with a complex plasma chemistry is used. Accordingly, many different species of positive ions, negative ions, neutral radicals, and electrons can be found in the plasma volume. The plasma chemistry is driven by energetic electrons, which transfer energy to the neutral background gas in collisions. These electrons, in turn, gain energy via their interaction with the RF electric field. The electron power absorption dynamics, therefore, varies strongly in space and time. Low-pressure electropositive plasmas typically operate in the α-mode, i.e. they are sustained by the energy gain of plasma electrons in the oscillating sheath regions adjacent to the surfaces [3-17]. At the times of sheath expansion within the RF period, electrons are accelerated towards the quasineutral plasma bulk. In certain situations, the sheath electric field locally reverses its sign and accelerates electrons towards the surface during the phase of sheath collapse [5-9,18-20]. This mechanism of power absorption by electrons is present when the sheath collapse is fast and/or the electron mobility is reduced, e.g. by frequent collisions with the background gas and especially with molecular gases. In the DA (drift ambipolar) mode [4,15-17,21-27], a very similar mechanism occurs typically in the plasma bulk. Here, the RF conductivity is reduced by frequent collisions of electrons with neutrals at high pressures of molecular gases and/or by the presence of attachment processes resulting in the formation of negative ions, causing a depletion of the electron density. Therefore, a relatively strong electric field must develop to ensure the continuity of the current through the plasma. This is particularly important in electronegative plasmas operated at low radio frequencies, as the electronegativity is typically high under such conditions [21]. The group of Makabe et al showed in their pioneering spatio-temporally resolved investigations of single-frequency SF6 plasmas driven at a low radio frequency that the presence of negative ions in the sheath region strongly affects the electric field and electron heating dynamics [21].
In addition, secondary electrons, which are released from the surfaces mainly by the impact of positive ions, gain high energies in the sheath regions and may significantly affect or even dominate the total ionization leading to the so-called γ-mode [3,4,[6][7][8]21]. Similarly, electron-impact induced detachment of electrons in regions of strong electric fields may play a role in electronegative plasmas [6]. Thus, various mechanisms contribute to the electron power absorption dynamics, depending on the spatial dimensions, gas composition, pressure, and the externally applied voltage. Many aspects of these phenomena have been investigated in great detail in previous studies. However, almost all of these studies have been performed at a base frequency of 13.56 MHz, while the base frequency can be chosen within a wide band. At high frequencies, the plasma uniformity can be compromised by electromagnetic effects, although it has been shown that the non-uniformity is greatly reduced via the EAE at high frequencies [50]. Korolov et al found that the range over which the DC self-bias can be controlled is reduced at lower base frequencies in electropositive plasmas [54]. Until now, the effect of the base frequency on the discharge symmetry in electronegative plasmas, where the electron and ion dynamics are very different from the electropositive case, has not been studied. Some initial experimental results have suggested that the control range of the DC self-bias is enlarged by choosing a lower base frequency of 5.50 MHz compared to 13.56 MHz [56,57], but the reason for this effect was not explained. Here, we present a systematic investigation of the dependence of the efficiency of the EAE on the base frequency in electronegative CF 4 plasmas via PIC/MCC simulations. Plasmas in CF 4 are of high relevance for technological applications, as dielectrics such as SiO 2 are commonly etched in semiconductor manufacturing by plasma processing, using CF 4 with admixtures of O 2 and Ar [58][59][60]. In this work we show that, in contrast to electropositive plasmas, the symmetry control via the EAE is significantly enhanced by choosing lower base frequencies. This is an extremely important finding, since it shows that in many reactive gas mixtures used in processing applications, lower base frequencies should yield more control of process performance based on the EAE due to an improved control of the ion flux-energy distributions at the surfaces via VWT. The physical mechanisms will be discussed based on an analytical model and related to the frequency dependence of electronegative single-frequency CCRF plasmas. The paper is structured in the following way: in the next section both the simulation and model approaches are described. The results are discussed in section 3, which is divided into three parts. First, the effect of the base frequency on the control of the DC self-bias and discharge symmetry is presented. Second, the low-frequency case is studied in detail to examine the differences from the 13.56 MHz reference case. Third, the ion flux-energy distribution functions (IDFs) obtained from the simulations are discussed. Finally, conclusions are drawn in section 4. PIC/MCC simulation Our numerical studies are based on a particle-in-cell simulation code, which includes a Monte Carlo treatment of collision processes (PIC/MCC) [62][63][64]. The reactor geometry is simplified by assuming two plane and parallel electrodes. 
Accordingly, only one spatial coordinate needs to be resolved, while all components of the velocity space are resolved. The CF4 plasma is created and sustained in a d = 2.5 cm wide gap between the two electrodes by applying a voltage waveform consisting of four consecutive harmonics, φ(t) = Σ_{k=1..4} φk cos(2πk f0 t + θk), to one of the electrodes, whereas the other electrode is kept at ground potential. Here, φ0 is the amplitude of the voltage waveform and f0 is the base frequency, which is varied between 2.86 MHz and 13.56 MHz. The amplitude factor for each harmonic in the equation above maximizes the electrical DC self-bias control range in multi-frequency plasmas [41,42]. Here, we keep the voltage amplitude constant at φ0 = 240 V, so that the applied voltage waveform (see figure 1) exhibits a maximum of 240 V, a minimum of −60 V, and, accordingly, a peak-to-peak voltage of 300 V. The feasibility of the impedance matching of such multi-frequency voltage waveforms has been demonstrated recently [61]. Although the discharge configuration is geometrically symmetric, a DC self-bias, η, develops due to the asymmetry of the applied voltage waveform [35,37,39,41-45,54]. Note that we keep the phases of all harmonics at 0°. This phase combination is known to yield the largest absolute value of the DC self-bias [41-43]. The sign of η can be reversed by tuning the phases of the applied harmonics [41,43]. To ensure equal losses of positive and negative charges at each of the two electrodes on time average in the simulation, the DC self-bias is adjusted in an iterative manner. In this way, the realistic situation of an experimental setup is simulated, where a blocking capacitor in the matching unit prevents any DC current in steady state. In the simulation, the charged species CF3+, CF3−, F−, and electrons are traced. We use the cross section data provided by [65] for e−-CF4 collision processes, with the exception of electron attachment processes (producing CF3− and F− ions), which are adopted from [66]. A table with all electron impact collision processes considered in the model can be found in [45]. Their energy-dependent cross sections are displayed in figure 2. To simplify and speed up the calculations, the processes that create radicals or ion species other than CF3+, CF3−, and F− are included in the set of electron collisions, but the collision products are not considered further. Reactive as well as elastic collisions of the ions are included in the model [67-70]. Langevin-type cross sections are used for the elastic collisions of these ions with the neutral gas molecules [1]. Again, a table listing all of the ion-molecule reaction processes considered in the simulation is given in [45]. The respective cross sections are displayed in figure 3. All charged products of the ion-molecule reactions are traced, except for the CF2+ ions. As CF2+ ions react with CF4 in a similar way to CF3+ ions, and as their recombination rate with electrons is only slightly higher than the corresponding rate of CF3+ [71], we assume as a simplification that this process does not convert CF3+ ions to CF2+ ions. Hence, we avoid the explicit treatment of another species of minor importance in the computations, while the particle balances are hardly affected. The neglect of positive ions other than CF3+ is justified by the high rates of the CF+-CF4 and CF2+-CF4 reactions, which rapidly convert these ions into CF3+ ions [72].
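The exact amplitude factors of the driving waveform are taken from refs [41, 42] and are not reproduced in the text, so the following sketch should be read as an assumption: it uses harmonic amplitudes proportional to (N − k + 1), normalized so that they sum to φ0, with all phases set to 0°. With these assumed factors the waveform reproduces the values quoted above, namely a maximum of 240 V, a minimum of −60 V and a peak-to-peak voltage of 300 V for φ0 = 240 V.

```python
import numpy as np

def peaks_waveform(t, phi0=240.0, f0=13.56e6, n_harm=4):
    """Peak-shaped multi-harmonic voltage waveform.  The amplitude factors
    (N - k + 1) / (1 + 2 + ... + N) are an assumption consistent with the
    maximum and minimum values quoted in the text, not necessarily the exact
    factors of refs [41, 42]."""
    k = np.arange(1, n_harm + 1)
    amplitudes = phi0 * (n_harm - k + 1) / k.sum()   # amplitudes sum to phi0
    phases = np.zeros(n_harm)                        # all phases kept at 0 degrees
    t = np.atleast_1d(np.asarray(t, dtype=float))[:, None]
    return (amplitudes * np.cos(2.0 * np.pi * k * f0 * t + phases)).sum(axis=1)

t = np.linspace(0.0, 1.0 / 13.56e6, 20001)           # one fundamental RF period
v = peaks_waveform(t)
print(f"max = {v.max():.1f} V, min = {v.min():.1f} V, peak-to-peak = {v.max() - v.min():.1f} V")
# -> max = 240.0 V, min = -60.0 V, peak-to-peak = 300.0 V
```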
Recombination processes between positive and negative ions are included, using the rate coefficients available in the literature [73]. The recombination between electrons and CF3+ ions is also included, using the data from [74]. The computational procedure is described in [75]. A table with all recombination processes considered in the simulation can be found in [45]. The gas pressure and temperature are set to 80 Pa and 350 K, respectively. The emission of secondary electrons is neglected to simplify the analysis of the complex dynamics of the energy gain and loss of electrons. Also, we find that this assumption leads to a good agreement with experimental data obtained at 5.50 MHz base frequency [56]. The reflection of electrons from the electrodes is assumed to occur with a probability of 0.2 [76].

Analytical model

We use an analytical model of DC self-bias formation in CCRF plasmas to gain further understanding of the discharge physics. This model, based on the discharge voltage balance, was first introduced in [35] and extensively described in [37,41]. A simple expression for the DC self-bias, η, is obtained by evaluating this voltage balance at the times of maximum (φ∼,max) and minimum (φ∼,min) applied voltage [41]:

η = −(φ∼,max + ε φ∼,min)/(1 + ε) + (φfl,p + ε φfl,g)/(1 + ε) + (φb,max + ε φb,min)/(1 + ε). (2)

Here, φfl,p and φfl,g are the floating voltages of the powered and grounded electrode sheaths (at the times of maximum and minimum applied voltage), respectively. φb,max and φb,min are the voltage drops across the plasma bulk at the times of maximum and minimum applied voltage. Note that φfl,p and φb,min are negative [41]. The first term on the right hand side is dominant in electropositive plasmas, where the second and third terms can often be neglected. However, particularly the voltage drop across the plasma bulk (i.e. the third term) cannot be neglected in electronegative plasmas [4,24,45]. All quantities involved in the second and third terms can be obtained from the simulations. As such, this is not a self-consistent model, but rather an aid to understanding. The symmetry parameter, ε, is a measure of the discharge spatial symmetry [35,37], and describes all factors that affect the DC self-bias apart from the voltage amplitude asymmetry. It is defined as the ratio of the maxima of the absolute values of the sheath voltages at the grounded and powered electrodes (found at the times of minimum and maximum applied voltage, respectively). In the frame of this model, ε = 1 refers to a symmetric discharge, whereas any strong or weak deviation from unity corresponds to a more or less pronounced asymmetry. The physical origin of such an asymmetry can be understood by examining the individual ratios contributing to the symmetry parameter, as defined in the second equality in equation (3). Ap and Ag are the surface areas of the powered and grounded electrodes. Here, Ap = Ag for the geometrically symmetric discharge configuration used in this simulation. n̄sp and n̄sg are the mean net charge densities in the respective sheaths. In contrast to electropositive plasmas, where only positive ions need to be considered, these values are defined in electronegative plasmas as the mean of the net charge density, n+ − n−, over the respective sheath (equations (4) and (5)), in order to account for all charged heavy species; here n+ and n− correspond to the densities of the positively charged and the negatively charged ions, respectively. (Note that these expressions are also valid for multiple ionic species, but we assume that all types of ions are singly charged.)
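As a quick numerical illustration of this voltage-balance model, the sketch below evaluates the expression for η given above. All input values other than the waveform extrema (the 240 V maximum and −60 V minimum quoted earlier) are illustrative placeholders rather than simulation results; the point is simply that, for a fixed peak-shaped waveform, a smaller symmetry parameter ε yields a more negative DC self-bias.

```python
def dc_self_bias(eps, phi_max=240.0, phi_min=-60.0,
                 phi_fl_p=-15.0, phi_fl_g=-15.0,
                 phi_b_max=10.0, phi_b_min=-10.0):
    """DC self-bias from the voltage-balance model (equation (2)).
    The floating and bulk voltages used here are illustrative placeholders."""
    return (-(phi_max + eps * phi_min)
            + (phi_fl_p + eps * phi_fl_g)
            + (phi_b_max + eps * phi_b_min)) / (1.0 + eps)

phi0 = 240.0
for eps in (1.0, 0.6, 0.4):
    eta = dc_self_bias(eps)
    print(f"eps = {eps:.1f} -> eta = {eta:7.1f} V, eta/phi0 = {eta / phi0:+.2f}")
# A smaller eps (a more asymmetric discharge) gives a more negative DC self-bias.
```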
In low-pressure electropositive plasmas, it has been found that the ratio of the mean net charge densities in the two sheaths differs from unity for a driving voltage waveform described by equation (1) due to the stronger acceleration of ions in one of the sheaths, thereby causing a self-amplification of the EAE [37]. The maximum sheath widths, sp,max and sg,max in equations (4) and (5), are determined from the simulation data based on a criterion introduced by Brinkmann [77], again taking the presence of negative ions into account. The maximum charges are related to the mean net charge densities via Qmp = e sp,max n̄sp Ap and Qmg = e sg,max n̄sg Ag, respectively [35]. In a previous study of argon plasmas driven by voltage waveforms where the base frequency was varied over a range similar to the one used here, it was found that the charge dynamics affects the ratio of the maximum charges at low driving frequencies and reduces the range of control of the DC self-bias via the EAE [54]. The charge dynamics, in turn, was found to be enhanced by the long total time between two subsequent sheath collapses at low driving frequencies. Then, the ratio (Qmg/Qmp)² > 1 leads to a larger symmetry parameter ε > 1 and, thereby, to a less negative DC self-bias η if all harmonics' phases are set to 0° [54]. Isp and Isg appearing in equation (3) are the so-called sheath integrals [35], which are obtained via Isp = 2∫ ζp psp(ζp) dζp and Isg = 2∫ ζg psg(ζg) dζg, with the integrals taken from 0 to 1 (equations (6) and (7)); here psp and psg are the net charge density profiles in the two sheaths normalized by their mean values, and ζp and ζg are the normalized spatial position coordinates in the two sheaths. The sheath integral equals unity for a homogeneous net charge density profile. In the absence of negative ions, the decrease of the positive ion density towards the electrodes due to the acceleration of the ions by the sheath electric field at approximately constant ion flux leads to a slightly larger value. Furthermore, the ratio of the sheath integrals was found to be close to unity in such cases, so that the effect on the symmetry parameter and on the DC self-bias was negligible [35,37]. In electronegative plasmas, however, the presence of negative ions can cause a reduction of the net charge density in the regions around the sheath edges (i.e. the regions of large ζp and ζg, respectively), as will be shown below. As a consequence of the non-monotonic behavior of the resulting net charge density profile, the sheath integral value is reduced, so that values of Isp, Isg < 1 are expected.

DC self-bias and symmetry parameter

Figure 4 shows the DC self-bias obtained from the simulation and the analytical model as a function of the base frequency of the applied voltage waveform. The DC self-bias values have been normalized by the amplitude of the applied voltage, i.e. η̄ = η/φ0 with φ0 = 240 V. A large negative DC self-bias develops due to the strong amplitude asymmetry of the applied voltage waveform. Furthermore, the absolute value of η becomes much larger for lower base frequencies. This is in stark contrast to the findings in electropositive plasmas, where the control of the DC self-bias via the EAE is deteriorated by the charge dynamics at low frequencies [54]. The analytical model reproduces the simulation results well. It should be noted that all terms in equation (2) have been taken into account here. The voltage drop across the plasma bulk at the time of maximum applied voltage (at t = 0) is about 10 V at 13.56 MHz and increases to about 16 V at 2.86 MHz. Hence, the influence of the bulk voltage on the DC self-bias is about 5-8%.
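To make the roles of the individual ratios concrete, the following sketch evaluates the symmetry parameter from the factors named above and the sheath integral for two hypothetical normalized charge-density profiles: a nearly homogeneous one and one peaked deep inside the sheath (at small ζ), qualitatively resembling the grounded-electrode profile described below. The profiles and the remaining ratios are made-up illustrations, not the simulation data; they only demonstrate that shifting the charge-density maximum towards the electrode lowers the sheath integral and hence ε.

```python
import numpy as np

def sheath_integral(zeta, p_hat):
    """I_s = 2 * integral over [0, 1] of zeta * p_hat(zeta) d(zeta);
    equals 1 for a homogeneous normalized profile p_hat = 1."""
    return 2.0 * np.mean(zeta * p_hat)   # mean approximates the integral on a uniform [0, 1] grid

def symmetry_parameter(area_ratio, n_ratio, q_ratio, i_ratio):
    """epsilon = (A_p/A_g)**2 * (n_sp/n_sg) * (Q_mg/Q_mp)**2 * (I_sg/I_sp)."""
    return area_ratio ** 2 * n_ratio * q_ratio ** 2 * i_ratio

zeta = np.linspace(0.0, 1.0, 2001)

# Hypothetical electropositive-like sheath: density mildly increasing towards the sheath edge.
p_sp = 0.85 + 0.3 * zeta
p_sp /= np.mean(p_sp)                      # normalize to a unit mean value

# Hypothetical electronegative sheath: net charge density peaked deep inside the sheath.
p_sg = np.exp(-((zeta - 0.25) / 0.15) ** 2)
p_sg /= np.mean(p_sg)

i_sp = sheath_integral(zeta, p_sp)
i_sg = sheath_integral(zeta, p_sg)
print(f"I_sp = {i_sp:.2f}, I_sg = {i_sg:.2f}, I_sg/I_sp = {i_sg / i_sp:.2f}")

# Illustrative ratios (placeholders): equal electrode areas, a denser grounded-electrode
# sheath, and a slightly larger maximum charge in the grounded-electrode sheath.
eps = symmetry_parameter(area_ratio=1.0, n_ratio=0.6, q_ratio=1.2, i_ratio=i_sg / i_sp)
print(f"symmetry parameter epsilon = {eps:.2f}")
```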
Furthermore, the value of η̄ = −58.0% obtained in the simulation at f0 = 5.50 MHz agrees well with η̄ ≈ −57% determined experimentally at the same base frequency [56]. The increase of the absolute value of the DC self-bias as the base frequency is lowered (for a waveform with constant amplitude asymmetry) is caused by a change of the symmetry parameter, which is also depicted in figure 4. These values are determined from the simulation data and are used as input parameters for the analytical model of the DC self-bias. At the reference base frequency of 13.56 MHz, a value of ε ≈ 0.6 is found. If the simplified model is used by neglecting the bulk and floating voltages, a value of η̄ ≈ −1/3 would be obtained from the first term on the right hand side of equation (2) for ε = 1. Compared to that, the EAE is amplified by the low symmetry parameter values, leading to a shift of the DC self-bias to more negative values. The deviation of the symmetry parameter from unity becomes even stronger as the base frequency is reduced. It should be noted that values below ε = 0.4 have not previously been observed in any study of a geometrically symmetric CCRF discharge. What is the physical origin of this strong asymmetry? In order to gain an understanding of the mechanisms behind the decrease of the symmetry parameter, it is useful to examine the individual ratios in equation (3). Figure 5 shows the mean net charge densities in the powered and grounded electrode sheaths, as they are defined in equations (4) and (5), as well as their ratio obtained from the simulation at different base frequencies. The mean densities increase in both sheaths as the base frequency is increased. This is due to the enhanced electron power absorption and, hence, higher plasma densities for higher driving frequencies [1,2,21,22]. The mean density in the grounded electrode sheath is much larger than that in the powered electrode sheath. As will be shown below, strong ionization occurs deep in the grounded electrode sheath, which causes the difference in the densities between the two sheaths. Accordingly, the ratio n̄sp/n̄sg is much smaller than one. This helps achieve a larger control range of the DC self-bias and is one of the important mechanisms causing the self-amplification of the symmetry control. Furthermore, the ratio n̄sp/n̄sg becomes smaller for lower base frequencies, because the ionization adjacent to the powered electrode decreases more rapidly than that adjacent to the grounded electrode when f0 is reduced, due to the completely different underlying mechanisms of electron power absorption. Therefore, the DC self-bias is further enhanced at lower base frequencies. The maximum charges located in the two sheaths behave in a similar way to the mean net charge densities: they increase as a function of the base frequency, and the maximum charge in the grounded electrode sheath is larger than that in the powered electrode sheath (see figure 6(a)). The maximum charges in the two sheaths differ from each other because the minimum charge in the grounded sheath is much larger than that in the powered electrode sheath due to the longer time of sheath collapse. Thus, the floating potential is higher adjacent to the grounded electrode, resulting in a larger minimum charge in this sheath (Qmin,g > Qmin,p) and, for an almost constant total charge (Qmg + Qmin,p ≈ Qmp + Qmin,g, see figure 6(b)), a smaller maximum charge at the same time in the powered electrode sheath.
This ratio (Qmg/Qmp) is larger than one and, therefore, decreases the control range of the DC self-bias. This is similar to low-frequency electropositive plasmas, where the ratio was also found to be larger than one [54]. In that case it is caused by the total charge dynamics. However, here the effect is smaller (a ratio of about 1.4 in CF4 compared with a ratio of more than 2 in Ar at low frequencies). As shown in figure 6(b), the total charge (Qp + Qg) increases steeply at the times of sheath collapse at f0t ≈ 0.0 and f0t ≈ 0.2, when electrons compensate the positive ion flux to the powered and grounded electrodes, respectively. The temporal variation of the total charge is small (about 10%) and becomes only slightly larger at smaller base frequencies due to the longer period T = 1/f0, during which ions flow to the boundary surfaces. The normalized total net charge, due to the charge dynamics [40], reaches its absolute maximum when the charge in the powered electrode sheath is maximal (around f0t ≈ 0.25). In contrast to single-frequency plasmas, where the time between periods of loss of electrons to either electrode at the times of collapsing sheaths is equal, here the grounded electrode sheath collapses shortly after the collapse of the powered electrode sheath. Therefore, the loss of electrons flowing to the electrodes occurs only in the first quarter of the RF period. During the long remaining part of the RF period, only (positive) ions are lost to the electrodes. Thus, the total charge is continuously reduced. This leads to a smaller total charge at the time of Qmg compared to the one at the time of Qmp. The difference between the total uncompensated charge at these two times becomes larger for smaller base frequencies, as the total time during which ions are lost becomes longer, i.e. the slope of the total charge between f0t ≈ 0.25 and f0t ≈ 1 becomes more negative. As a consequence, the ratio Qmg/Qmp becomes smaller, resulting in a less pronounced decrease of the symmetry control. However, this effect of the base frequency is relatively small, since the total ion fluxes are smaller in electronegative plasmas compared with electropositive ones, as the diffusion of positive ions into the sheaths is reduced by the presence of negative ions. The last factor in the calculation of the symmetry parameter (equation (3)) is the ratio of the sheath integrals, which are determined based on the normalized sheath density profiles, psp(ζp) and psg(ζg). Figure 7 shows these profiles for different base frequencies. The profile in the powered electrode sheath resembles the scenario that is typically found in electropositive plasmas or in electronegative plasmas in the absence of negative ions inside the sheath. The positive ion density gradually decreases towards the electrode (located at ζp = 0). The profile in the grounded electrode sheath, however, looks completely different: it exhibits a peak close to the electrode, a local minimum adjacent to this peak followed by a local maximum, and then decreases towards the sheath edge (at ζg = 1). As the frequency is reduced this pattern becomes larger in amplitude and shifts deeper into the sheath region, i.e. it moves further towards smaller values of ζg. At f0 = 2.86 MHz, psg even becomes negative over a narrow spatial region (around ζg ≈ 0.57). This means that the negative ion density locally exceeds the positive ion density in the sheath region at the time of maximum applied voltage.
The normalized sheath density profiles must be multiplied by the normalized position (ζ p and ζ g ) and integrated from the electrode to the maximum width of the respective sheath to obtain the sheath integrals (see equations (6) and (7)). Due to this mathematical procedure, the large peak in the profile of the grounded electrode is weighted by a small number (small ζ g ), whereas the increase of p sp towards the sheath edge is further emphasized. As a result, the sheath integral at the powered side is much larger than that at the grounded side, so that the ratio of the sheath integrals, / I I sg sp , is much smaller than one, as shown in figure 8. This causes a strong amplification of the symmetry control via the EAE, as it leads to a more negative DC self-bias for the driving voltage waveform specified by equation (1). The ratio decreases further, as the base frequency is reduced, since the peak in the grounded electrode sheath moves further towards smaller ζ g -values. Summarizing the analysis, figure 9 shows all of the individual ratios contributing to the symmetry parameter and the symmetry parameter itself as a function of the base frequency. We find that both the ratio of the mean net charge densities and the ratio of the sheath integrals are smaller than one and decrease further for lower base frequencies. Therefore, these two ratios lead to an enhanced control range for the DC selfbias via the EAE already at f 0 = 13.56 MHz and an even stronger enhancement if f 0 is reduced. In contrast, the ratio of the maximum charges is slightly larger than one, thereby increasing the value of ε. This counteracts the effect of the other two terms and reduces the absolute value of the DC selfbias, but the effect is rather weak. Space and time resolved analysis at 5.50 MHz base frequency In order to understand the mechanisms behind the strong self-amplification of the symmetry control via the EAE in electronegative plasmas at low base frequencies, we examine the space and time resolved data in the 5.50 MHz case as an example. This base frequency is chosen because initial measurements have already been performed at this frequency [56], so that a comparison between the findings of the present paper and such measurements in the near future is facilitated. The spatio-temporal distribution of the density of power absorption by electrons is given by the product of the electron current density and the electric field [1,2]. These three quantities are depicted in figures 10(a)-(c). Due to the shape of the driving voltage waveform (equation (1)), the sheath at the grounded electrode is collapsed for most of the fundamental RF period, while the sheath at the powered electrode is expanded for most of the time. Both sheaths expand and collapse quickly once per RF period. At the beginning of the RF period, when the powered electrode sheath expands and the grounded electrode sheath collapses, the electron conduction current is strongly positive in the plasma bulk (i.e. electrons flow towards the grounded electrode), whereas the reversed situation is found at the end of the RF period. In the region > − z d s g,max the conduction current density is smaller compared with that in the discharge center due to the presence of a significant displacement current density. However, in this region an intermediate electric field strength is found, i.e. the electric field is stronger here than in the bulk region, but weaker than in the electron-free sheath regions. 
Around the time of the collapse of the sheath adjacent to the grounded electrode (at ≈ f t 0.15 0 ), the electric field becomes negative locally (within and around the dashed rectangle in figure 10(b)) to enable the transport of electrons out of the bulk towards the grounded electrode. This field is required because the RF conductivity is reduced by the presence of negative ions and the large loss of electron momentum due to the high collision frequency at 80 Pa gas pressure. The feature can be regarded as a double layer, which typically develops in CCRF plasmas in highly electronegative gases [21]. As will be shown below, negative ions are created between = − z d s g,max and z = d and may remain in this region due to the long time during which the sheath is collapsed (resulting in the time-averaged electric field profile discussed below), whereas no negative ions are found in the powered electrode sheath region. As a result of the electric field facilitating the electron transport, a relatively large number of electrons gain sufficient energy to overcome the thresholds for inelastic collisions, causing collisional attachment of negative ions and/or ionization. This reversed field located at the bulk side of the oscillating plasma sheath edge is the main power source for electrons in the entire discharge volume within the RF period, so the power absorption rate at the collapsing grounded electrode sheath is stronger than that caused by any other mechanisms such as sheath expansion at either side. Moreover, we observe a strong cooling of electrons in the positive field, that represents the drop of the floating potential at the grounded electrode (around ≈ f t 0.15 0 and z 24 mm). The presence of a significant density of negative ions in the grounded electrode sheath region is the result of dissociative attachment processes in e − -CF 4 collisions. Figure 11(a) shows the rate of F − production in electron collisions with CF 4 . (The formation pattern of CF − 3 ions (not shown) looks very similar, but the rate is only about 20% of the F − production rate.) These collisions require energies above the threshold of 5 eV for the projectile electrons. As the gas pressure is relatively high, the electron mean free path is short and these processes can be expected to occur close to the regions of high electron energy gain in space and time. Accordingly, the biggest and strongest pattern is observed in the grounded electrode sheath region at the beginning of the RF period. This is a direct consequence of the strong electron power absorption found in this region. Therefore, the maximum generation of negative ions occurs in the region between the grounded electrode and the maximum grounded electrode sheath extension at the time, when the respective sheath collapses and electrons flow out of the plasma bulk towards the electrode. The dissociative ionization rate shows a peak in space and time caused by highly energetic electrons (see figure 11(b)), where the attachment rates exhibit a hollow pattern around this peak. This is because the attachment processes have a smaller threshold energy and are effective over only a narrow energy range (see figure 2). Figure 12(a) shows the time-averaged profiles of all charged species across the entire discharge gap. An abrupt transition from an electropositive sheath at the powered electrode to an electronegative plasma bulk is found at = z s p,max . 
The profiles in the grounded electrode sheath region (see figure 12(b) for details) are found to be strongly different from those in the powered electrode sheath. In the case of the electrons, the maximum and minimum density at each position obtained from the temporally resolved electron density are also shown. The distribution of all heavy (ion) species is static to a very good approximation. In contrast to the usual situation of negligible negative ion densities in the sheath regions (which is the case for the powered electrode sheath region), a significant density of negative ions is present in the grounded electrode sheath region. This is a consequence of the high ionization and attachment rates in the grounded electrode sheath region due to the power absorption of electrons by the reversed field, as discussed above. A similar pattern of the spatio-temporal electric field and electron-impact ionization has been found in highly electronegative CCRF plasmas driven by a single low radio frequency [21]. The maximum densities of both negatively charged ion species are located between = − z d s g,max and z = d. The negative ion density is larger than the electron density in the region up to ≈ z 23.7 mm. The dashed lines indicate the positions of the maxima in the normalized charge density profile ( ) ζ p sg g shown in figure 7. The major peak at ζ = 0.29 g or z = 23.8 mm occurs around the transition point where the plasma generally changes from electronegative behavior to electropositive, as the electron density starts to exceed the density of negative ions. Furthermore, the minor peak at ζ = 0.60 g or z = 22.4 mm can be associated with the position, which the plasma electrons reach at the time of maximum applied voltage (i.e. at the time of maximum voltage and spatial extension of the grounded electrode sheath). However, it should be noted that the maximum sheath width is significantly larger, as indicated in figure 12(b). Accordingly, a fraction of the bulk electrons is capable of penetrating into the grounded electrode sheath at f 0 t = 0. This is because the density of thermal electrons decays slowly due to the very weak electric field close to the sheath edge. In general, such effects of field reversals during sheath collapse may occur for different applied voltage waveforms and at different base frequencies. For instance, the ionization will be dominated by energetic electrons in the double layer region during field reversal in electronegative single frequency capacitive discharges, if the frequency of the applied voltage is lowered [21]. However, the customized voltage waveform used here is beneficial for two reasons. First, the energy gain of electrons in the field reversal is enhanced due to the fast decrease of the applied voltage, leading to a fast collapse of the grounded electrode sheath voltage and width. Therefore, the electrons accelerated by the reversed field dominate the ionization and attachment. Secondly, the long period of low applied voltage and, hence, low grounded electrode sheath voltage and width directly after the collapse is advantageous, because the electric field in the region of negative ion generation is relatively small on time average, so that negatively charged heavy species may build up large densities. This can be understood based on the time averaged electric field profile, to which these heavy species react. 
Figure 13(a) shows the mean, maximum and minimum electric field strength across the discharge gap, while figure 13(b) provides a detailed view into the grounded electrode sheath region. In electropositive plasmas, the electric field in the grounded electrode sheath usually oscillates between a strongly positive value and about zero [1,2]. However, in the case studied here the mean electric field changes sign inside the sheath region adjacent to the grounded electrode. This is due to the strong influence of the field reversal during sheath collapse on the time-averaged profile [21]. The electric field is even negative on time average up to z = 22.7 mm. This also means that negative ions are attracted to this region and positive ions are repelled. This explains why the density profiles of the negative ions exhibit peaks in this region. In the direct vicinity of the electrode the plasma turns electropositive and the electric field stays positive throughout the entire RF period. Therefore, negative ions accumulate between the positions marked by the vertical dashed lines in figure 13(b), causing a reduction of the normalized charge density profile. Moreover, their presence causes a strong field reversal (minimum electric field) via a local depletion of the RF conductivity. The acceleration in this field is the dominant mechanism of energy gain for the electrons, as it has also been found in electronegative CCRF plasmas driven by a single low radio frequency [21]. Thus, hot electrons are generated, which in turn lead to the generation of positive and negative ions in electron-neutral collisions. Electrons may diffuse quickly from the ionization region, leaving a positive space charge behind, which in turn attracts the negative ions via the (time-averaged) electric field profile. Hence, the generation of both positive and negative ions eventually results in the charge density profiles presented in figure 12. Accordingly, the plasma is electronegative in the sheath region. This, again, enhances the field reversal effect on the electron power absorption dynamics. Thus, the effect is self-enhancing, as the physical mechanisms form a positive feedback loop. The two consequences of the generation and presence of both positive and negative ions in the grounded electrode sheath are that, firstly, the net charge density profile remains relatively flat, whereas it drops strongly in the electropositive powered electrode sheath. This leads to an improved symmetry control, as it causes ¯< n n sp sg . Secondly, the maximum of the net charge density at the time of maximum sheath expansion is not found close to the sheath edge, but at a position deep inside one of the two sheaths. This position is within the region where the behavior turns electropositive. We find it to be at about the same position (around ≈ z 23.5 mm at p = 80 Pa and d = 25 mm), independent of the base frequency. The sheath integral value, I sg , becomes smaller for lower base frequencies, though, because the maximum sheath width becomes larger at lower base frequencies (see figure 14), so that the peak moves closer to the electrode on a normalized scale. Thereby, the sheath integral, I sg , is strongly reduced, causing an enhanced symmetry control due to > I I sp sg . Thus, the symmetry control can be enhanced by the electron heating and subsequent ionization dynamics in electronegative plasmas at low driving frequencies. 
Similar to single-frequency cases [21], our findings in a CCRF plasma driven by a multi-frequency voltage waveform can be generalized such that a large density of negative ions can be present in an RF sheath if there is sufficient electron power absorption (e.g. due to a field reversal) and, subsequently, sufficient ionization to ensure an equilibrium of positive ions and electrons arriving at the electrode on time average. (No negative ions reach the electrode.) Then, the flux out of the plasma bulk is much smaller than the flux of electrons and positive ions generated at a certain position inside the sheath. Although the sheath extends into a region beyond this position during a part of the RF period, this region may remain electronegative. Under single-frequency operation, both sheaths become electronegative and the plasma remains spatially symmetric [21], while we find that an asymmetric voltage waveform breaks this symmetry. Specific customization of the applied voltage waveform and its base frequency is important for the effect to be significant.

Effect of the base frequency on the IDFs

The very different physics of the two sheaths has a strong impact on the ion properties at the electrode surfaces. Figures 15(a) and (b) show the flux-energy distribution of CF3+ ions at the powered and grounded electrodes at various base frequencies. At relatively high base frequencies the shape of the distribution function looks similar at both sides. The energy scale and maximum ion energy are different, however, because of the difference in the mean sheath voltages caused by the asymmetric driving voltage waveform. At low base frequencies the distribution function at the powered electrode peaks at low energies and decreases continuously as a function of the ion energy. This shape is typical for ions that undergo primarily elastic collisions while being accelerated by the sheath electric field [1]. The maximum of the distribution function at the grounded electrode, however, is found at energies that are relatively high considering the small total width of the distribution function. This is because many of the CF3+ ions arriving at the electrode are generated at a distinct position deep inside the grounded electrode sheath, as discussed above. Therefore, the majority of the ions at the electrode originate from the same region relatively close to the electrode. Thus, these ions gain about the same energy during their motion through the sheath electric field, while the probability for collisions is lowered due to the reduced transit space and time. Furthermore, it is found that the maximum ion energy becomes smaller for lower base frequencies at the grounded electrode, but it stays approximately constant at the powered electrode. The total ion flux, Γi, and the mean energy, ⟨Ei⟩, are calculated from the CF3+ ion flux-energy distribution functions, f(Ei). At high base frequencies, the total ion flux is nearly identical at both electrodes. The mean ion energy is different due to the asymmetry of the discharge, which is a consequence of the asymmetry of the applied voltage waveform. If the base frequency is reduced, a transition into the regime of electron energy gain by field reversal will occur, with a significant negative ion density in the grounded electrode sheath [21]. This does not affect the ratio of the mean ion energies at the two electrodes (which remains approximately constant for all base frequencies).
Note that this corresponds to a very large control factor of about 7 for the mean ion energy, as the roles of the two electrodes can be reversed by tuning the phases of the applied harmonics [20,24,[35][36][37][38][39][40][41][42][43][44][45]54]. However, the transition to a strongly asymmetric electron power absorption regime strongly affects the total ion flux, which is significantly higher at the grounded electrode, where the ion energies are lower. The flux ratio changes from / Γ Γ ≈ 1 i,g i,p at the highest base frequency to / Γ Γ ≈ 2 i,g i,p at the lowest base frequency, because the maximum ionization occurs within the grounded electrode sheath and all of the ions generated there will eventually flow to the grounded electrode. Conclusions The effect of the base frequency on the symmetry control of electronegative CCRF plasmas driven by tailored, multifrequency voltage waveforms was investigated using PIC/ MCC simulations of geometrically symmetric CF 4 plasmas and an analytical model. We found that the Electrical Asymmetry Effect (EAE) is enhanced at lower base frequencies. This was explained by changes in the symmetry parameter, ε, as a function of the base frequency, f 0 . At lower base frequencies, ε becomes much smaller than unity, thereby allowing for a better control of the DC self-bias compared to the standard case at 13.56 MHz. Thus, the frequency dependence of the symmetry control in electronegative plasmas is completely different from that of electropositive plasmas, where the EAE has been found to be less effective at lower base frequencies [54]. A detailed analysis of the individual factors influencing the symmetry parameter revealed that this amplification of the EAE can be attributed to the different physical behaviors of the two sheaths: one sheath (in this case, with a 'peak-type' waveform applied, the sheath adjacent to the powered electrode) shows normal electropositive or slightly electronegative behavior, where the negative ion density in the sheath is negligible and the major part of the positive ions flows from the plasma bulk towards the electrode with a spatially and temporally almost constant flux. The other sheath (in this case the sheath adjacent to the grounded electrode), however, behaves completely differently, as electrons are accelerated by a reversed field during the collapse of the sheath towards the electrode. Subsequent collisions of these electrons with the neutral background gas may dominate the overall ionization and attachment rates in the entire discharge, so that the plasma becomes electronegative within a part of the sheath region. This, again, enhances the field reversal effect. Thus, hot electrons are generated, which in turn lead to the formation of negative ions and ionization. These physical mechanisms form a positive feedback loop. In that sense, the effect is self-enhancing. This generation of positive and negative ions results in a particular charge density profile, with a maximum very close to the electrode and decreasing density towards the electrode and towards the plasma bulk. The sheath charge density profiles directly affect the symmetry parameter and, thereby, the control of the CCRF plasma. The electronegative sheath exhibits a larger mean charge density (¯/¯< n n 1 sp sg ) and a smaller sheath integral ( / < I I 1 sg sp ), which reflects the spatial profile of the charge density. These two ratios cause a decrease of the symmetry parameter for lower base frequencies, which enhances the symmetry control. 
According to the different dynamics in the two sheath regions, the discharge consists of two halves with entirely different physical properties; in a qualitative comparison, one half is similar to that of a single-frequency electronegative plasma under low frequency operation, whereas the other half rather compares to that of a high frequency case [21]. As a consequence of the maximum in the spatial ionization profile within the grounded electrode sheath region, the ion flux to that electrode is considerably larger than that at the opposing electrode. The shape of the ion flux-energy distribution function is also altered, whereas the mean energy of the ions is hardly affected. Summarizing these findings, the dynamics of the electron energy gain is completely different in electronegative plasmas driven by customized voltage waveforms with low base frequencies compared to the case of higher base frequencies and/or electropositive gases. In particular, if one of the two sheaths collapses quickly and stays close to the collapsed state for a large fraction of the RF period, a large region of this sheath will become electronegative, causing a transition of the electron heating mode. This causes a strong asymmetry in the ionization and attachment profiles, thereby amplifying the symmetry control via the EAE. A similar frequency dependence can be expected when controlling the discharge symmetry by driving electronegative plasmas with sawtoothshaped voltage waveforms, i.e. using the slope asymmetry effect. This improved symmetry control should be very useful for certain applications, because the abnormal sheath behavior results in a high ion flux with a very small energy.
11,557
2016-05-31T00:00:00.000
[ "Physics" ]
The Molecular Evolution of the p120-Catenin Subfamily and Its Functional Associations Background p120-catenin (p120) is the prototypical member of a subclass of armadillo-related proteins that includes δ-catenin/NPRAP, ARVCF, p0071, and the more distantly related plakophilins 1–3. In vertebrates, p120 is essential in regulating surface expression and stability of all classical cadherins, and directly interacts with Kaiso, a BTB/ZF family transcription factor. Methodology/Principal Findings To clarify functional relationships between these proteins and how they relate to the classical cadherins, we have examined the proteomes of 14 diverse vertebrate and metazoan species. The data reveal a single ancient δ-catenin-like p120 family member present in the earliest metazoans and conserved throughout metazoan evolution. This single p120 family protein is present in all protostomes, and in certain early-branching chordate lineages. Phylogenetic analyses suggest that gene duplication and functional diversification into “p120-like” and “δ-catenin-like” proteins occurred in the urochordate-vertebrate ancestor. Additional gene duplications during early vertebrate evolution gave rise to the seven vertebrate p120 family members. Kaiso family members (i.e., Kaiso, ZBTB38 and ZBTB4) are found only in vertebrates, their origin following that of the p120-like gene lineage and coinciding with the evolution of vertebrate-specific mechanisms of epigenetic gene regulation by CpG island methylation. Conclusions/Significance The p120 protein family evolved from a common δ-catenin-like ancestor present in all metazoans. Through several rounds of gene duplication and diversification, however, p120 evolved in vertebrates into an essential, ubiquitously expressed protein, whereas loss of the more selectively expressed δ-catenin, p0071 and ARVCF are tolerated in most species. Together with phylogenetic studies of the vertebrate cadherins, our data suggest that the p120-like and δ-catenin-like genes co-evolved separately with non-neural (E- and P-cadherin) and neural (N- and R-cadherin) cadherin lineages, respectively. The expansion of p120 relative to δ-catenin during vertebrate evolution may reflect the pivotal and largely disproportionate role of the non-neural cadherins with respect to evolution of the wide range of somatic morphology present in vertebrates today. Introduction The integration over time of increasingly sophisticated signaling and cell-cell adhesion mechanisms has likely been an essential and ongoing process in the evolution of complex metazoan life. Interestingly, the Wnt signaling and cadherin-based adhesion functions of b-catenin have coexisted at least as far back as the origin of animals [1] (though C. elegans is a notable exception [2]), with coordination of these roles by a single protein perhaps facilitating evolution of the first multicellular organisms. Indeed, the evolutionary importance of b-catenin is reflected by phylogenetic analyses, which suggest a widespread and persistent stabilizing selection on each of the Armadillo (Arm) repeat sequences from Cnidarian to mouse b-catenin [3], and virtually no change in b-catenin over the ,400 million year course of vertebrate evolution [3]. In vertebrates, b-catenin (or Plakoglobin) coexists with two other so-called ''catenins'' (i.e., p120-catenin and acatenin) that together form a regulatory protein complex on the cytoplasmic tail of classical cadherins (i.e., Type I and type II cadherins). 
Evolutionary histories for cadherin-and b-cateninfamilies have been studied extensively [3,4,5,6,7,8] but similar analyses for the p120-catenin (hereafter p120) and a-catenin families have yet to be reported. The appearance of cadherins is clearly a watershed event in metazoan evolution. While adhesion per se likely predates metazoans [6], the origin and diversification of the greater cadherin family has permitted an explosion in functional diversity of intercellular interactions. Interestingly, vertebrate evolution has favored a particular paradigm, the classical cadherin, which has duplicated and reduplicated from a single vertebrate ancestor [5] to form a 26-member family. Structurally, the ''classical cadherin'' is comprised of five extracellular cadherin (EC) repeats and a highly conserved cytoplasmic tail containing a p120-binding juxtamembrane domain (JMD) and a C-terminal ''catenin binding domain'' (CBD) that interacts with b-catenin. As the predominant cadherin type in vertebrate cell-cell adhesion, the classical cadherins have also taken on fundamentally important roles in cell-cell adhesion, development and cancer, and mediate the majority of cell-and tissue-specific interactions in vertebrates. In vertebrates, p120 behaves as a master regulator of classical cadherin stability, and is critical for proper cell-cell adhesion in most solid tissues [9,10,11]. Deletion (or knockdown) of the p120 gene in vertebrates (e.g., mouse, xenopus, zebrafish) is embryonic lethal despite the presence of ARVCF, d-catenin, and p0071, closely related family members with at least partially overlapping functions [12,13,14,15]. Paradoxically, the single p120 family member in invertebrates (e.g., Drosophila melanogaster, Caenorhabditis elegans) is not essential for life in most species (although this point has been debated in drosophila) [16,17,18]. Thus, in vertebrates, p120 has evolved one or more essential functions relative to its invertebrate counterpart, and a critical role with respect to the classical cadherins. p120 family members share a conserved central domain composed of 9 Arm repeats and flanking N-and C-terminal regions that diverge from one another ( Figure 1). The ''core'' family members interact in adherens junctions with classical cadherins via Arm repeats 1-6 [19]. In contrast, the more distantly related plakophilins have evolved specialized roles in desmosomal junctions, which are mechanistically and spatially distinct from the adherens junction [20]. Surprisingly, despite structural similarity to p120, their interaction with desmosomal cadherins is not mediated by the Arm domain, but occurs instead through the plakophillin N-terminal head domain [21,22,23]. Knockout studies in mice reveal that plakophillin 2 ablation is embryonic lethal [24] while plakophilins 1 and 3 can be eliminated with relatively little effect [20,25]. p120 also interacts directly with the transcription factor Kaiso [26]. Kaiso belongs to a large family of BTB/ZF proteins, most of which are important in development and cancer, and a closely related Kaiso subfamily consisting of Kaiso, ZBTB38 and ZBTB4 and [27]. Interestingly, Kaiso is bimodal in that it interacts with a conventional sequence-specific DNA motif referred to as the Kaiso Binding Site (KBS) [28] and also with methyl-CpG containing motifs [29]. 
The latter are high affinity interactions that have been reported to suppress the transcription of several tumor suppressors (e.g., pRb, p16, HIC) through interaction with inappropriately methylated CpG islands [30]. Kaiso has also been shown in Xenopus to suppress several Wnt pathway genes (e.g., Wnt 11, Siamois) by association with the KBS [31,32]. Interestingly, a third mechanism has been proposed that does not involve direct interaction with DNA. Instead, Kaiso binds TCF, a β-catenin-associated transcription factor. Kaiso and TCF associate with one another via their DNA binding motifs, thereby mutually excluding interaction with chromatin [33]. According to this scenario, p120 may interact with and/or modulate canonical Wnt signaling via regulation of Kaiso. Indeed, overexpressed p120 promotes translocation of Kaiso out of the nucleus [31,34], potentially facilitating TCF interaction with chromatin. A feature shared by most, if not all, members of the p120 family is physical and/or functional interaction with a number of Rho-GTPases, -GEFs and -GAPs [35]. For example, p120 can inhibit RhoA directly [36,37], or indirectly through p190RhoGAP [38], and has been shown to promote Rac1 activation [39,40]. In general, these activities are thought to play critical roles in regulating the cytoplasmic interface between the various cadherin receptors and the cytoskeleton. Here, we have analyzed proteomes from 14 diverse metazoan species to understand the evolution of the p120 protein family and the origin of its functional association with classical cadherins and Kaiso. We find that all invertebrates as well as several early-branching chordate lineages contain a single family member with a "δ-catenin-like" set of functions, suggesting that the p120 family ancestor was "δ-catenin-like" and highly conserved in pre-vertebrate metazoans. Gene duplications in chordate and vertebrate evolution gave rise to the six to seven family members in present-day vertebrates, and provided the raw material and opportunity for functional diversification. Together with phylogenetic studies of the classical cadherins, our data suggest that p120- and δ-catenin-like lineages split from one another in chordates and then separately co-evolved with non-neural (E- and P-cadherin) and neural (N- and R-cadherin) branches, respectively, of the vertebrate classical cadherins. A similar scenario with respect to α-catenin (also called α-catenin-1) and α-N-catenin (neural α-catenin, also called α-catenin-2) (E. Gaucher, personal communication) suggests that these distinct branches of the (vertebrate) classical cadherin family co-evolved with their own distinct subsets of both p120 (p120 vs. δ-catenin) and α-catenin (α-catenin vs. α-N-catenin) family members. Thus, the rapid expansion of p120, relative to δ-catenin, during vertebrate evolution may in large part reflect the broader spectrum of tissue and organ diversity outside of the nervous system. Other p120-specific innovations of note include the evolution of alternative splicing relevant to epithelial to mesenchymal transformation (EMT), loss of the C-terminal PDZ ligand motif, and interaction with Kaiso. The vertebrate-specific appearance of Kaiso and its unique interactions with p120, TCF4 and methyl-CpG DNA suggest other p120 connections relevant to Wnt signaling and vertebrate-specific mechanisms of transcriptional regulation. p120 protein family A total of 65 protein sequences were retrieved from the 14 species examined (Table 1). 
All protostome species examined and the early-branching deuterostomes included here (the cephalochordate Branchiostoma floridae and the echinoderm Strongylocentrotus purpuratus) contain a single member of the δ-catenin family. The urochordates (represented by Ciona intestinalis), the closest evolutionary relatives of vertebrates, contain two members, whereas all vertebrate species typically contain seven protein members of the p120-catenin family. The only exceptions within the vertebrates are X. tropicalis, whose proteome contains six proteins, and the two fish, D. rerio and T. rubripes, whose proteomes contain 10 and 13 members of the p120-catenin family, respectively, almost twice as many as the non-fish vertebrates (Table 1). Phylogenetic analysis of all protein family members identified suggests that the seven protein members typically found in vertebrates correspond to the seven δ-catenin protein subfamilies previously identified in humans (Figures 1 and 2) [35]. Specifically, these are the plakophilin 1, plakophilin 2, plakophilin 3, p0071, δ-catenin, p120 and ARVCF subfamilies. These seven subfamilies are robustly placed into three major clades: the first clade is composed of plakophilin 1, plakophilin 2, and plakophilin 3, the second clade of δ-catenin and p0071, and the third clade of p120 and ARVCF. The phylogeny of protein members within each one of these seven functional categories is consistent with the vertebrate phylogeny, suggesting that the vertebrate ancestor possessed a single protein from each of the seven functional categories. Interestingly, the proteome of C. intestinalis, the closest relative of vertebrates included in this study, contains only two proteins. One protein consistently groups with the p120-ARVCF clade (Figure 2), whereas the other protein is nested within the deuterostome δ-catenin clade, but is not robustly grouped with any of the seven subfamilies or any of the three clades identified. The same is true for the single protein members identified in B. floridae and S. purpuratus, the two other deuterostome lineages included in our study. Finally, all protostomes examined contain a single member of the p120 family and likely are the outgroup of the deuterostome p120 family (Figure 2). The increase in the number of p120 family members observed in vertebrates and the further increase in fish are consistent with studies suggesting that the ancestral vertebrate underwent two rounds of whole-genome duplication (WGD) [41] and that actinopterygian fish underwent additional rounds of WGD [42,43]. For example, for every single non-fish vertebrate subfamily member, two or three fish subfamily members are typically identified. However, the increase in the number of members is unlikely to have been due solely to the WGDs, and additional gene duplications likely contributed to the generation of the current diversity of protein members of the p120 family observed today. Kaiso protein family A total of 17 protein sequences were retrieved from the 14 species examined (Table 2). No Kaiso protein family members were identified in protostomes or in non-vertebrate chordates. All vertebrates contain two or three of the Kaiso family proteins. All vertebrates contain Kaiso, but several are missing either ZBTB4 or ZBTB38 (Table 2). Phylogenetic analysis of all protein family members identified three major clades that correspond to the three proteins (Figure 3) [27]. 
The four "Core" members also contain an amino-terminally located coiled-coil domain (green boxes). In the case of p120 and ARVCF, alternative splicing in this region gives rise to two major isoforms, which either contain (isoform 1) or lack (isoform 3) this coiled-coil region. All "Core" members, except p120, also have a carboxy-terminally located PDZ ligand domain (purple boxes). (B) The invertebrate members of the p120-catenin family similarly possess centrally located Armadillo repeats, though in the case of amphioxus this region contains 6 rather than 9 repeats. N-terminal regions show more diversity, with no distinct domain structure (D. melanogaster, amphioxus, Ciona δ-catenin-like), Fibronectin type III domains (orange circles, C. elegans), or a coiled-coil domain (Ciona p120-like). Similar to vertebrate members, the C. elegans family member also contains a carboxy-terminally located PDZ ligand domain (purple box). doi:10.1371/journal.pone.0015747.g001 Phylogenetic relationships of p120 family to other functionally relevant proteins (Figure 4B). Of particular interest is the existence of a single member of each of the catenin families (i.e., δ-catenin, α-N-catenin, and β-catenin) conserved throughout pre-vertebrate metazoan evolution, a pattern that coincides with the phylogenetic origins of Wnt family proteins. The plakophilins, on the other hand, along with Kaiso and the desmosomal cadherins, are vertebrate innovations. Of note, the δ-catenin-like and p120-like ancestors of the present-day p120 family arise just prior to vertebrates, as does the first common ancestor of the classical cadherins (i.e., vertebrate Type I and Type II cadherins). Discussion Conservation of a δ-catenin-like gene over the course of metazoan evolution suggests an ancient and evolutionarily important role, but the effects of deleting the only δ-catenin-like gene present in worms and flies are not as dire as one might expect. δ-catenin knockdown in Xenopus is, in fact, embryonic lethal [45], but the effects of δ-catenin KO in mice appear to be largely cognitive [46,47,48,49]. Although fly p120/δ-catenin associates and colocalizes with fly E-cadherin [16], the evidence overall suggests that its role is not directly comparable to that of vertebrate p120. A strong possibility is that fly p120/δ-catenin has an ancient function that is nonessential for life but nonetheless confers a strong evolutionary advantage. For example, the significant cognitive abnormalities exhibited by δ-catenin KO mice [46,49] may not be immediately apparent in captivity but could markedly affect their ability to compete and survive in the wild. Indeed, δ-catenin is one of several genes deleted in human Cri-du-chat patients and may contribute to the mental retardation associated with the disorder [46,47]. The vertebrate p120 family consists of seven members. Four "core" members (i.e., δ-catenin, p120-catenin, ARVCF, and p0071) function in adherens junctions, and three less well conserved members function in desmosomes (plakophilins 1, 2, and 3) (Figure 1). The phylogenetic analyses presented here show that they evolved through rounds of gene duplication and functional diversification from an ancient "δ-catenin-like" gene that is conserved throughout metazoan evolution. The ancestral δ-catenin was probably similar in function to the family member currently present in invertebrates, echinoderms and cephalochordates (Figures 1 and 2; Table 1). 
The first gene duplication took place in the urochordate-vertebrate ancestor, giving rise to "δ-catenin-like" and "p120-catenin-like" progenitors. Additional gene duplication(s), most likely a consequence of the two rounds of whole genome duplication at the origin of vertebrates, gave rise to (1) a δ-catenin clade consisting of vertebrate δ-catenin and p0071, and (2) a p120 clade consisting of vertebrate p120 and ARVCF. The plakophilins represent a vertebrate-specific offshoot of the δ-catenin-like progenitor. The phylogeny of the p120 family is relatively straightforward, but exactly how or why p120 has evolved to become the predominant family member in vertebrates is harder to explain. One possibility is that p120 has evolved uniquely advantageous features important for cadherin function. Indeed, comparison of current structural and functional characteristics of the various family members reveals several potentially critical p120 adaptations. First, p120 is the only core family member in vertebrates that lacks a C-terminal PDZ ligand domain. This domain mediates protein-protein interactions with a number of important PDZ domain containing proteins (e.g., PSD-95, erbin, densin-180). The PDZ ligand domain itself is an ancient feature of the p120 lineage, as it is present, for example, in the sole family member of various protostomes such as C. elegans. The p120-like progenitor of p120 and ARVCF, on the other hand, has a C-terminal sequence that differs at one residue from known consensus motif sequences (i.e., NSWV). Notably, the p120 progenitor is equally similar to p120 and ARVCF by most criteria, but a bona fide C-terminal PDZ ligand would imply that the progenitor was functionally more similar to ARVCF than to p120. Regardless, p120 is clearly the only core member of the vertebrate p120 family that lacks the C-terminal PDZ ligand domain and, conceivably, certain physical and functional evolutionary constraints imposed by preexisting PDZ binding partners of ARVCF, δ-catenin and p0071. Indeed, spine (and synapse) density in mouse hippocampal neurons is significantly increased by δ-catenin ablation, but the effect is not cadherin-dependent. Instead, it clearly depends on a PDZ-ligand mediated interaction with one or more PDZ domain-containing proteins [49]. In contrast, p120 ablation in the same tissue has the opposite effect on spine density and works through a very different mechanism associated with modulation of Rho GTPases [13]. These data highlight the functional importance of the C-terminal PDZ ligand, and illustrate how it can contribute to the markedly different roles for δ-catenin and p120 in hippocampal neurons, as well as other tissue types. Overall, these observations strongly support the notion that the absence of a PDZ ligand domain may have endowed p120 (and p120-bound cadherin complexes) with significant flexibility to evolve novel physical and functional interactions that are independent of PDZ-mediated roles. Second, a potentially critical adaptation is the evolution of alternative splicing in the amino-terminal regulatory domain of p120 and ARVCF [50,51,52], but apparently not in δ-catenin or p0071. The ability to use alternative start sites allows p120 (and ARVCF) to separately express isoform 1 and/or isoform 3, forms of p120 that likely have significantly different roles. 
Specifically, isoform 1, but not isoform 3, contains the N-terminal coiled-coil (CC) domain, a ~40 amino acid N-terminal domain that is presumed to be important because it is almost perfectly conserved in all core family members. p120 isoform 1 is expressed predominantly in mesenchymal (e.g., fibroblasts) and certain other non-epithelial cell types (e.g., neurons), whereas the shorter isoform 3 is preferred in epithelial and other relatively sessile cell types. Importantly, p120 isoform switching (e.g., from isoform 3 to isoform 1) is dynamic and typically coordinated with classical cadherin switching (e.g., E-cadherin to N-cadherin) that occurs during epithelial to mesenchymal transformation (EMT) [53,54]. The ability to directly modulate and/or participate in EMT is likely to be significant, as this process is critically important during development, wound healing and cancer. Notably, p120 is the only family member possessing both of these innovations (i.e., absence of a PDZ ligand domain and presence of alternative start sites). ARVCF undergoes N-terminal alternative splicing, but contains a C-terminal PDZ ligand motif. Whether one or both of these factors substantially influenced the adoption of p120 by classical cadherins is largely speculative. Nonetheless, if adaptive advantage did in fact play a role, the most likely determinant of such an event is p120 itself, and both factors offer plausible advantages relevant to flexibility and/or function. δ-catenin, on the other hand, is likely constrained by PDZ-mediated interactions and the inability to generate an isoform that lacks the coiled-coil domain. Interaction with Kaiso provides a third p120 adaptation that is absent from other family members. In support of a previous study by Fillion et al. [27], we find that Kaiso is vertebrate-specific, and thus coincides with both the origin of vertebrate p120 and the vertebrate-specific expansion of the classical cadherins. Kaiso belongs to a unique family of transcription factors that can associate selectively, and with high affinity, with methylated CpG DNA via zinc finger domains [29]. Kaiso is actually bimodal in that it also binds with lower affinity to a conventional DNA motif [55]. A recent report shows that Kaiso can shut down the transcription of key tumor suppressors (e.g., pRb, p16, Hic1) by interaction with inappropriately methylated CpG islands. Thus, Kaiso may link p120 to epigenetic transcriptional regulation via CpG island methylation, a cancer-relevant and largely vertebrate-specific mechanism associated with the use of hypomethylated CpG islands as sites of active transcription [56]. Interestingly, Kaiso and TCF are reported to associate physically via their DNA binding domains, thereby preventing one another from interacting with chromatin [33]. These data raise the possibility that p120's interaction with Kaiso modulates canonical Wnt signaling through TCF4. While it is unlikely that p120 and Kaiso are essential for Wnt signaling, their influence might be important in the context of complex developmental and regulatory vertebrate environments. Given that Kaiso is absent from non-vertebrate metazoans, the evolution of interactions with both p120 and TCF may represent a vertebrate-specific adaptation connecting cadherin complexes in general, and p120 in particular, to canonical Wnt signaling pathways. 
What exactly this means for vertebrate Wnt signaling and/or related functions has yet to be determined, but in contrast to β-catenin and TCF, lessons in vertebrate p120 or Kaiso functions are unlikely to be guided by genetic studies in non-vertebrate model systems. As mentioned, the increase in the number of p120 family members observed in vertebrates is consistent with studies suggesting that the ancestral vertebrate underwent two rounds of whole-genome duplication [41]. Evidently, these were instrumental in the evolution of at least two broad categories of classical cadherin complexes. The ancestral invertebrate forms of α-catenin and p120 were duplicated and have emerged in vertebrates as α-N-catenin and δ-catenin, both of which are found primarily in neural tissues [57,58,59,60]. Their duplicated counterparts, on the other hand, evolved to become α-catenin and p120, respectively, and are expressed in all solid tissues, including the nervous system. In parallel, the classical cadherins evolved from a single vertebrate ancestor by gene duplications that led to the evolution of at least four classical cadherins, most likely the ancestors of present-day N-, R-, E- and P-cadherins [4,5]. These cadherin paralogues appear to represent early neural (N- and R-cadherins) and non-neural/epithelial (E- and P-cadherins) lineages that subsequently evolved at different rates [4]. Thus, in vertebrates, the ancestral "invertebrate counterparts" of p120 and α-catenin (i.e., δ-catenin and α-N-catenin) appear to be primarily confined to the nervous system, while p120 and α-catenin are found in essentially all solid tissues, nervous system included. The very different features of δ-catenin and p120, as discussed above, may account for the relatively restricted tissue-specific expression of δ-catenin, and the subsequent emergence of p120 as the most widely expressed member of the p120 family in vertebrates. The apparently analogous co-distribution of the α-catenin isoforms (E. Gaucher, personal communication) is probably related to these events, although coordinated alterations in gene regulatory elements could contribute to such events in any of the scenarios described above. In any event, these observations suggest that in most vertebrate tissues, the main functional unit defined by the present-day classical cadherin complex came together for the first time as a result of whole genome duplications that caused the ancestral catenins (δ-catenin and α-N-catenin) to partition with the neuronal lineage (presumably in association with N-cadherin), and their vertebrate-specific derivatives (p120 and α-catenin) to form a second lineage (presumably in association with a common ancestor of the non-neural cadherins, perhaps E- and/or P-cadherins), which was then favored as the raw material for diversification of most other tissues. The former was likely constrained by the need to conserve complex neuronal functions, whereas the rapid evolution of the latter is consistent with a cadherin complex that is more flexible with respect to expansion of novel interactions. The extraordinary success of this ultimate classical cadherin complex is evidenced by the repeated duplication and diversification of the classical cadherins to at least 26 members, most of which use the same basic set of p120-, α- and β-catenin building blocks. 
This core design has thus been preserved and reutilized by classical cadherins for approximately half a billion years, while simultaneously serving as a key driver of vertebrate cell- and tissue diversification. Interestingly, a similar paradigm appears to extend to the desmosomal cadherins and their interaction with the more distant members of the p120 family, the plakophilins. Figure 2 shows that the plakophilins are of vertebrate origin and share a common ancestor with the vertebrate δ-catenin clade. Their appearance coincides with that of several other important components of desmosomes, which also originate in vertebrates. Plakoglobin, for example, evolved around the same time via gene duplication of β-catenin, and functions in both adherens junctions and desmosomes. Interestingly, the desmosomal cadherins later diverge from other cadherins and the family appears to expand within mammals [5], permitting evolution of the desmosome. Our analyses also show that the plakophilins are the fastest evolving members of the p120 family (Figure 2). Importantly, like p120 and β-catenin, the plakophilins also have roles in the nucleus [51,61,62,63,64], suggesting other potentially significant functions that have yet to be defined. Overall, the fastest evolving clade of the p120 family and the desmosomal cadherins appear to be recycling the evolutionary game plan of the classical cadherins. Data matrix construction The complete proteome sequence files of 7 vertebrates (Homo sapiens, Canis familiaris, Mus musculus, Gallus gallus, Xenopus tropicalis, Danio rerio, and Takifugu rubripes), 1 urochordate (Ciona intestinalis), 1 cephalochordate (Branchiostoma floridae), 1 echinoderm (Strongylocentrotus purpuratus) and 4 protostomes (Drosophila melanogaster, Caenorhabditis elegans, Helobdella robusta, and Lottia gigantea) were retrieved from the Ensembl FTP Server (http://uswest.ensembl.org/info/data/ftp/index.html) and JGI Genome Portal (http://genome.jgi-psf.org/) websites. All proteome sequence files were processed so that only the longest protein sequence product of a given gene was retained, using a custom Perl script. Members of the δ-catenin protein family were identified using the BLASTP similarity search algorithm, version 2.2.16 [65]. This was done by blasting the human p120 protein (GenBank accession number: NP_001078927.1) against each proteome and retrieving all protein sequences showing significant similarity. Similar results were obtained using other members of the δ-catenin protein family from Homo sapiens or from other species. Phylogenetic analyses Phylogenetic analyses were performed using the optimality criteria of Bayesian inference (BI) and maximum likelihood (ML). According to the BI optimality criterion, the tree that best explains our protein alignment is considered the best estimate of the true phylogeny of our proteins [66]. According to the ML criterion, the tree that makes our protein alignment the most probable evolutionary outcome given a specific model of protein evolution is considered the best estimate of the true phylogeny of our proteins [66]. BI and ML analyses were performed on two data matrices: the first data matrix was generated by the alignment of whole proteins, whereas the second data matrix was generated by concatenating the individual alignments of each of the nine Arm domains. All alignments were constructed using DIALIGN, version 2.2 [67]. DIALIGN is a local alignment algorithm that does not attempt to align proteins from start to finish. 
Instead, it only aligns the conserved protein regions between proteins and identifies all remaining (poorly conserved) regions as unaligned. This feature is particularly useful for aligning proteins like the p120 family, where conserved domains are flanked by poorly conserved regions of varying length. Importantly, DIALIGN displays all aligned residues in capital letters, and all unaligned residues in lowercase letters. In all cases, all unaligned amino acids were converted to "X", the IUPAC symbol for unspecified amino acids, and were effectively filtered out from downstream phylogenetic analyses. Sequences belonging to each Arm domain were manually identified through careful comparison with the human p120 protein sequence. Alignments of each of the nine Arm domains were done in exactly the same fashion. BI analyses were conducted using MrBayes, version 3.1.2 [68,69,70]. BI phylogenetic trees were constructed using a mix of empirical amino acid substitution matrices, allowing for rate heterogeneity among sites by assuming that a certain proportion of sites are invariable and that the rates of the remaining sites follow a gamma distribution with shape parameter alpha. Two independent analyses were run in parallel. Each analysis contained 4 chains (1 cold and 3 incrementally heated), and trees were sampled every 1,000 generations. The analyses were run for 2,000,000 generations, by which time the average deviation of split frequencies was below 0.01. The trees and parameters sampled from the first 10% of generations from each of the two analyses were discarded as the burn-in. Clade support in BI analyses was assessed using posterior probabilities. ML analyses were conducted using RAxML, versions 7.2.5 and 7.2.6 [71]. ML phylogenetic trees were constructed using the WAG amino acid matrix [72], allowing for rate heterogeneity among sites by assuming that site rates follow a gamma distribution with shape parameter alpha. Clade support in maximum likelihood analyses was assessed using non-parametric bootstrap re-sampling (100 replicates).
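The two preprocessing steps described above, keeping a single longest protein product per gene and masking DIALIGN's unaligned (lowercase) residues as "X", are simple to script. The sketch below is a minimal Python illustration of both steps under stated assumptions; it is not the authors' Perl script, and the FASTA header layout assumed for the gene identifier is hypothetical.

```python
# Minimal sketch (not the authors' Perl script): keep the longest protein
# isoform per gene and mask DIALIGN's unaligned (lowercase) residues as 'X'.
# Assumes FASTA headers of the form ">protein_id gene_id ..." -- the header
# layout is illustrative, not the exact Ensembl/JGI format.

def read_fasta(path):
    """Yield (header, sequence) tuples from a FASTA file."""
    header, chunks = None, []
    with open(path) as fh:
        for line in fh:
            line = line.rstrip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(chunks)
                header, chunks = line[1:], []
            else:
                chunks.append(line)
        if header is not None:
            yield header, "".join(chunks)

def longest_isoform_per_gene(records):
    """Keep only the longest protein product of each gene."""
    best = {}
    for header, seq in records:
        fields = header.split()
        # Hypothetical convention: second field carries the gene identifier.
        gene = fields[1] if len(fields) > 1 else fields[0]
        if gene not in best or len(seq) > len(best[gene][1]):
            best[gene] = (header, seq)
    return list(best.values())

def mask_unaligned(aligned_seq):
    """DIALIGN writes unaligned residues in lowercase; replace them with 'X',
    the IUPAC symbol for an unspecified amino acid, so they are effectively
    ignored downstream."""
    return "".join("X" if c.islower() else c for c in aligned_seq)

if __name__ == "__main__":
    proteins = longest_isoform_per_gene(read_fasta("proteome.fa"))  # illustrative filename
    print(f"{len(proteins)} genes retained")
    print(mask_unaligned("MKLsnqVVAanpRLT"))  # -> "MKLXXXVVAXXXRLT"
```

A filter of this kind keeps the downstream BLASTP searches and alignments to one representative sequence per gene, which is what the concatenated Arm-domain matrices described above require.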
6,675.2
2010-12-31T00:00:00.000
[ "Biology" ]
Kriegel on the Phenomenology of Action I focus on Uriah Kriegel’s account of conative phenomenology. I agree with Kriegel’s argument that some conative phenomenology is primitive in that some conative phenomenal properties cannot be reduced to another kind of property (e.g., perceptual or cognitive). I disagree, however, with Kriegel’s specific characterization of the properties in question. Kriegel argues that the experience of deciding-and-then-trying is the core of conative phenomenology. I argue, however, that the experiences of trying and acting better occupy this place. Further, I suggest that the attitudinal component of the experiences of trying and acting is not, as Kriegel suggests, best characterized in terms of commitment to the rightness or goodness of the objects of experience. Rather, I argue that the attitudinal component is best characterized in imperatival terms. The highest phenomenal determinable is phenomenality per se (what-it-is-like-ness as such, if you will). It is the phenomenal property that is not a determinate of any other phenomenal property.2 Ultimately, at the second layer Kriegel finds at least six primitive types of phenomenology. Perceptual and algedonic phenomenology are accepted as primitive more or less without argument. Argument is given for the existence of cognitive, entertaining, conative, and imaginative phenomenology. Kriegel admits there may be others, and his willingness to consider a range of potential candidates at places in the book lends his discussion considerable interest. The whole thing is worth reading. In what follows, however, I restrict my attention primarily to Kriegel's arguments on behalf of conative phenomenology – that is, the phenomenology associated with motivation and action. Kriegel presents his account of conative phenomenology in chapter 2 of The Varieties of Consciousness. Two claims are critical, and form the core of the account. First, Kriegel argues that some conative phenomenology is primitive in that some conative phenomenal properties cannot be reduced to another kind of property (e.g., perceptual or cognitive). Second, Kriegel argues for a specific characterization of the properties in question: the fundamental form of our conative experience is a proprietary phenomenology of deciding-and-then-trying.3 In what follows, I first elucidate Kriegel's arguments for both claims. Next, I assess the arguments. I agree with Kriegel's irreducibility claim. But I question his characterization of the core properties of conative phenomenology. This disagreement may run deep. For the reasons I offer suggest a separate way to draw the boundaries around conative phenomenology. On this way, the phenomenology of trying and acting may be distinct from conative phenomenology more broadly, with ramifications for how we think of the nature and scope of the primitive kinds of phenomenology Kriegel identifies. Kriegel's conative phenomenology For Kriegel, conative phenomenology involves phenomenal properties associated with motivation and action. These are properties attached to states and processes described as desiring to A, wishing that P, valuing or disvaluing X, preferring X to Y, intending to A, planning to A, deciding to A, trying to A, doing A, and so on. 
Kriegel endorses primitivism about conative phenomenology – the claim that «some phenomenal property is (i) instantiated by some unquestionably conative state, (ii) not instantiated by any nonconative states, and (iii) irreducible to any (combination of) other phenomenal properties».4 Kriegel argues primarily by elimination: he considers and rejects a number of (more or less) plausible proposals for eliminating or reducing conative phenomenology. I am convinced: for purposes of exposition, I consider the last two proposals, which are in my view the most plausible. The first proposal I will consider has it that the phenomenology of doing something with one's body – e.g., clenching one's fist – can be reduced to three things: (i) tactile phenomenology of one's hand's various parts touching each other, (ii) visual (or for that matter cognitive) phenomenology of seeing (or judging) that one caused the fist to clench, and (iii) proprioceptive phenomenology of feeling one's fist muscles contracting.5 Kriegel's primary problem with this proposal is that it gets the timing of the phenomenology wrong. We do not experience the phenomenology of doing something after it is done, as this account entails. Rather, «the phenomenology of doing the contracting of one's muscles takes place during, or rather leading up to, the muscle contraction».6 The last proposal Kriegel considers is due to William James. The Jamesian proposal appeals not to tactile or proprioceptive feedback, but to anticipative tactile or proprioceptive imagery. As Kriegel notes, «on this view, the key element for capturing the conative dimension of the experience of clenching one's fist is the feel of imaginatively anticipating one's fist muscles contracting».7 Against this proposal, Kriegel makes two points. First, it again gets the timing of the phenomenology wrong, placing it before the doing takes place. Second, it «seems false to our experience». Kriegel elaborates: We experience a representation of the act to follow, but also of the act following, and following because we make it follow. That is, we experience not only an anticipation of the act, but also the causing of the act in real time.8 I agree with Kriegel against James that there is an irreducibly agentive or actional element to the phenomenology. Elsewhere, I have argued that there is no good empirical reason to identify this aspect of phenomenology with anticipative imagery, and furthermore that there is some reason to think that this aspect is at least partially constituted by executive states such as intentions and command signals.9 The difficult part is getting the description of this aspect of the phenomenology right. For without a compelling account of the nature of the phenomenology at issue, skeptics will likely find space to dissent. I turn, then, to Kriegel's characterization of the core properties of conative phenomenology. The first aspect of this characterization has to do with the nature of conative attitudes. What seems to characterize conative states is their value-commitment. To want ice cream, to wish for ice cream, to like ice cream, to approve of ice cream – all these commit to the goodness of ice cream. The notion of goodness at play here is maximally neutral – a kind of completely generic goodness. It covers both moral and other kinds of goodness (e.g., aesthetic). It covers relative goodness ("good for") and absolute goodness (good tout court). 
It covers the goodness of states of affairs, but also the goodness of actions ("rightness"), mental states ("fittingness"), and persons ("virtue"). It covers intrinsic and final goodness, as well as instrumental goodness. We may call this generic goodness, or goodness-G for short. Positive conative states (such as liking or approving of something) are characterized by their goodness-G-commitment; negative ones (disliking, disapproving) by their badness-G-commitment.10 Kriegel distinguishes the way that conative attitudes commit to goodness from two other ways of committing to goodness. The first is a belief's representing-as-true p's being good. The second is a sensory state's (e.g., a pain's) sensuous representing-as-good p. Conative attitudes are nonsensuous, and thus they represent-as-good in a nonsensuous way. Kriegel comments: If nonsensuous representing-as-good-G is the mark of the conative, then all conscious conative states exhibit what we may call nonsensuous presenting-as-good-G.11 The second aspect of Kriegel's characterization involves the identification of the most fundamental conative states and processes. Here Kriegel draws on the work of Paul Ricoeur, arguing that the core of conative phenomenology is that of deciding-and-then-trying. Kriegel first considers the experience of deciding, as explicated by Ricoeur.12 The phenomenology of deciding is marked by several features. First, decisions are directed to projects represented as in the future. They thereby have a «character of futurity»; second, deciding «presents the project as in my power»; third, deciding involves a felt pull to action: «unless a mental state involves a pull to action, it is not a decision».13 As Ricoeur has it, in deciding «I feel myself somehow charged, in the way a battery is charged: I have the power to act».14 Further, this pull to action is categorical, distinguishing deciding from related states such as desire. Kriegel claims that the categorical nature of deciding is an attitudinal feature – decisions are characterized by an attitude of commitment to the project or plan that is the decision's content. How does the specific nature of deciding relate to the general mark of the conative – to what Kriegel calls presenting-as-good-G? Kriegel claims that «decision's categorical pull-to-action feel casts decision as directed at the right».15 This is because, for Kriegel, rightness is an attribute of actions, and goodness of states of affairs. And decisions are always about actions. So decisions present-as-right the actions they are about. According to Kriegel, however, the experience of deciding is incomplete on its own. It requires a complement. Deciding feels impatient: its pull to action is unnerving, strongly calling me to act it out. Not only does the decision dispose me to act, but until the decision is acted upon – until the disposition is manifested – there is a subtly unpleasant feeling of tension in my consciousness. Thus, by its very nature, a decision desperately wants to be realized – realized in action. Phenomenologically, the exercise of the will is not exhausted when a decision has been formed – only when the process of realizing the decision is underway.16 Although Kriegel writes in the above passage that a decision wants to be realized in action, he argues that the essential complement of the experience of deciding is an experience of trying. Kriegel is aware that this may seem unnatural. Why think of trying rather than acting as decision's complement? 
Kriegel's reasoning on this point is as follows. We are attempting to characterize some aspect of phenomenology, and phenomenology is «an entirely mental phenomenon».17 Action, however, is not entirely mental. At least in the case of bodily actions, it is constituted in part by bodily movements. Bodily movements on their own lack intentionality, as they fail Chisholm's test for intentionality. That is, action verbs that involve the body support existential generalization and do not evince substitution failure. From "Anatole moved his hand" […] one can validly infer "there is something that Anatole moved"; so existential generalization is supported rather than failed. Further, from "Anatole moved his hand" […] in conjunction with "Simone's favorite object is Anatole's hand" […] one can validly infer "Anatole moved Simone's favorite object"; so there is no substitution failure either.18 Kriegel favors trying over action as the complement of deciding because trying is, Kriegel avers, entirely mental. Trying passes Chisholm's test for intentionality. As such, trying is «the mental "core" of action».19 As for the phenomenology of trying, Kriegel makes three observations. First, like deciding, trying aims at the right: trying represents-as-right the object of the trying. Second, the experience of trying essentially involves an experience of effort: «trying involves the experience of mobilizing force in the face of resistance».20 Third, the experience of trying in some sense satisfies the tension inherent in the experience of deciding. This allows Kriegel to bring together deciding and trying as the joint core of conative phenomenology, as follows: [T]he feel of deciding to ϕ inherently requires a complement in trying. This marks a deep difference between decision and desire. Since desire's pull-to-action feel is merely hypothetical, there is nothing phenomenologically problematic about desiring something but trying to do nothing about it. Things are different with decision: given decision's categorical pull-to-action feel, it is strictly impossible that one should decide to ϕ without trying to ϕ. In that respect, the experiences of deciding and trying are, au fond, two components of a single experience, which for want of a better term I will call the "phenomenology of deciding-cum-trying".21 So concludes my brief elucidation of Kriegel's account of the core of conative phenomenology.22 There is much to like about this account, and many features of it that I am happy to accept. In what follows, however, I focus on areas of disagreement. A different view of the terrain Kriegel has covered is available. My aim is to shed some light on it so that we may compare alternatives and, if things go well for us, sharpen our understanding of the terrain. Assessing the characterization of conative phenomenology's core Recall that Kriegel's account has two aspects. The first has to do with the nature of conative attitudes. The second has to do with a characterization of the core conative states and processes. I discuss these aspects in reverse order, beginning with the phenomenology of deciding. I agree with some features of Kriegel's Ricoeurian account. Conscious deciding has a character of futurity. Conscious deciding involves a felt pull-to-action. (Although I might emphasize that there is an active aspect to this "felt pull", perhaps better described as a felt charge to act.) Conscious deciding is categorical in nature. But I am not convinced that conscious deciding requires a complement. 
Here is why I am not convinced. Action theorists distinguish between distal intentions and proximal intentions. These are intentions to A at some point in the future, and intentions to A now, respectively. Accordingly, we can distinguish distal from proximal decisions. Distal decisions are intentional mental actions of distal intention formation; proximal decisions are intentional mental actions of proximal intention formation. If any conscious decisions require a complement, it is conscious proximal decisions. But – here is a crucial claim in my reasoning – the phenomenology of distal and proximal deciding is the same qua deciding (that is, as regards intention formation). Since distal decisions do not require a complement, we cannot draw the claim about complement requirement from the phenomenology of deciding. In my view, the core of the phenomenology of deciding can be described as that of performing the mental action of assenting or committing to a plan.23 How best to understand the causal processes that undergird the phenomenology is a difficult issue.24 But as for the phenomenology, it seems to me that the mental action of assenting or committing is not irreducible. It involves trying – the mobilizing of effort, as Kriegel puts it – and it typically is successful. So the phenomenology of deciding is just the phenomenology of trying to do or of doing a certain thing, namely, deciding. If this is right, then the core of the phenomenology in question will be reduced to trying, or acting. Certainly Kriegel will opt for trying, for reasons we have seen. But I am not convinced we should dispense with the phenomenology of acting. As Kriegel recognizes, the view that trying is the mental core of action is open to the following objection. Normally, we do not experience ourselves as trying, but as acting. Indeed, Kriegel attributes to Ricoeur the very plausible observation that in our actual experience it is action that manifests itself to us first and foremost, while trying is relatively obscured and requires careful and somewhat tutored attention.25 In response, Kriegel makes two points. The second is an argument that trying, not acting, is the natural complement of deciding. Since I have argued that deciding needs no complement, and is reducible anyway to trying or acting, I leave this point aside. The point on which I focus involves an analogy with perception. Kriegel notes that we experience ourselves as seeing the world, even though nothing in the experience guarantees success: «our experience is in fact a state which might be either a seeing or a hallucinating».26 Similarly, Kriegel notes, for experiences of trying and acting. When it is successful, our experience of ourselves as acting is veridical, and when it is unsuccessful, nonveridical. It remains that nothing in the conative experience itself guarantees its success, just as nothing in a visual experience guarantees its veridicality. So the experience itself is just a trying.27 I fail to see how the lack of a guarantee of success renders the experience itself just a trying. Perhaps Kriegel is thinking that in the absence of a guarantee, trying is all we can know that we have done. But again, I fail to see how a lack of knowledge about the experience's veridicality makes the experience a trying. Here is one way to think about the mental core of action. Either the common experience attached with acting is an experience of trying or one of acting. Consider Ricoeur's example of the clenching of a fist. 
When I consciously and successfully clench the fist, there are different aspects to my phenomenology. I experience directing activity (or mobilizing effort) towards the clenching, and I experience certain things attached to the fist actually clenching. I think it is an open question whether what I experience is best described as a decomposable sum of the experience of trying along with perceptual elements related to the fist actually clenching, or instead best described as a non-decomposable unity of the experience of acting – of my clenching the fist. On either description, the total experience involves perceptual elements. On the former description, we can phenomenologically separate the trying from the clenching. Thus, even in veridical cases, it seems appropriate to describe the agentive core (if not the mental core) of the action as an experience of trying.28 On the latter, it seems inappropriate to do so – the trying and the clenching are unified. I do not consciously try to clench the fist. I consciously clench it. On this description, then, the experience of acting has as much claim to the title "mental core of action" as does the experience of trying. We might, then, have to make room for both. I turn to a different aspect of Kriegel's account of conative phenomenology – his characterization of the conative attitudes. I accept that the phenomenology attached with desiring, wishing, hoping, valuing, and preferring commits to the goodness of its objects in the way Kriegel describes. But I am not convinced that the phenomenology attached with trying and acting does so – at least not essentially. Rather than positing conative attitudes as essential to these kinds of experiences, we might posit something akin to imperatival attitudes. Imperatives issue commands. Drawing on Kriegel's way of explicating attitudes, imperatival attitudes do not represent-as-good (or represent-as-right) their objects. Instead, they represent-as-to-be-done their objects. The difference here is, in part, that imperatival attitudes contain no value commitment. They are concerned only to command (clusters of) action(s). I think this imperatival proposal can capture the phenomenology of intending (and thereby a part of the phenomenology of deciding, namely, the part associated with the pull-to-action and the charge-to-act mentioned above). Recall Kriegel's claim: If nonsensuous representing-as-good-G is the mark of the conative, then all conscious conative states exhibit what we may call nonsensuous presenting-as-good-G.29 Conscious conative states nonsensuously present-as-good their objects. As I have said, I do not disagree that this is accurate as applied to desiring, preferring, and so on. But in the case of intending, I think we confront a subtly distinct type of phenomenology, characterized by a nonsensuous imperatival attitude.30 Conscious intentions nonsensuously present-as-to-be-done their objects. Something else is needed, however, if we wish to capture trying and acting. This is because in trying and acting, an agent does not experience her attempts or actions as to-be-done – she experiences them as what she is doing. In consciously acting, the agent experiences herself at once fulfilling the command she herself generates and maintains.31 It is better to say, then, that the attitude that characterizes the phenomenology of trying and acting is a proprietarily executional attitude. 
When consciously trying or acting, the agent experiences herself as executing the plan that is her goal: she has an experience of directing activity towards goal-fulfillment. This kind of experience makes no comment on the goodness or rightness of the thing done – the experience is only concerned with the doing and with what is being done. Recall Kriegel's claim that it is strictly impossible that one «should decide to ϕ without trying to ϕ». As applied to proximal decisions, I think it is very rare for an agent to decide to ϕ now without trying to ϕ. But I view these as logically distinct experience-types, and thus as separable in principle. It is not inconceivable, in my view, that an agent could have the experience of deciding to ϕ and then, before a trying to ϕ or a ϕ-ing can begin, change her mind. Sometimes evidence for or against a decision continues to accumulate (via sub-personal assessment mechanisms) after a decision is made, leading to rapid changes of mind.32 Even so, my proposal regarding the nature of the attitudes at issue in intending, trying and acting does suggest a disagreement with Kriegel regarding the structure of what Kriegel calls second-layer phenomenal primitives – that is, the kinds of phenomenology that share what-it-is-like-ness and nothing else. If my proposal is right, then what Kriegel identifies as the core of conative phenomenology might be better thought of as the core of a different kind of primitive phenomenology – agentive phenomenology. This is not to say that conative (or motivational) phenomenology is non-existent. A primitive conative phenomenology might still be associated with experiences that are, as Kriegel notes, fundamentally committed to the value of their objects. Experiences associated with desires, wishes, hopes, and preferences seem to be paradigm examples. But we might go further than this. In his chapter on emotional phenomenology Kriegel discusses a proposal due to Brentano that lumps together conative and emotional phenomenology insofar as «both frame their object as good».33 Kriegel notes that if we accept this proposal, «It would then be natural to hold that experiences exhibiting presenting-as-good form a second-layer phenomenal category on a par with experiences exhibiting presenting-as-true and experiences exhibiting mere-presenting».34 Emotional and conative phenomenology would turn out to be different classes of the same second-layer phenomenology – what we might call evaluative phenomenology. One might go even further than this, arguing that algedonic phenomenology – the phenomenology attached with pleasure and pain – constitutes a third kind of evaluative phenomenology. Certainly it is not implausible to think that the valenced aspects associated with pain and pleasure can be explained via attitudes that sensuously present-as-valenced their objects. Against the backdrop of Kriegel's bigger project, the possibility seems worth considering. I cannot defend these possibilities at length here, but it is worth noting that on this proposal, we have an explanation for the sense in which emotional, conative and algedonic states motivate action and feature in deliberation. In virtue of the fact that these experiences frame their objects as good or bad in various ways, these states are capable of informing value-based action and decision. When it comes to the proprietarily agentive experiences of trying and acting, however, something different is going on. The experiences of trying and acting do not directly commit to the goodness or rightness of their objects. 
Rather, these experiences are concerned only with execution. That our tryings and actings can be rationalized by evaluative states and experiences requires an additional layer of mentality, one not found in the tryings and actings themselves.
5,272.4
2016-08-06T00:00:00.000
[ "Philosophy" ]
Feasibility and accuracy of real-time 3D-holographic graft length measurements Abstract Aims Mixed reality (MR) holograms can display high-definition images while preserving the user’s situational awareness. New MR software can measure 3D objects with gestures and voice commands; however, these measurements have not been validated. We aimed to assess the feasibility and accuracy of using 3D holograms for measuring the length of coronary artery bypass grafts. Methods and results An independent core lab analyzed follow-up computed tomography coronary angiograms performed 30 days after coronary artery bypass grafting in 30 consecutive cases enrolled in the FASTTRACK CABG trial. Two analysts, blinded to clinical information, performed holographic reconstruction and measurements using the CarnaLife Holo software (Medapp, Krakow, Poland). Inter-observer agreement was assessed in the first 20 cases. Another analyst performed the validation measurements using the CardIQ W8 CT system (GE Healthcare, Milwaukee, Wisconsin). Seventy grafts (30 left internal mammary artery grafts, 31 saphenous vein grafts, and 9 right internal mammary artery grafts) were measured. Holographic measurements were feasible in 97.1% of grafts and took 3 min 36 s ± 50.74 s per case. There was excellent inter-observer agreement [intraclass correlation coefficient (ICC) 0.99 (0.97–0.99)]. There was no significant difference between the total graft length on hologram and CT [187.5 mm (157.7–211.4) vs. 183.1 mm (156.8–206.1), P = 0.50], respectively. Hologram and CT measurements were highly correlated (r = 0.97, P < 0.001) with excellent agreement [ICC 0.98 (0.97–0.99)]. Conclusion Real-time holographic measurements are feasible, quick, and accurate even for tortuous bypass grafts. This new methodology can empower clinicians to visualize and measure 3D images by themselves and may provide insights for procedural strategy. Introduction Three-dimensional (3D) imaging modalities have been indispensable in driving the rapid development of cardiovascular interventions in the past decade;1 however, they are limited by the 2-dimensional display of computer screens. Numerous extended reality technologies have, therefore, been developed to bring 3D images into the physical world as 3D holograms. Among them, mixed reality (MR) technology can simultaneously display high-definition 3D holograms while preserving the user's situational awareness, making it the ideal tool for pre-procedural planning and intra-procedural visualization.2 MR technologies have been implemented in many clinical scenarios, such as surgery for congenital heart disease or structural heart intervention.3,4 With high-precision motion capture technology onboard state-of-the-art MR headsets, users of MR can manipulate 3D images with gestures and voice commands without breaking sterility, including making measurements of real objects or 3D holograms. The measurement of real-world objects with MR headsets has been tested for medical use in small experiments.5 However, measurement on 3D-rendered holograms, which can be expanded/shrunk, rotated, and cut by the user, is a substantially more difficult task. Thus, an additional study is warranted to assess whether mixed-reality measurement on 3D holograms is a feasible and accurate modality. 
Coronary computed tomographic angiography (CCTA) is a noninvasive and highly sensitive modality for diagnosing coronary artery disease (CAD) and is ideal for the assessment of graft patency after coronary artery bypass graft (CABG) surgery.6 Graft tension, overstretching, kinking, or redundancy are important mechanisms of early graft failure.7 Post-CABG CCTA offers a unique window to assess graft length and trajectory; however, these measurements are time-consuming and technically challenging and, thus, not routinely reported by radiologists in clinical practice. For surgeons, however, this knowledge may help improve surgical planning in the future. In this study, we aim to assess the feasibility and accuracy of using CT-derived 3D holograms for measuring the length of coronary artery bypass grafts. Methods We prospectively analyzed the 30-day post-CABG follow-up CCTA of 30 consecutive cases in the FASTTRACK CABG trial, which is an investigator-initiated, single-arm, multicenter, prospective study aiming to prove the feasibility and safety of planning surgical revascularization solely based on CCTA, without knowledge of the anatomy defined by invasive coronary angiography.8 All patients in this study received post-operative CCTA scans at 30-day follow-up using Revolution computed tomography (CT) scanners (GE Healthcare, Milwaukee, WI, USA), which have a nominal spatial resolution of 230 μm along the X-Y planes, a gantry rotation time of 0.28 s, and a Z-plane coverage of 16 cm, enabling imaging of the heart in one heartbeat. All patients received nitrates before CCTA acquisition and beta-blockers in cases of heart rates ≥65 bpm. Image quality was controlled by expert reviewers using a five-point Likert scale at the patient and segment levels. The CCTA images were transferred and analyzed by an independent core lab (Corrib, Galway, Ireland), where they were processed using the CarnaLife Holo software (Medapp, Krakow, Poland) for hologram reconstruction and the CardIQ W8 CT system (GE Healthcare, Milwaukee, Wisconsin) for validation analysis. The 3D holograms were pre-processed by removing the chest wall, rib cage, and posterior mediastinal structures with the built-in scissor tool, which takes approximately 3 min per case. Two analysts (TT, XH) blinded to clinical information used the CarnaLife Holo software and a HoloLens 2 headset (Microsoft, Redmond, Washington, USA) to perform real-time holographic graft length analysis with voice commands and hand gestures (Figure 1A). The third analyst (S.K.), with full access to clinical information, performed validation measurements on the CardIQ W8 CT system following the automatically generated vessel centrelines with manual adjustments (Figure 1B). Graft lengths were measured from the subclavian artery [for internal thoracic arteries (IMA)] or the aorta [for saphenous vein grafts (SVG)] to the last anastomosis to the coronary arteries. The inter-observer agreement was assessed in the first 20 cases to evaluate the consistency of holographic analysis. 
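On both platforms the graft length is, in effect, the cumulative distance along a sampled vessel centreline. The minimal Python sketch below illustrates that computation for a polyline of 3D points; the coordinates are invented for illustration and are not derived from the study data.

```python
# Minimal sketch: graft length approximated as the cumulative Euclidean
# distance along sampled centreline points (x, y, z in millimetres).
# The example points are illustrative, not patient data.
import math

def polyline_length_mm(points):
    """Sum the straight-line distances between consecutive centreline points."""
    total = 0.0
    for p, q in zip(points, points[1:]):
        total += math.dist(p, q)
    return total

centreline = [(0.0, 0.0, 0.0), (5.2, 1.1, 0.8), (10.9, 3.4, 1.5), (16.0, 7.2, 2.1)]
print(f"graft length: {polyline_length_mm(centreline):.1f} mm")
```

The finer the centreline sampling, the closer this piecewise-linear sum comes to the true curved length, which is why tortuous grafts benefit from following the vessel centreline rather than a straight end-to-end distance.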
Quantitative variables are reported as mean ± standard deviation (SD) or median and interquartile range (IQR, 25–75%) according to distribution. Categorical variables are expressed as numeric values and percentages. The comparison between the CT and holographic measurements was done using the non-parametric Mann-Whitney U test or the paired-sample t-test, as appropriate. The Pearson correlation, intra-class correlation coefficient (ICC), and Bland-Altman method were used to quantify the correlation between paired graft length measurements with the hologram and the CT system. A two-sided P-value < 0.05 was considered statistically significant. All data were processed using SPSS version 27.0 (IBM Inc, Armonk, NY, USA) and R 4.1.1 (The R Foundation for Statistical Computing, Vienna, Austria). Results A total of 70 grafts (30 left IMA grafts, 31 SVGs, and 9 right IMA grafts) were analyzed (Table 1). The feasibility of graft length measurement was 97.1% with both CarnaLife Holo and CardIQ W8 (all grafts were analyzable except for one SVG and one right IMA graft that were occluded on the 30-day follow-up CCTA). On average, the holographic measurement took 3 min 36 s ± 50.7 s per case, compared with around 20 min per case on the CT system. There was excellent inter-observer agreement in the length measurement of the 47 grafts of the first 20 patients using the hologram software, with median graft lengths measured by analysts A and B of 187.5 mm (170.0–209.6) and 187.5 mm (157.7–211.4), respectively [mean difference 1.4 ± 9.1 mm, 95% lower limit of agreement (LLA) −25 mm and upper limit of agreement (ULA) 22.3 mm], with an ICC of 0.99 (0.98–0.99, Figure 1C). There was no significant difference between the graft lengths measured with the hologram and the CT system [187.5 mm (157.7–211.4) vs. 183.1 mm (156.8–206.1), P = 0.50]. There was also no significant difference between modalities in the length measurement across different graft types (left IMA, SVG, right IMA), as shown in Table 1. The Bland-Altman plot showed that the mean difference between the CT system and holographic graft length measurements was 4.14 ± 8.9 mm (95% LLA −13.31 mm, ULA 21.59 mm; Figure 1D). The measurements on hologram and CT were highly correlated (r = 0.97, P < 0.001) with excellent agreement [ICC 0.98 (0.97–0.99), Figure 1E].
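The statistical comparisons described above were run in SPSS and R; the snippet below is a minimal Python sketch of the same quantities (Pearson correlation, paired comparison, and Bland-Altman bias with 95% limits of agreement) on hypothetical paired measurements, not the authors' analysis code.

```python
import numpy as np
from scipy import stats

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Mean bias and 95% limits of agreement between two paired measurements."""
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired graft lengths (mm): hologram vs. CT workstation.
holo = np.array([187.5, 160.2, 201.3, 175.0, 210.8])
ct   = np.array([183.1, 158.9, 198.7, 177.2, 206.4])

r, p_corr = stats.pearsonr(holo, ct)        # correlation between modalities
t, p_paired = stats.ttest_rel(holo, ct)     # paired comparison of the means
bias, lla, ula = bland_altman(holo, ct)
print(f"r = {r:.2f} (P = {p_corr:.3f}), paired P = {p_paired:.3f}")
print(f"bias = {bias:.2f} mm, LoA = [{lla:.2f}, {ula:.2f}] mm")
```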
Discussion In this small prospective feasibility study, we demonstrated that holographic measurements, enabled by MR technology, offer an accurate and efficient option for evaluating complex cardiovascular structures in real time. After a short training session to familiarize themselves with the software, operators can fully leverage the power of advanced imaging while reducing the time spent at the imaging workstation. MR measurements offer the opportunity for clinicians to make custom measurements that are not routinely provided; the length of bypass grafts is a perfect example of such clinician-initiated measurements. The ability to freely measure the graft length would allow surgeons to appreciate the 'in vivo' length of the implanted graft and whether the graft material is in excess or insufficient. This may help them assess the effect of graft material treatment, for instance skeletonization of arterial grafts or different vasodilator treatments, on graft length. With the ability to intuitively make measurements on 3D holograms, surgeons can potentially plan the precise length of arterial and venous grafts that need to be harvested for each CABG operation, reducing unnecessary tissue loss and potentially the risk of early graft failure.7,9 In addition, MR tools can potentially grant clinicians the mobility to leave computer screens and visit the bedside, whilst the intuitive images can also help engage patients in interactive education and discussion sessions. MR technology also offers the unique ability to bring 3D images, including echocardiography, CT, and magnetic resonance imaging, to the operating table for the operator to examine at their fingertips. This may prove essential in complex procedures and operations that require detailed planning and measurements on images, for instance planning the size of surgical implants. In addition, should the need for additional measurements arise, these can be performed immediately ad hoc and without breaking sterility. The next iteration of MR would be even more powerful with the fusion of multiple imaging modalities, including fluoroscopy and echocardiography.10 Our study marks the first step in the evolution of 3D holograms from a pure visualization tool to an interactive imaging modality for real-time reference. Although we showed excellent agreement between holographic graft length measurements and their actual length, further studies with other types of measurement, such as quantified tortuosity, area, or volume, and in other clinical scenarios, including structural heart disease or percutaneous coronary intervention, should be performed to expand the generalizability of hologram measurements. 3D hologram measurements on other modalities, such as echocardiography and intravascular imaging, are potential applications of this technology in the future. Conclusion Real-time holographic measurements are feasible, quick, and accurate even for tortuous coronary bypass grafts. This new methodology enables surgeons to visualize and measure the results of their work by themselves and may provide insights for procedural strategy. Eventually, these 3D holograms may empower surgeons to plan the precise length of arterial and venous grafts to be harvested for every individual case. Institutional Review Board (IRB) Approval: IRB approval was obtained at each individual participating centre. Informed consent statement Written consent was obtained for all patients.
Figure 1 Study workflow and results. Panel A shows an example of real-time measurements on holographic reconstructions with hand gestures and voice commands. Panel B shows the study workflow. Thirty consecutive FASTTRACK CABG trial cases were analyzed using the traditional CT workstation or 3D holograms. An example is provided in the lower panel. Panel C shows the Bland-Altman plot of the holographic graft length measurements in the first 20 cases by two analysts. Panel D shows the Bland-Altman plot of the graft length measurements by hologram vs. CT system. Panel E shows the correlation between the holographic and traditional CT system measurements. Table 1 Graft length measurement with hologram and CT. ICC = intra-class correlation coefficient, LIMA = left internal mammary artery, GSV = great saphenous vein graft, RIMA = right internal mammary artery. a P-value of the comparison between the CT and holographic measurements using the non-parametric Mann-Whitney U test or paired-sample t-test.
CSRR-SICW High Sensitivity High Temperature Sensor Based on Si3N4 Ceramics A new type of wireless, passive, high sensitivity, high temperature sensor was designed to meet the need for real-time temperature measurement in the harsh aero-engine environment. The sensor consists of a complementary split ring resonator and a substrate integrated circular waveguide (CSRR-SICW) structure and uses high temperature resistant Si3N4 ceramic as the substrate material. Temperature is measured by real-time monitoring of the resonant frequency of the sensor. The ambient temperature affects the dielectric constant of the dielectric substrate, and the resonant frequency of the sensor is determined by this dielectric constant, so a functional relationship between temperature and resonant frequency can be established. The experimental results show that the resonant frequency of the sensor decreases from 11.3392 GHz to 11.0648 GHz in the range of 50–1000 °C. The sensitivity is 123 kHz/°C and 417 kHz/°C at 50–450 °C and 450–1000 °C, respectively, and the average test sensitivity is 289 kHz/°C. Compared with previously reported high temperature sensors, the average test sensitivity is approximately doubled, and the test sensitivity at 450–1000 °C is approximately three times higher. Therefore, the proposed high sensitivity sensor has promising prospects for high temperature measurement. Introduction The aero-engine is the heart of an aircraft [1]. In the process of aero-engine development, temperature is an important parameter for performance analysis, design verification and improvement, and flow heat transfer analysis [2]. The aero-engine is characterized by high temperature, high pressure, high speed, complex internal flow, complex structure, small space, etc., so temperature measurement under such working conditions has long been both a focus and one of the difficulties of aero-engine test technology [3]. In view of the complex environment inside the aero-engine, researchers have tried temperature measurement technologies based on various principles. Thin-film thermocouples, radiation temperature sensing, and temperature indicator paint are the main temperature measurement methods used in aero-engines at present [4][5][6][7][8]. The thin-film thermocouple designed in [4] largely eliminates the influence of an embedded thermocouple on the measured temperature field. However, the thin-film thermocouple is not suitable for large-scale installation because of its lead wires, especially for the temperature measurement of high-temperature rotating parts of engines (such as turbine blades). The temperature-measuring crystals in [5,6] are small and need no lead wires, but they can only record the maximum temperature reached during the process and cannot be applied to real-time temperature monitoring. The thermopaint method designed in [7,8] is a non-intrusive temperature measurement method. As a functional paint whose color changes with temperature, thermopaint has the advantages of not destroying the target structure, not affecting the target temperature field, and giving intuitive results. While the measurement characteristics Working Principle and Structure Analysis The measurement principle of the wireless passive high temperature sensor based on microwave scattering technology is shown in Figure 1. The system is composed of two parts: an inquiry antenna and a high temperature sensor.
The inquiry antenna sends out a sweep signal that includes the resonant frequency f0 of the resonant cavity to the temperature sensor, which integrates a slot antenna and a resonant cavity; the sensor couples the incoming signal into the cavity through the slot antenna structure. Only the signal component at frequency f0 can oscillate inside the sensor and is gradually attenuated, while the other frequency components are reflected back to the inquiry antenna. When the ambient temperature changes, the dielectric constant of the resonant cavity material changes accordingly, which affects the resonant frequency of the resonant cavity. The resonant frequency of the sensor under different ambient temperatures can be obtained by measuring the return loss of the reflected signal received by the inquiry antenna, namely the S(1,1) parameter, and the temperature of the measured environment can then be calculated from the variation of the resonant frequency of the sensor. As shown in Figure 2a,b, the temperature sensor consists of an SICW resonator and a CSRR structure. The SICW resonator consists of four parts: the medium substrate, the upper and lower metal surfaces, and the side-wall metal cylinders. The medium material of the sensor is high temperature resistant Si3N4 ceramics. The upper and lower surfaces of the dielectric substrate are covered with a metal platinum paste, and the metal cylindrical through holes in the side walls connect the upper and lower metal surfaces. By metallizing the through holes, the dielectric substrate realizes a waveguide structure, resulting in an electromagnetic field distribution that is nearly the same as that of a conventional waveguide. The CSRR structure is etched into the upper metal surface. The main function of the CSRR structure is that it can generate a concentrated electromagnetic field to improve the sensor sensitivity and realize wireless signal transmission.
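The paper does not give the algorithm used to read the resonant frequency off the measured S(1,1) curve. A minimal sketch of the obvious approach — taking the frequency of the deepest reflection dip from a swept trace — is shown below, using a synthetic dip near the 11.3392 GHz value reported at 50 °C; the trace itself is hypothetical.

```python
import numpy as np

def resonant_frequency_ghz(freq_ghz: np.ndarray, s11_db: np.ndarray) -> float:
    """Resonant frequency taken as the frequency of the deepest S(1,1) dip."""
    return float(freq_ghz[np.argmin(s11_db)])

# Hypothetical sweep: a Lorentzian-like dip centred near 11.3392 GHz.
f = np.linspace(11.0, 11.6, 601)
s11 = -1.0 - 25.0 / (1.0 + ((f - 11.3392) / 0.004) ** 2)
print(f"f0 ≈ {resonant_frequency_ghz(f, s11):.4f} GHz")
```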
Here, D is the diameter of the metal cylindrical holes on the side wall, b is the distance between the centers of two adjacent cylindrical holes, R0 is the radius of the sensor, Reff is the distance between the metal cylindrical holes on the side wall and the sensor center, and H is the thickness of the sensor, namely the distance between the upper and lower metal surfaces. R1 is the external radius of the external resonant ring of the CSRR structure, s1 is the gap width of the external resonant ring, R2 is the external radius of the internal resonant ring of the CSRR structure, s2 is the gap width of the internal resonant ring, and t is the opening width of the resonant rings of the CSRR structure. The resonant frequency of the SICW structure is given by Equation (1) [11], where f0 is the resonant frequency, c is the speed of light, P11 is the first zero of the first-order Bessel function (P11 = 2.4048), ε is the dielectric constant of the dielectric material, and µ is the magnetic permeability of the medium material. When the dimensions of the side-wall metal cylinders satisfy D < 0.1λg, b < 4D, and D < 0.2Reff, the side-wall metal cylinders can be regarded as an ideal electromagnetic wall, and electromagnetic wave leakage through them can be ignored. To a certain extent, the electromagnetic interference of the external metal environment with the sensor signal can also be reduced. When the size of the sensor is fixed, the resonant frequency is determined by the dielectric constant of the dielectric material. The dielectric constant of the sensor material increases with temperature, leading to a decrease in the resonant frequency of the sensor, and thereby realizing the temperature measurement. The equivalent circuit of the designed sensor is analyzed, as shown in Figure 2c. The metal cylinders on the side wall of the substrate integrated waveguide structure can be represented by a parallel inductor (Lr), and the upper and lower metal plates by a capacitor (Cr). The CSRR structure can be represented by the parallel connection of two inductors (Ls) and their inter-ring coupling capacitors (Cs), where Ls1 and Ls2 are the equivalent circuits of the inner and outer resonant rings, respectively, and the metal walls on both sides of the inner and outer resonant rings are equivalent to Cs1 and Cs2 in turn. Among these, the equivalent inductance of the CSRR structure and the equivalent capacitance of the upper and lower metal surfaces of the SICW structure play the dominant role, so the influence of the other parts of the equivalent circuit can be ignored. The resonant frequency of the sensor then follows from the equivalent circuit, with the equivalent capacitance Cr given by Equation (4), where ε is the dielectric constant of the medium between the plates, S is the opposing area of the capacitor plates, d is the distance between the plates, and k is the electrostatic force constant (k = 8.987551 × 10^9 N·m^2/C^2). When the CSRR structure is determined, Ls and Cs are determined. Cr is determined by the medium material between the upper and lower metal sheets. The dielectric constant of Si3N4 ceramics increases with increasing temperature.
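The closed-form expression for Equation (1) is not reproduced in this extraction. Assuming the standard relation for a circular SIW cavity, f0 = c·P11/(2π·Reff·√(ε·µ)), with the symbols defined above (an assumption, not a formula quoted from the paper), a short numerical check using the dimensions and material constants given later in the text (Reff = 5.5 mm, ε = 3.6, µ = 0.98) lands in the neighbourhood of the 11.5 GHz design frequency.

```python
import math

C0 = 299_792_458.0   # speed of light, m/s
P11 = 2.4048         # Bessel-function zero quoted in the text

def f0_sicw_ghz(r_eff_mm: float, eps_r: float, mu_r: float) -> float:
    """Assumed circular-SIW cavity resonance: f0 = c*P11 / (2*pi*Reff*sqrt(eps_r*mu_r))."""
    r_eff = r_eff_mm * 1e-3
    return C0 * P11 / (2.0 * math.pi * r_eff * math.sqrt(eps_r * mu_r)) / 1e9

# Values quoted in the paper: Reff = 5.5 mm, eps_r = 3.6, mu_r = 0.98.
print(f"f0 ≈ {f0_sicw_ghz(5.5, 3.6, 0.98):.2f} GHz")
```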
According to Equation (4), the equivalent capacitance Cr increases, and the resonant frequency of the sensor decreases accordingly. The simplified equivalent circuit of the sensor is shown in Figure 2d. The sensor can be simplified to the parallel connection of an inductor and a capacitor, and the resonant frequency can be simplified to Equation (5). Simulation and Optimization In order to improve the transmission efficiency of the sensor and reduce the loss, HFSS software was used to model and simulate the sensor and the inquiry antenna, respectively. The performance of the sensor was judged by the return loss in the response curve, and the optimal size parameters were obtained. The resonant frequency of the sensor set in this paper is f0 = 11.5 GHz. A high-temperature resistant ceramic (Si3N4) is used as the sensitive material of the sensor. At room temperature, its dielectric constant is 3.6 and its relative permeability is 0.98. A standard rectangular waveguide is used as the excitation source of the CSRR-SICW sensor. The size of the rectangular waveguide is 20.78 mm × 9.24 mm × 46 mm. Under the condition of satisfying the leak-proof size requirements for the metal cylinders on the side wall, and combined with formulas (1) and (2), the dimensional parameters of the sensor were preliminarily calculated as Reff = 5.5 mm, R0 = 7 mm, D = 0.5 mm, and H = 1.1 mm. The number of metal cylinders on the side wall was 36. In order to improve the performance of the substrate integrated waveguide sensor, the external resonant ring radius R1, the external resonant ring gap width s1, the internal resonant ring radius R2, and the internal resonant ring gap width s2 of the CSRR structure were simulated and analyzed, respectively. The simulation results of the parameter optimization are shown in Figure 3.
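Equation (5), the simplified parallel-LC form, is likewise not reproduced in this extraction, but the qualitative argument — a larger dielectric constant raises Cr and therefore lowers f0 — can be illustrated with the generic relation f0 = 1/(2π√(LC)) and purely hypothetical element values (neither L nor C is reported in the paper).

```python
import math

def lc_resonance_ghz(l_nh: float, c_pf: float) -> float:
    """Simplified parallel-LC resonance, f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_nh * 1e-9 * c_pf * 1e-12)) / 1e9

# Hypothetical element values chosen only to illustrate the trend:
# a higher dielectric constant at high temperature raises C and lowers f0.
L = 0.05                       # nH, assumed
for c in (3.8, 3.9, 4.0):      # pF, assumed to grow with temperature
    print(f"C = {c:.1f} pF -> f0 ≈ {lc_resonance_ghz(L, c):.2f} GHz")
```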
According to the simulation results, the radius of the CSRR external resonant ring R1, the gap width of the external resonant ring s1, and the radius of the internal resonant ring R2 all affect the resonant frequency of the sensor. When R1 increases, the resonant frequency decreases accordingly, because the equivalent capacitance increases with R1, leading to a decrease in the resonant frequency of the CSRR structure. When R1 remains unchanged and the metal walls on both sides of the external resonant ring become larger, the distance between the two plates of the equivalent capacitor becomes larger, and the equivalent capacitance decreases accordingly; the resonant frequency of the improved CSRR structure then increases. According to the simulation results, once the distance between the plates reaches a certain value, its influence on the equivalent capacitance gradually decreases. The influence of the radius and gap of the internal resonant ring on the resonant frequency of the CSRR structure can be obtained similarly. The resonant frequency of the CSRR structure of the sensor is mainly determined by the radius and gap of the external resonant ring and the radius of the internal resonant ring; the gap of the internal resonant ring mainly plays a reinforcing role. Therefore, precise regulation of the resonant frequency can be realized by flexibly adjusting the size parameters of the CSRR structure. The structural size of the sensor is shown in Table 1. In order to obtain the quality factor of the sensor, HFSS software was used to model and simulate the designed sensor in eigenmode. According to the simulation results, the quality factor of the designed sensor was 1215.93. The schematic diagram of the designed coplanar waveguide (CPW) antenna is shown in Figure 4. W and L are the width and length of the inquiry antenna, W1 and L1 are the width and length of the radiation patch, W2 and L2 are the width and length of the microstrip transmission line, and m and n are the spacing widths between the receiving floor, the radiation patch, and the transmission line, respectively. The dimensions of the coplanar waveguide antenna are shown in Table 2.
The previously simulated CSRR-SICW high temperature sensor is placed under the CPW antenna to receive and send signals. The model and simplified equivalent circuit diagram are shown in Figure 5. The resonant frequency of the sensor at room temperature is 11.5 GHz. The distribution of the electric field and magnetic field of the sensor is shown in Figure 6, indicating that the strongest electromagnetic field is mainly distributed around the CSRR structure.
Preparation of Sensor The substrate material of the sensor is Si3N4 ceramics. First, the purchased ceramic pieces are cut into discs with a diameter of 14 mm. Using laser drilling technology, the cut circular substrate is placed under the laser drilling machine and drilled with through holes of 0.5 mm diameter, realizing the side-wall cylindrical through-hole array of the SICW structure. Because platinum paste can withstand a high temperature environment of 1800 °C and its material properties are relatively stable in such an environment, we use platinum paste as the surface metal and through-hole filling material of the sensor. We then put the prepared substrate into the micro-hole filling machine and injected the platinum paste so that the upper and lower surfaces are connected, and dried it at 100 °C for 30 min. After that, the dust on the upper and lower surfaces of the substrate was wiped off with 99% anhydrous alcohol and dust-free paper, and the surfaces of the SICW and the slot antenna were coated by screen printing, with a printing thickness of 20 µm. Finally, the sensor was placed in a muffle furnace for sintering; the sintering curve is shown in Figure 7b. High temperature sintering removes the organic solvents from the paste, so that a dense platinum film can be formed on the Si3N4 ceramic substrate. The final high temperature sensor and inquiry antenna are shown in Figure 7c.
High Temperature Test of the Sensor In order to verify the temperature sensing performance of the sensor, a high temperature test platform was built, as shown in Figure 8. The test setup mainly includes a computer, a network analyzer (Keysight P5008A, Keysight Technologies, Santa Rosa, CA, USA), coaxial transmission lines, the CPW antenna, the sensor, and a high temperature muffle furnace. During the test, the sensor and inquiry antenna were placed in the muffle furnace, and mullite was used to prevent the cold end of the inquiry antenna from being damaged by the high temperature environment in the furnace. The cold end of the inquiry antenna is connected with the network analyzer through the coaxial transmission line, which is used to transmit electromagnetic signals to the sensor and receive the reflected signals from the sensor. The network analyzer is connected to the computer to display and save the test data. At room temperature, the test results of the sensor differ slightly from the simulated resonant frequency, as shown in Figure 9a. This may be caused by errors in the sensor machining process, or by the fact that the sensor is in an ideal environment during simulation. The temperature test in this paper starts from 50 °C, the temperature rise test is then carried out step by step, and the data are recorded every 50 °C. To verify the repeatability of the sensor, the maximum temperature was raised to 1000 °C. The measured resonant frequency of the high temperature sensor decreases with increasing temperature, and the curve of this change is shown in Figure 9b. In the figure, S(1,1) is the self-reflection coefficient of the inquiry antenna, and the frequency of the trough point represents the resonant frequency of the sensor. The resonant frequency of the sensor is 11.3392 GHz at 50 °C, 11.2904 GHz at 450 °C, and 11.0648 GHz at 1000 °C. By extracting the resonant frequency at the trough point from the directly measured curves, the change curve of the resonant frequency of the high temperature sensor over 50–1000 °C is finally obtained. The resonant frequency decreases with increasing temperature, which is consistent with the theory. The temperature test was repeated three times, and the results are shown in Figure 9c. According to these results, the designed sensor reproduces the experimental results well. After preliminary analysis and processing of the data, the quartic polynomial fitting curve is shown as a green line in Figure 9d. The fitting curve is expressed in Equation (6). However, the quartic polynomial is only suitable for calculation over a small range and is not suitable for extrapolation. After further processing of the test data, the results show that the resonant frequency has different rates of change with increasing temperature in the temperature ranges of 50–450 °C and 450–1000 °C. Linear fitting was therefore conducted for the data; the fitting curves are shown as the red line and blue line in Figure 9d, respectively, and their expressions are given in Equation (7). According to the analysis of the test data, the resonant frequency of the sensor changes linearly within each temperature range. The sensitivity of the resonant frequency of the sensor is 123 kHz/°C at 50–450 °C and 417 kHz/°C at 450–1000 °C. The average test sensitivity of the sensor is 289 kHz/°C.
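The piecewise-linear calibration of Equation (7) can be reproduced approximately from the three anchor frequencies quoted in the text (11.3392 GHz at 50 °C, 11.2904 GHz at 450 °C, and 11.0648 GHz at 1000 °C). The sketch below fits each segment and returns the slope magnitude in kHz/°C; because it uses only these three points rather than the full data set, the slopes come out close to, but not exactly, the reported 123 and 417 kHz/°C.

```python
import numpy as np

# Anchor points quoted in the text: (temperature in °C, resonant frequency in GHz).
T  = np.array([50.0, 450.0, 1000.0])
f0 = np.array([11.3392, 11.2904, 11.0648])

def sensitivity_khz_per_degc(temps, freqs_ghz):
    """Magnitude of the linear-fit slope of f0(T), converted from GHz/°C to kHz/°C."""
    slope_ghz_per_degc = np.polyfit(temps, freqs_ghz, 1)[0]
    return abs(slope_ghz_per_degc) * 1e6

low  = sensitivity_khz_per_degc(T[:2], f0[:2])   # 50-450 °C segment
high = sensitivity_khz_per_degc(T[1:], f0[1:])   # 450-1000 °C segment
print(f"sensitivity ≈ {low:.0f} kHz/°C (50-450 °C), {high:.0f} kHz/°C (450-1000 °C)")
```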
According to the literature [17], ion displacement polarization is the main factor affecting the variation of the dielectric constant with temperature. The polarization of the Si3N4 dipole moment decreases with increasing temperature, which results in an increase of the dielectric constant of the Si3N4 ceramics. According to Equation (2), the resonant frequency is affected not only by the dielectric constant of the dielectric material but also by the metal cylinders on the side wall. The thermal expansion of the Si3N4 ceramics increases with temperature, which changes the relative positions of the metal cylinders on the side wall. This explains why the sensor is more sensitive at 450–1000 °C. Table 3 shows a comparison between the sensor designed in this paper and previously reported sensors. The sensor designed in this paper has the following advantages: 1. The temperature range is large enough to test relatively high temperatures; 2. The sensor has high sensitivity: in the temperature test it showed a large frequency change, and the resonant frequency signal is strong and easy to capture; 3. The small size of the sensor makes it easy to mount inside the engine, for example on a metal blade surface. Conclusions In this paper, a wireless passive high temperature sensor based on Si3N4 ceramics is proposed. The sensor consists of an improved CSRR and SICW structure, and its equivalent circuit is analyzed. The dimensional parameters of the sensor were optimized and determined by theoretical calculation and HFSS software simulation. The high temperature test results show that the resonant frequency of the high temperature sensor decreases from 11.3392 GHz to 11.0648 GHz in the range of 50–1000 °C. At 50–450 °C and 450–1000 °C, the sensitivity of the sensor is 123 kHz/°C and 417 kHz/°C, respectively, and the average test sensitivity is 289 kHz/°C. Through this analysis, the sensor designed in this paper was found to be small in size, easy to install on metal blade surfaces, and of higher sensitivity at 450 °C and above. The experimental results verify the rationality of the design and the feasibility of the CSRR-SICW wireless passive high temperature sensor based on Si3N4 ceramics, and show its application potential in harsh ultra-high-temperature environments. Author Contributions: Conceptualization and methodology, S.S.; software and writing, T.R.; formal analysis, L.Z.; data curation, F.X. All authors have read and agreed to the published version of the manuscript. Funding: This research was funded by the National Nature Science Foundation of China (Grant No. 51875534) and the Shanxi "1331 Project" Key Subjects Construction.
Implementation of cryogenic tender X-ray HR-XANES spectroscopy at the ACT station of the CAT-ACT beamline at the KIT Light Source The capabilities for the investigation of radionuclide materials by high-resolution X-ray emission spectroscopy techniques at the CAT-ACT beamline of the Karlsruhe Institute of Technology (KIT) Light Source have been expanded to allow for the investigation of redox-labile low-concentration samples at cryogenic temperatures. Introduction Synchrotron radiation based speciation methods for radioactive samples are often limited to dedicated beamline endstations due to the strict safety and radiation protection regulations for handling radionuclide materials at light sources (Scheinost et al., 2021). Two of those few endstations - the INE-Beamline and the ACT station at the Karlsruhe Institute of Technology (KIT) Light Source (Rothe et al., 2012; Zimina et al., 2017) - are operated by KIT-INE (the Institute for Nuclear Waste Disposal at KIT). However, some synchrotron facilities accept proposals to investigate radioactive materials on conventional beamlines if the radionuclide activities stay below the exemption limit. Others temporarily upgrade the safety status of selected endstations to allow for radionuclide research up to a certain extent - generally excluding highly radioactive materials like spent nuclear fuel or certain isotopes. During the past decade, high (energy) resolution X-ray absorption near-edge structure (HR-XANES) spectroscopy - also called high-energy-resolution fluorescence-detected X-ray absorption near-edge structure (HERFD-XANES) - has proven to be a highly valuable tool for the oxidation state determination of actinides (An). However, in this case, on the one hand the inefficient detection process requires high photon fluxes (where samples might be prone to beam-induced alterations) and rather high analyte concentrations. On the other hand, it is necessary to find a trade-off between the necessary sample containment and radiation safety precautions at synchrotron radiation facilities and sufficiently transparent X-ray windows. Hence, already for investigations at the moderately high X-ray energies corresponding to the An L3 absorption edges (Th-Es: ~16.3-20.4 keV), HR-XANES investigations of actinides are experimentally highly demanding. This applies all the more in the 'tender' X-ray regime below ~4.5 keV, where the corresponding An M absorption edges are probed. It is for this reason, for example, that HR-XANES data for the L- and M-level absorption edges of transuranium elements within the An family basically stem from only a few X-ray emission spectrometers at four synchrotron light sources worldwide. While reports of transuranium experiments are generally scarce, we want to mention several beamlines that have started to develop their capabilities towards L3-edge X-ray emission spectroscopy experiments of various uranium or thorium samples. Pioneering work was carried out for intermetallic U compounds by Rueff et al. (2007) at ID16, and for U oxide compounds by us and collaborators at ID26 of the European Synchrotron Radiation Facility (ESRF) (Vitova et al., 2010) and at the INE-Beamline, KIT Light Source [Karlsruhe Research Accelerator (KARA) storage ring] (Walshe et al., 2014).
Different types of three-analyzer-crystal X-ray emission spectrometers were used for U L3-edge studies at BL39XU at SPring-8 (Honda et al., 2020) and at beamline I20 at Diamond Light Source (Pan et al., 2020), and for Th L3-edge experiments at BL14W1 at the Shanghai Synchrotron Radiation Facility (SSRF) (Bao et al., 2018; Duan, Bao et al., 2017; Duan, Gu et al., 2017). The microXAS beamline at the Swiss Light Source (SLS) started experiments with high spatial resolution (1 µm) using an emission spectrometer with cylindrical von Hamos geometry (Szlachetko et al., 2012) at the U L3-edge (Grolimund, 2021). Moreover, a portable single-analyzer-crystal X-ray emission spectrometer was used at BL11-2 at the Stanford Synchrotron Radiation Laboratory (SSRL) to obtain U L3-edge X-ray emission spectroscopy data (Ditter et al., 2020). Concerning transuranium elements, early work was performed at the high-resolution X-ray emission spectrometer (HR-XES) at beamline ID16 of the ESRF, where already in 2010 Am L3-edge resonant inelastic X-ray scattering (RIXS) data were recorded (Heathman et al., 2010). Another beamline facility equipped to temporarily perform radionuclide work is beamline 6-2 at the SSRL. Here, a seven-analyzer-crystal Johann-type X-ray emission spectrometer with 1 m bending radius was used, e.g. to study the Pu L3-edge of intermetallic plutonium phases (Booth et al., 2012, 2014), PuO2 (Tobin & Shuh, 2015) and the Np L3-edge of NpSe2 solid phases (Jin et al., 2019). Popa et al. (2015) analyzed the structure of Pu(III) phosphate, applying the Johann-type X-ray emission spectrometer at the MARS beamline at SOLEIL for Pu and Am L3 HR-XANES measurements, thereby confirming the +3 valence state of Pu as well as that of its Am daughter resulting from β− decay. At the INE-Beamline in 2014, in a collaborative project with the Joint Research Center Karlsruhe, we investigated the oxidation states of Pu at the Pu L3-edge in a study exploring the phase diagram of UO2-PuO2 at high temperatures (Böhler et al., 2014) and for Pu oxide nanocrystals (Hudry et al., 2014). A large increase in sensitivity to differences in the electronic structure unfolds for HR-XANES spectroscopy at the An M4,5-edges (Butorin et al., 1996; Rothe et al., 2012; Kvashnina et al., 2013; Vitova et al., 2013). Difficulties arise because the corresponding absorption and emission energies between ~3 and 4.5 keV belong to the tender X-ray region. At these energies, air molecules as well as containment materials strongly scatter and/or absorb the X-rays [the incident beam impinging onto the sample, the X-ray fluorescence/inelastically scattered radiation emitted by the sample, and the monochromated radiation diffracted and focused by the analyzer crystal(s) onto the detector]. For this reason, a specific adaptation of the experimental stages, such as He-filled bags bridging the air gap between sample, crystal(s) and detector, an He encasement enclosing the whole spectrometer or even vacuum conditions, is required to enable efficient X-ray emission spectroscopy. Up to now, this has limited the reported M-edge studies of transuranium elements to only four beamlines worldwide.
At the KIT Light Source a five-analyzer-crystal Johann-type X-ray emission spectrometer based on an original ID26 design was initially commissioned and successfully tested at the INE-Beamline for radionuclide research (Prüßmann, 2016; Vitova et al., 2017; Bahl et al., 2017; Popa et al., 2016; Rothe et al., 2012), and was later on transferred to the ACT station of the new CAT-ACT wiggler beamline after completion of the endstations in 2016 (Zimina et al., 2017). With this spectrometer a large contribution to the development of M-edge spectroscopy for the transuranium elements was achieved; e.g. the first-ever measured Pu M5-edge XANES/HR-XANES (Rothe et al., 2012) was already published in 2012. Later on, the first Pu and Np M5-edge HR-XANES and 3d-4f RIXS and various Np, Pu, Am and U M-edge HR-XANES spectra were recorded (Vitova et al., 2020; Bahl et al., 2017; Epifano et al., 2019). At the MARS beamline of the SOLEIL synchrotron - belonging to the few dedicated synchrotron radiation stations for the investigation of nuclear materials including waste forms - an He-filled bag between the sample position, a single analyzer crystal with 1 m bending radius in Johann geometry and the detector can be used to reduce absorption in air. With this setup, Pu M4-edge HR-XANES/RIXS experiments on plutonium carbonate samples, PuO2 phases (Gerber et al., 2020) and uranium compounds were performed (Hunault et al., 2019). At beamline ID26 of the ESRF a five-analyzer-crystal Johann-type X-ray emission spectrometer with 1 m bending radius was employed to record, for example, the Pu M4-edge of plutonium nanoparticles (Gerber et al., 2020; Kvashnina et al., 2019). A similar spectrometer is meanwhile situated at the ESRF Rossendorf beamline (BM20 ROBL), another dedicated beamline for radionuclide research. Although only L3-edge data of Pu, U and Th have been published so far (Amidani et al., 2021; Gerber et al., 2020), the whole spectrometer can be placed inside a bag filled with He to record edges at tender energies. Besides applying crystal spectrometers in the tender to hard X-ray spectral range, a few beamlines worldwide have taken provisions for high-resolution An spectroscopy research in the soft X-ray to vacuum ultraviolet range, giving access to shallower absorption edges - with naturally narrow line widths - at the An N or O levels (or the core levels of low-Z ligand atoms), or for hard X-ray inelastic scattering in the meV resolution range, enabling the detection of specific X-ray photon-phonon interactions, i.e. accessing basic solid-state properties such as superconductivity in solid-state An materials. In the former research area, pioneering An spectromicroscopy work employing the soft X-ray scanning transmission X-ray microscopy endstation at Advanced Light Source (ALS) beamline 11.0.2 (Lawrence Berkeley National Laboratory) has been reported (Dalodière et al., 2017). Scientists employing ALS beamline 7.0.1's soft X-ray absorption and resonant scattering capabilities have provided, for example, Pu N6,7-edge XANES data as well as NpO2 RIXS at the Np O5-edge (~100 eV) (cf. Modin et al., 2011; Tobin et al., 2002; Butorin et al., 2013, 2016).
Feasibility of experiments in the latter field has been demonstrated, for example, in Pu metal/PuO2 scattering experiments at Advanced Photon Source (APS) beamline 30-ID-B,C (HERIX endstation, Argonne National Laboratory), offering momentum-resolved inelastic X-ray scattering with high resolution (~1.5 meV) (Manley et al., 2009, 2012). Almost all of the reported HR-XES studies work with relatively high An concentrations. This, however, often prevents studies of environmental samples from contaminated land sites or of sorption and diffusion samples from experiments in the context of safety case studies in nuclear waste disposal research, where loadings of actinides below the p.p.m. range should prevail. In this article, we present the technical developments at the ACT station towards enabling those low An loading HR-XANES experiments while still retaining the flexibility for other experimental techniques such as conventional high-energy X-ray absorption fine structure (XAFS) in transmission or total fluorescence-yield detection mode and Laue-type high-energy or wide-angle X-ray scattering (HEXS/WAXS) up to ~55 keV photon energy. As mentioned above, another important point is sample integrity under irradiation conditions at highly brilliant synchrotron radiation sources. Different types of beam-induced alterations are observed at ambient conditions, especially for often redox-labile An specimens (Wilk et al., 2005; Denecke et al., 2005). These photo-oxidations or photo-reductions mostly occur for aqueous systems or hydrated pastes and, thus, are especially relevant for environmental samples. Beam-induced changes need to be thoroughly monitored and excluded to the utmost degree to obtain unbiased speciation information. One possibility to exclude those changes is cooling the samples below 180 K (Göttlicher et al., 2018). From screening the relevant literature, to the best of our knowledge, all 'photon hungry' HR-XANES/XES experiments on transuranium elements have up to now been performed at room temperature. In Section 3 we will present the development of a cryogenic sample holder for radioactive samples adapted to a commercial liquid N2/He cryostat. Here the challenge was to design and approve a system for double encapsulation featuring sufficiently X-ray transparent windows for spectroscopy in the tender X-ray region while - at the same time - withstanding thermal-insulation vacuum conditions. INE beamline facilities at the KIT Light Source INE at KIT (KIT-INE) operates two experimental stations dedicated to the investigation of radionuclide materials by X-ray based methods at KARA (the former ANKA synchrotron light source) - the INE-Beamline at a bending magnet port (fully operational since 2005) and the ACT laboratory at the CAT-ACT wiggler beamline (commissioned in 2016) (Rothe et al., 2012; Zimina et al., 2017). Both beamline hutches are equipped and licensed to investigate radioisotopes up to activities equal to 10^6 times the (European) exemption limits and 200 mg for the fissile isotopes 235U or 239Pu (the exemption limits are generally 10^3 or 10^4 Bq for most relevant radionuclides under investigation at these beamlines). They are permanently designated as monitored areas for handling radioactive materials. Their status can be upgraded to temporary controlled areas whenever radionuclide inventories exceed the exemption limits (applying a sum rule in the presence of multiple isotopes).
The license at both stations enables the investigation of 'hot' materials including genuine nuclear-waste forms as well as in situ investigations at non-ambient conditions (e.g. high p and/or high T) of radionuclides within a double containment. The beamline concept benefits from the flexibility of evaluating and approving new experimental setups by INE's own technical commission, ensuring adherence to safety regulations while at the same time avoiding the limitation of experiments by standardized sample containments or an α-box environment. The focus at both beamlines has originally been placed on XAFS-based speciation investigations in the context of the nuclear-waste-disposal safety case (encompassing processes during interim storage of spent nuclear fuel or nuclear-waste glass and final deep geological disposal of these materials). More recently, another emphasis has been placed on fundamental An studies exploiting the capabilities of HR-XES techniques within the basic KIT/Helmholtz research (NUSAFE program topic) or third-party-funded projects such as the European Research Council (ERC) Consolidator grant 'THE ACTINIDE BOND properties in solid, liquid and gas state'. Recent upgrades at the ACT station The Johann-type X-ray emission spectrometer at the ACT laboratory is routinely applied for HR-XES/HR-XANES and RIXS experiments in a broad energy range encompassing the An M- and L-edges (Th-Es feasible). Several analyzer-crystal sets [five each of Si(111), Si(110) and Ge(111), and four each of Ge(220) and Ge(311); Saint-Gobain, France] with a bending radius of 1 m are available to cover the relevant absorption levels. The tender X-ray range at ACT is accessible with an Si⟨111⟩ crystal pair in the cryo-cooled double-crystal monochromator (DCM) down to ~3.4 keV [cf. Zimina et al. (2017) for details]. The beam is focused by a toroidal Si mirror, resulting in a spot size of ~1 mm × 1 mm. The photon flux between 3.6 and 4 keV shown in Fig. 1 has been precisely determined using a short ionization chamber (Oken, Japan, model S-1329A1 with 33 mm electrode length) filled with N2 at ambient pressure. In the tender X-ray region, scattering and absorption of X-rays is efficiently minimized by enclosing all beam paths - i.e. that of the impinging beam and those between the sample, the analyzer crystals and the single-diode silicon-drift-detector entrance window (KETEK VITUS SDD, Germany), arranged in a vertical Rowland circle geometry - in an He atmosphere. A rigid plexiglass box housing the spectrometer components has been designed and recently installed on the ACT breadboard table (Fig. 2). The improved design allows one to keep stable conditions with less than 150 p.p.m. oxygen inside the box through a controlled He flow of ~5 l min^−1. The five crystal holders are placed inside a flexible polyvinyl chloride (PVC) bag clamped by the five crystal mounts and spanned by an oriel protruding from the left-hand side wall (in beam direction) of the He box. This setup allows for sufficient freedom of motion of the crystals along their individual Rowland circles. The He box is equipped with a spacious rectangular lock chamber for exchanging sample cells (e.g. in situ cells or combined UV-Vis/XES setups) and a panel at the back side (in beam direction) providing various media and power feedthroughs (e.g. He/N2/Ar inert gas supply, cooling water, motor power/encoder/limit-switch lines, detector high-voltage/power supply and signal lines, vacuum pump hose, etc.).
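For orientation, the Bragg angle at which an Si(111) crystal pair selects a given tender X-ray energy follows directly from Bragg's law. The sketch below is illustrative only: it assumes the textbook Si lattice constant of 5.4309 Å (not a value quoted in the paper) and evaluates the angle at the ~3.664 keV Np M5-edge energy mentioned later in the text.

```python
import math

HC_KEV_ANGSTROM = 12.398420                  # hc in keV·Å
D_SI_111 = 5.4309 / math.sqrt(3)             # Si(111) plane spacing in Å, assuming a = 5.4309 Å

def bragg_angle_deg(energy_kev: float, d_angstrom: float) -> float:
    """First-order Bragg angle for a given photon energy and lattice-plane spacing."""
    lam = HC_KEV_ANGSTROM / energy_kev        # wavelength in Å
    return math.degrees(math.asin(lam / (2.0 * d_angstrom)))

# Np M5 absorption edge quoted in the text (~3.664 keV).
print(f"Si(111) Bragg angle at 3.664 keV ≈ {bragg_angle_deg(3.664, D_SI_111):.1f}°")
```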
A special access port based on a gear-stick sleeve at the front-side right-hand corner (in beam direction) of the box allows one to insert the supply lines for the modified LN2 cryostat, which are bundled in a flexible stainless-steel tube (cf. Fig. 5, left image, and the detailed description below). The plexiglass box is further equipped with a large detachable lid sealed by a PVC gasket at the wall opposite the crystal stage (right-hand side in beam direction). The large opening provides inside access for installation of the standard transmission/fluorescence XAFS detection equipment with ionization chambers (Poikat, Germany, positioned on X-95 rails) and an electrically cooled eight-element LEGe detector (Mirion, France, cf. Fig. 5, right image) or a Laue diffraction setup using X-ray sensitive storage screens (PerkinElmer, USA). In the closed configuration at inert gas conditions, long-sleeved gloves at various positions on all four side walls and the oriel allow one to handle sample cells and manipulate samples and equipment inside the box, including the exchange of analyzer-crystal sets. Compared with the original setup described by Zimina et al. (2017), the improved design of the box enclosing the HR-XES setup offers the following advantages: (i) The box remains permanently installed on top of the breadboard table, minimizing the time to switch between HR-XES and standard X-ray absorption spectroscopy (XAS) experiments at ACT. (ii) The front and back side walls of the box are fitted with ISO-KF 50 flange feedthroughs, simplifying switching between the ACT and CAT stations by bridging the box with a vacuum pipe (possible without opening the box). (iii) There is significantly improved He gas purity and correspondingly higher photon flux. (iv) There is significantly reduced He consumption during tender X-ray measurements. (v) It has a large lock chamber, e.g. for transfer of in situ sample cells. Figure 1 shows the current photon-flux conditions in the tender X-ray energy range at the ACT experiment; the synchrotron radiation beam path conditions from source to the I0 monitor (ionization chamber) were: 2.5 GeV storage-ring electron energy, average electron-beam current ≈ 120 mA, 2 mm (h) × 1 mm (v) front-end (white beam) slit aperture, cylindrical collimating first Si mirror, 250 µm Be vacuum-protection plus 100 µm graphite thermal-protection window, Si⟨111⟩ DCM crystal pair, toroidal focusing second Si mirror, 25 µm KAPTON window and a synchrotron radiation beam spot size ≈ 1 mm × 1 mm. Implementation of a liquid nitrogen flow-through cryostat for HR-XES measurements A commercial flow-through cryostat primarily developed for microscopy applications (Oxford Instruments MicrostatHe, UK) - optionally operational with LHe or LN2 as cryogenic coolant - was selected to be adapted to the HR-XES setup at ACT. It has been modified for tender X-ray (An M-edge) requirements while providing a special clamp mechanism enabling fast sample changes with the new cryo-sample cells. The instrument was chosen based on the special vacuum-chamber dimensions offering a large solid-angle field of view (~140° opening angle) onto the sample(s) and a narrow gap of ~2 mm between the sample surface and the outer vacuum window (as required, e.g., for cryo-microscopy investigations). This in turn allows the X-rays isotropically emitted from the sample to be captured and diffracted by all five analyzer crystals in the 1 m Rowland circle arrangement.
The original cryostat sample holder - bolted to the heat-exchanger block with the liquid coolant circulating inside absorbing the thermal energy - was replaced by a copper fork with a slot clamping the actual sample cells (Fig. 3). The sample cell (Fig. 4) - adhering to the double-containment rule for radioactive samples - consists of six stacked components (from bottom to top): the threaded anodized aluminium body with a groove for the copper fork and up to six elongated cavities milled into one side receiving different sample materials (solids/powders, wet pastes or liquids), a flat ring-shaped TEFLON gasket, an 8 or 12.5 µm KAPTON (polyimide) disk, a second TEFLON gasket, a second KAPTON disk, and a brass cap nut with a large opening giving access to the sample cavities below the transparent KAPTON membranes. The nut is tightly screwed onto the cell body, pressing the stacked gaskets and disks against each other and against the cell body, thereby tightly encapsulating the radioactive materials. Extensive pumping tests exposing the sealed sample cell to the thermal-insulation vacuum (in the 10^-5 mbar range) have been carried out with inactive dummy samples prior to initial experiments with radioactive materials. The loaded sample cells are pre-frozen in an LN2 bath and introduced via the lock chamber into the dry He atmosphere inside the box. The cylindrical cryostat chamber is opened at cryogenic temperatures and the sample cell is attached to the copper fork. The original quartz window of the sample-chamber flange facing the top side of the sample cell has been replaced by an epoxy-glued 12.5 µm KAPTON disk. Although already absorbing ~35% of the photon intensity at the Np M5-edge energy [E Np(3d5/2) ~ 3664 eV, E Np(M1) ~ 3261 eV], this cryostat window thickness has been a necessary compromise between vacuum stability and X-ray transparency. The vacuum chamber attached to the flexible tube containing the LN2 supply and exhaust lines, as well as the thermal sensor (PT-100 type) and heater connections, is mounted by a half-shell adapter on top of the sample positioning stage at 45° relative to the impinging beam (Fig. 5). Its position is precisely adjusted by a set of crossed alignment lasers. [Spliced figure caption: Top, a cross-section CAD drawing of the modified MicrostatHe vacuum chamber (side view); the original sample holder has been replaced by a copper fork clamping the sample cell, as described in Section 4. Bottom, a 3D CAD drawing of the vacuum chamber with removed flanges, mounted on top of the positioning stage, exhibiting the sample holder (copper fork) and sample cell assembly.] A modified sample cell was fitted with a second PT-100 sensor to measure the temperature directly at one of the sample cavities. This setup enabled a sample temperature of 141.2 ± 1.5 K under irradiation conditions with the heat-exchange block cooled down and stabilized at LN2 temperature (~77 K). So far no attempts to operate the cryostat equipment with liquid He as coolant have been made. Future design modifications aim at improving the thermal contact between the sample cell and the clamping mechanism. Np M5-edge HR-XANES measurements at low concentrations and cryogenic conditions As already mentioned above, oxidation-state changes of redox-sensitive An elements [primarily U, Np and Pu, which may exist (or even co-exist) at different oxidation states (IV-VI) at the relevant geochemical conditions] are not exclusively induced by redox partners such as Fe(II) in mineral surface reactions.
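As a rough cross-check of the ~35% absorption loss quoted above for the cryostat window, the short sketch below computes the transmission of a 12.5 µm polyimide (KAPTON) film near the Np M5-edge energy. The attenuation length used is an approximate, database-derived value and is an assumption of this sketch, not a number given in the text.

# Sketch: X-ray transmission of a thin KAPTON (polyimide) window.
# Only the 12.5 micrometre thickness comes from the text; the attenuation
# length (~29 micrometres near 3.66 keV) is an assumed, tabulated-type value.
import math

thickness_um = 12.5      # window thickness from the text
att_length_um = 29.0     # assumed attenuation length of polyimide near 3.66 keV

transmission = math.exp(-thickness_um / att_length_um)
print(f"T = {transmission:.2f}  ->  absorbed fraction = {1 - transmission:.0%}")
# With these numbers T is about 0.65, i.e. roughly 35% of the intensity is
# absorbed, consistent with the value quoted for the Np M5-edge energy.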
These changes have been observed to occur mostly for wet samples in XAS-based speciation experiments under inert gas conditions. It is suspected that water radicals formed in bright X-ray beams might interact with the An cations and change their oxidation state. Another conjecture attributes the effect to increased sample temperatures under X-ray irradiation conditions. As an example to illustrate the performance of our improved tender X-ray emission spectroscopy setup, we present Np M5-edge HR-XANES results in relation to our recent study on the interaction mechanisms of 237Np - a long-lived, α-emitting isotope generated during the operation of nuclear fission reactors - with clay minerals, which are highly relevant sorbents in the multi-barrier concept for nuclear-waste disposal in deep geological formations. In this context, illite is considered an important mineral fraction of several clay formations discussed as potential host rocks. Details on the successful measurement of low Np concentrations on illite are currently under revision in a previously submitted article (Schacherl et al., 2021). Therein, experiments at conditions expected to prevail in the far field of a breached disposal site, i.e. low Np(IV/V) concentrations down to 1 p.p.m., are described in order to verify the lowest possible Np loadings on clay surfaces for which Np speciation using the HR-XANES technique is still possible. Moreover, in order to suppress radiation-induced changes, the samples in our Np M5-edge HR-XANES experiments were cooled down to 141.2 ± 1.5 K using the setup described above. These results were subsequently compared with room-temperature measurements. An Illite du Puy (6.94% Fe2O3) (Montoya et al., 2018) sample was contacted with Np at an initial concentration of c0[Np(V)] = 1 × 10^-6 mol l^-1 for 11 days at pH 9.2 with a solid-to-liquid ratio of 2 g l^-1 and I = 0.1 mol l^-1 NaCl, resulting in a sorbed Np loading on illite of 83 ± 2 p.p.m. The sample preparation procedure is likewise described in detail by Schacherl et al. (2021). After centrifugation at 15 000 r min^-1 for 80 min (Lab Logistics Group GmbH, Germany), the wet illite paste was transferred to the cryostat sample cell in an inert gas (Ar) glove box and encapsulated by two 8 µm KAPTON layers (polyimide film, Advent Research Materials, United Kingdom). The sealed sample cell was checked for the absence of surface contamination and transferred to the beamline inside a gas-tight transport container. At the ACT station the sample cell was pre-frozen in a liquid nitrogen bath and subsequently mounted on the sample holder inside the He box as described in Section 4. OriginPro (OriginLab, 2018) was used to calibrate the spectra using the reference scans recorded before and after each sample scan. The higher photon flux achieved with the advanced He box setup, further optimization of the beamline optics alignment and recent improvements of the KARA storage-ring operation have led to the observation of beam-induced alterations in hydrated samples such as wet Np-sorbed illite pastes. This is clearly shown in Fig. 6(a), where averages of several Np M5-edge HR-XANES scans of the Np/illite sample are depicted as a function of the accumulated irradiation time for a series of measurements performed at room temperature on the same sample spot. Peak B - diagnostic of the presence of Np(V) 'neptunyl' species (Vitova et al., 2020) - disappears after prolonged irradiation time, strongly suggesting reduction of Np(V) to Np(IV).
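A quick mass-balance check, a back-of-the-envelope sketch using only the batch parameters quoted above and the molar mass of 237Np, shows that the reported loading corresponds to roughly 70% uptake of the added Np.

# Sketch: maximum possible Np loading on illite if all added Np(V) sorbed,
# using only the batch parameters quoted in the text (c0, solid-to-liquid ratio).
c0 = 1e-6               # mol/l, initial Np(V) concentration
M_Np = 237.0            # g/mol, molar mass of 237Np
solid_to_liquid = 2.0   # g/l

max_loading_ppm = c0 * M_Np / solid_to_liquid * 1e6   # g Np per g illite, in ppm
uptake = 83.0 / max_loading_ppm                       # measured 83 +/- 2 ppm
print(f"maximum loading = {max_loading_ppm:.0f} ppm, implied uptake = {uptake:.0%}")
# -> about 119 ppm maximum, so the measured 83 ppm corresponds to ~70% sorption.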
The 'white line' maximum (peak A) position does not significantly change upon reduction of Np(V) to Np(IV) - a well known anomaly for pentavalent 'actinyl' species related to the loss of the trans-dioxo conformation upon transition to the tetravalent state (Vitova et al., 2015, 2018, 2020; Podkovyrina et al., 2016). In contrast to that, it becomes clearly evident from the comparison of scans averaged for different time intervals, as depicted in Fig. 6(b), that beam-induced reduction is successfully suppressed when applying the cryostat setup stabilizing a sample temperature of 141.2 ± 1.5 K. Within the noise level, no significant changes in the spectra were detected when irradiating the same spot for up to more than 130 min. Outlook Several technical upgrades and extensions at the ACT station are foreseen to be realized in the near future. Some of them have already been specified and are in the state of procurement: (1) A single-element SDD (Hitachi Vortex-EX60, USA) will be added to the detector pool to allow standard (total) fluorescence-yield XAFS spectra to be recorded in parallel to the HR-XANES measurements. This device will facilitate the precise localization of sample coordinates in the case of multiple-position sample holders, e.g. the cryo-cells described in Section 4. (2) The flexible reusable storage phosphor screens used so far for Laue-type diffraction (WAXS/HEXS) experiments (cf. e.g. Bouty et al., 2021), read out by a laser scanner (Perkin-Elmer Cyclone Plus, USA) following each exposure - a time-consuming procedure preventing the investigation of dynamic processes - will be replaced by an X-ray camera with a large active area (164 mm-diameter fluorescence screen) with a tapered waveguide (Photonic Science sCMOS_37MP, United Kingdom). (3) Based on the excellent performance at the INE-Beamline, a new set of mono- and poly-capillary X-ray lenses (Helmut Fischer GmbH, Germany), covering the tender to hard X-ray range, has been ordered to collimate or focus the impinging X-rays, increasing angular resolution and flux density for Laue-type diffraction measurements as well as enabling spatially resolved µ-XAFS/XRF (optionally in confocal detection mode) and µ-HR-XANES measurements with spot sizes down to 10 µm (FWHM). The capillary optics will be positioned and precisely aligned in the beam using a hexapod microrobot (Physik Instrumente H-811, Germany). Five years after its initial commissioning, the ACT station at the CAT-ACT wiggler beamline has been developed into a unique X-ray spectroscopy station for the investigation of various radionuclide materials with state-of-the-art speciation techniques, strongly focusing on 'flux-hungry' photon-in/photon-out techniques. The outlined modifications and improvements will further increase its already high flexibility and sensitivity in the near future. The ACT station - although officially situated at a KIT in-house large-scale research facility - may be accessed by external users through direct cooperation with KIT-INE (https://www.ine.kit.edu).
6,494.6
2022-01-01T00:00:00.000
[ "Physics", "Materials Science" ]
Measuring holographic entanglement entropy on a quantum simulator Quantum simulation promises to have wide applications in many fields where problems are hard to model with classical computers. Various quantum devices on different platforms have been built to tackle problems in, say, quantum chemistry, condensed matter physics, and high-energy physics. Here, we report an experiment towards the simulation of quantum gravity by simulating the holographic entanglement entropy. On a six-qubit nuclear magnetic resonance quantum simulator, we demonstrate a key result of the Anti-de Sitter/conformal field theory (AdS/CFT) correspondence: the Ryu-Takayanagi formula is verified by measuring the relevant entanglement entropies on the perfect tensor state. The fidelity of our experimentally prepared six-qubit state is 85.0% via full state tomography and reaches 93.7% if the signal decay due to decoherence is taken into account. Our experiment serves as the basic module for simulating more complex tensor network states that explore the AdS/CFT correspondence. As the initial experimental attempt to study AdS/CFT via quantum information processing, our work opens up new avenues for exploring quantum gravity phenomena on quantum simulators. INTRODUCTION The study of quantum systems requires an exponential amount of resources on conventional computers due to the exponentially growing dimensionality of Hilbert spaces, which makes it impossible to model even with supercomputers. Quantum simulators, conceived by Feynman in 1982, 1 are special-purpose devices designed to imitate the behaviors or properties of other, less accessible quantum systems. 2,3 Over the past few years, proof-of-principle experiments have been realized in simulating quantum phase transitions, 4,5 topological order, 6,7 molecular energies, 8,9 quantum chaos, 10,11 and so on. However, there exists a significant field, quantum gravity, that has never been explored by experimental quantum simulation. Many important ideas such as the holographic principle and the Anti-de Sitter/conformal field theory (AdS/CFT) correspondence remain unexplored in experiment. The recent development of a discrete version of the AdS/CFT correspondence in terms of tensor networks (TN) motivates us to study the AdS/CFT correspondence on quantum simulators. In this work, we take first steps toward the simulation of quantum gravity on a 6-qubit nuclear magnetic resonance (NMR) quantum processor, where a rank-6 perfect tensor that forms the building block of complex TNs is realized with high accuracy. We start from a basic introduction to the AdS/CFT correspondence. The AdS/CFT correspondence has been one of the most prominent approaches towards a quantum theory of gravity for over two decades. 12,13 It is the most successful realization of the holographic principle to date, stating that the quantum gravity theory in the bulk anti-de Sitter spacetime is equivalent to a quantum conformal field theory on the lower-dimensional boundary of the spacetime. The AdS/CFT correspondence has recently become a bridge connecting quantum gravity to quantum information theory, 14,15 which inspires revolutionary ideas of developing quantum gravity using the methods of quantum information and entanglement.
A key result in this perspective is the holographic entanglement entropy characterized by the Ryu-Takayanagi (RT) formula, which relates the entanglement entropy of the boundary quantum system to the bulk geometry: S_EE(A) = Ar_min / (4 G_N), (1) where S_EE(A) is the entanglement entropy of a (d − 1)-dimensional boundary region A, while Ar_min is the area of the bulk (d − 2)-dimensional minimal surface anchored to A. [16][17][18] G_N is the Newton constant. See Fig. 1a for a brief illustration. Recently, a discrete version of AdS/CFT has been realized on a type of lattice called a TN, [19][20][21][22] making it possible to demonstrate it on a quantum simulator device in practice. In general, TN states are ways of rewriting a many-body wave function in terms of contractions of tensors, aiming at obtaining the ground states of interacting many-body Hamiltonians in a numerically efficient way. As a key observation related to AdS/CFT, the TN state has an emergent bulk dimension built by the layers of tensors, making it an ideal ground for manifesting AdS/CFT in many-body systems. Indeed, theoretical studies have found that a TN made of perfect tensors (PT) can demonstrate interesting holographic properties. In particular, the entanglement entropy of a perfect tensor TN gives a discrete realization of the above RT formula. 20 In this work, we demonstrate the RT formula on a quantum simulator that simulates a PT of rank 6. Using a six-qubit quantum register in the NMR system, we create the rank-6 PT and subsequently measure its holographic entanglement entropy. The experimental results demonstrate the RT formula if the decoherence effect is taken into account. As the rank-6 PT serves as the building block to construct the entire TN, our experiment also opens up a new and practical way of studying AdS/CFT and the holographic principle at large. Perfect tensors The TN that we focus on is shown in Fig. 1b, where each hexagon represents a special six-qubit state |ψ⟩. |ψ⟩ is called a PT if and only if any three-qubit subsystem out of the six is maximally entangled with the rest. It is shown that, for a TN made of the PT, its entanglement entropy is holographic and gives the discrete RT formula on the lattice. Actually, the entanglement entropy of such a TN equals the minimal number of links cut by the virtual surfaces anchored to the boundary, as illustrated in Fig. 1b. To prove the above statement, we first introduce the form of the rank-6 PT, which is the building block of the TN. Given the single-qubit Hilbert space H ≅ C^2, a rank-6 PT |ψ⟩ is a state in H^⊗6 such that, for any bipartition of qubits m + k = 6, the entropy of the reduced density matrix is maximal. [Fig. 1 caption: (a) A sketch of the RT formula. The hexagonal tiling indicates that the disk is a 2-dimensional AdS space. The red solid arc in the bulk is the minimal surface (a line in this case) anchored to the two ends of a chosen boundary region A. (b) A discretization of (a) by a tensor network comprised of rank-6 tensors. Each hexagonal node represents a rank-6 tensor state |ψ⟩ ∈ H^⊗6, and the collection of all such nodes corresponds to the tensor product of all |ψ⟩'s. Each link ℓ represents a maximally entangled state. Connecting one leg of the node to a link corresponds to taking the inner product in H. The dangling legs are physical qubits in the many-body system. The red dashed arc illustrates the virtual surface S anchored to region A, which cuts a minimal number of links.] Assuming m ≥ k, and labeling the orthonormal bases in H^⊗m and H^⊗k by |α⟩ and |i⟩
respectively, a PT can be written as |ψ⟩ = 2^(−k/2) Σ_i |i⟩ ⊗ |φ_i⟩ with ⟨φ_i|φ_j⟩ = δ_ij. (2) In other words, the reduced density matrix ρ^(k) obtained by tracing out the m qubits is proportional to the identity matrix, whose entanglement entropy is simply k, the number of remaining qubits. [Fig. 1 caption, continued: (c) Rank-6 PT from the TN with the minimal number of cuts equal to three. The six legs represent six qubits. Three qubits are at the boundary and the other three are bulk qubits. This is the model realized in our experiment.] In this Letter, we use the superscript (k) to represent the k-qubit subsystem. With the rank-6 PT (explicit form in Appendix A; see Supplemental Information for a detailed description of the theory and experiment) in hand, the TN state illustrated in Fig. 1b is constructed as follows. Each internal link ℓ represents a two-qubit maximally entangled state |ℓ⟩, whose two qubits are associated respectively with the two end points of ℓ. If we denote by |ψ(n)⟩ the PT associated with the hexagon node n, the total TN state |Ψ⟩ in Fig. 1b is written in a (partial) inner product form, |Ψ⟩ = (⊗_ℓ ⟨ℓ|) ⊗_n |ψ(n)⟩. (3) The inner product takes place at the end points of each internal link ℓ, between one qubit in |ℓ⟩ and the other in |ψ(n)⟩. The qubits in |ψ(n)⟩ not participating in the inner product are boundary qubits corresponding to the dangling legs, and these boundary ones are actually physical qubits, indicating that |Ψ⟩ is a state on the boundary. We then pick a boundary region A which collects a subset of the boundary qubits, as shown in Fig. 1b. The reduced density matrix ρ_A is computed by tracing out all boundary qubits outside A. Initially, this partial trace boils down to computing the reduced density matrices of the individual tensors closest to the boundary. By applying Eq. (2) and noticing that |ℓ⟩ is maximally entangled, the trace computation can be effectively pushed from the boundary into the bulk, meaning that the partial trace on the boundary is now equivalent to computing the reduced density matrix of the PT inside the bulk (see Supplemental Information for a detailed description of the theory and experiment). Once again, we can apply Eq. (2) and push the trace further inside. This iteration procedure is repeated until the trace reaches S in Fig. 1b, where Eq. (2) is no longer valid, as the number of qubits participating in the trace (the number of links cut by S) is less than three for each tensor. We have now presented a sketch of how to calculate the entanglement entropy of ρ_A via Eq. (3), and direct readers to Appendix B (see Supplemental Information for a detailed description of the theory and experiment) for a concise proof using the graphical computation of TN. Firstly, tr(ρ_A) is found to be determined by the number of qubits on S, i.e., the number of links cut by S. Moreover, the product ρ_A^2, involving the inner product of the boundary qubits in A, gives ρ_A^2 ∝ ρ_A. Note that we have ignored all numerical prefactors, but they all cancel when calculating tr(ρ_A^n)/(tr ρ_A)^n in the entanglement entropy. As a result, the von Neumann entropy gives (see Supplemental Information for a detailed description of the theory and experiment) S_EE(A) = lim_{n→1} [1/(1 − n)] log2 [tr(ρ_A^n)/(tr ρ_A)^n] = minimal number of cuts by S. (5) The above result is a discrete version of the RT formula in Eq. (1). The 'minimal number of cuts' represents the minimal area Ar_min (in units of the Planck scale) in the RT formula. The bulk surface S with minimal area emerges effectively from the entanglement entropy of the TN state.
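The defining property of the rank-6 PT, maximal entanglement across every bipartition, can be checked numerically for any candidate six-qubit state vector. The sketch below (plain Python/NumPy) computes the von Neumann entropy of every k-qubit subsystem; the state itself is left as a placeholder, since the explicit PT of the paper is given only in its appendix and is not reproduced here.

# Sketch: check the perfect-tensor property of a 6-qubit state vector.
# 'psi' is a placeholder; the explicit rank-6 PT used in the paper is not reproduced.
import itertools
import numpy as np

def subsystem_entropy(psi, keep):
    """Von Neumann entropy (in bits) of the qubits listed in 'keep' for a 6-qubit state."""
    n = 6
    psi = psi.reshape([2] * n)
    traced = [q for q in range(n) if q not in keep]
    perm = list(keep) + traced                       # kept qubits first, traced qubits last
    m = np.transpose(psi, perm).reshape(2 ** len(keep), 2 ** len(traced))
    rho = m @ m.conj().T                             # reduced density matrix of 'keep'
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def is_perfect(psi, tol=1e-6):
    """True if every k-qubit subsystem (k <= 3) is maximally mixed, i.e. S(k) = k."""
    for k in (1, 2, 3):
        for keep in itertools.combinations(range(6), k):
            if abs(subsystem_entropy(psi, keep) - k) > tol:
                return False
    return True

# Usage with a placeholder state (NOT the paper's PT): a random state will generally
# fail the test, while a genuine PT satisfies S(k) = min(k, 6 - k) for all subsets.
psi = np.random.randn(64) + 1j * np.random.randn(64)
psi /= np.linalg.norm(psi)
print(is_perfect(psi))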
Equation (5) demonstrates explicitly that the bulk geometry is created holographically by the entangled qubits of the boundary many-body system. It is worth emphasizing that all of the steps in constructing the TN originate from the PT in Eq. (2). Therefore, this rank-6 PT plays the fundamental role in the holographic entanglement entropy and is a key to the emergence of bulk gravity from TN states. If we choose S as shown in Fig. 1c, for which the minimal number of cuts is three, a rank-6 PT is generated in which there are three boundary qubits and three bulk qubits. Here, we demonstrate the emergent gravity program in AdS/CFT for the first time in a six-qubit NMR quantum simulator, by creating the rank-6 PT in Fig. 1c and measuring the relevant entanglement entropies. Experimental implementation of a rank-6 perfect tensor The six qubits in the NMR quantum register are represented by the spin-1/2 13C nuclear spins, labeled 1 to 6 as shown in Fig. 2a, of 13C-labeled dichloro-cyclobutanone dissolved in d6-acetone. All experiments were carried out on a Bruker DRX 700 MHz spectrometer at room temperature. The internal Hamiltonian of this system is H_int = Σ_j π ν_j σ_z^j + Σ_{j<k} (π/2) J_jk σ_z^j σ_z^k, where ν_j is the resonance frequency of the jth spin and J_jk is the J-coupling strength between spins j and k. All parameters, including the relaxation times for each spin, are listed in Appendix C (see Supplemental Information for a detailed description of the theory and experiment). To control the system dynamics, we have external control pulses with four adjustable parameters: the amplitude, frequency, phase, and duration, based on which arbitrary single-qubit rotations can be realized with simulated fidelities over 99.5% (see Supplemental Information for a detailed description of the theory and experiment). A rank-6 PT can be created from |0⟩^⊗6 through the circuit illustrated in Fig. 2b, which involves only Hadamard gates and controlled-Z gates. [Fig. 2 caption: (a) Molecular structure of the 13C-labeled six-qubit quantum processor. The six qubits of the rank-6 PT are mapped to 1 to 6, respectively. (b) Quantum circuit that evolves the system from |0⟩^⊗6 to the PT, constructed from several Hadamard gates (blocks) and controlled-Z operations (lines connecting two dots).] Experimentally, this requires an initialization of the system onto |0⟩^⊗6. However, initializing an NMR processor to |0⟩^⊗6 is based upon the pseudo-pure state technique, which leads to an exponential signal attenuation. Here, we adopt a temporal averaging approach that enables the PT preparation directly from the thermal equilibrium of NMR, while skipping the intermediate pseudo-pure state stage to avoid the above problem, as shown in Appendix C (see Supplemental Information for a detailed description of the theory and experiment). After the creation, we conducted k-qubit (1 ≤ k ≤ 5) quantum state tomography in the corresponding subspaces of the whole system. For simplicity, the cutting of the links was chosen to be contiguous in experiment, i.e., in a cyclic manner. This means that six state tomographies are performed for any given k; e.g., when k = 2 we reconstructed the six cyclically adjacent two-qubit reduced density matrices. Combined with the fact that S^(k) = S^(6−k) for a six-qubit pure state, we have S^(k) = min{k, 6 − k} for the theoretical PT, as shown by the orange dashed line in Fig. 3a. In experiment, however, inevitable errors lead to imperfection and hence impurity in the truly prepared state, so we cannot just measure the k ≤ 3 cases to deduce the other k's.
Therefore, we measured and compared the experimental S^(k) for each 1 ≤ k ≤ 5 (red circles) with the theoretical predictions in Fig. 3a. For each k, the mean and error bar of the experimental S^(k) value are calculated from the six cyclic tomographic results. When k ≤ 3, the measured entanglement entropies match extremely well with the theory; when k > 3, there are notable discrepancies between theory and experiment, which should be primarily attributed to decoherence errors, as discussed in the following. The pulse sequence that creates the PT is around 60 ms long; this is not a negligible length compared to the T2* time (~400 ms) of the molecule, meaning that decoherence will induce substantial errors during the experiments (see Methods). As T2* relaxation is the dominating factor, the off-diagonal terms in the PT density matrix are mainly affected. To estimate this imperfection, we performed full state tomography 23 on the prepared state and obtained ρ_e. The real part of ρ_e is depicted in the right panel of Fig. 3b, by projecting each element onto a two-dimensional plane. As a comparison, the theoretical PT ρ_pt = |ψ⟩⟨ψ| is placed in the left panel of Fig. 3b. In fact, the diagonal elements of ρ_e are almost the same as those of ρ_pt, but the off-diagonal elements are lower due to the T2* errors. The state fidelity between ρ_e and ρ_pt, defined in Eq. (7), is about 85.0%. Direct observations of ρ_e in terms of NMR spectra are also shown in Fig. 3c, where the experimental and simulated spectra match closely if the experimental signal is rescaled by 1.25 times to compensate for the decoherence effect. Although the reconstructed state ρ_e is prone to decoherence errors, the entanglement entropies for the cases k ≤ 3 in Fig. 3a are still in excellent accordance with the theory. The reason is that, when we trace out three or more qubits, the reduced density matrix is predicted to be the identity according to Eq. (2), so the measured k ≤ 3 reduced density matrices are almost insensitive to the imperfection of the off-diagonal elements in ρ_e. However, when k > 3, the reduced density matrix is no longer the identity, meaning that the imperfect off-diagonal terms in ρ_e start to contribute to S^(k). As a result, in Fig. 3a we have S^(4) = 2.91 ± 0.20 and S^(5) = 2.32 ± 0.25 (red circles), respectively, which are quite distant from the theoretical curve. After numerically simulating and compensating for the decoherence errors 24,25 during the PT creation, we found that the two entanglement entropies S^(4) and S^(5) come much closer to the theory, now being 2.27 ± 0.46 and 1.37 ± 0.28 (blue squares), respectively. We also calculated the fidelity between the rescaled experimental state and ρ_pt via Eq. (7), and found it improved to 93.7%, which is 8.7% greater than that of ρ_e. [Fig. 3 caption: (a) Experimental results are represented by the red circles, where S^(4) and S^(5) do not fit very well. If the signal decay due to decoherence is taken into account, the experimental results are rescaled to the blue squares, which fit much better. As an upper-bound reference, the maximal entropy of a k-qubit subsystem is also plotted (green dotted line) by assuming a six-qubit identity. (b) Density matrices of the theoretical rank-6 PT ρ_pt (left) and the experimentally reconstructed state ρ_e (right) on a two-dimensional plane. The rows and columns are labeled by the six-qubit computational basis from |0⟩^⊗6 to |1⟩^⊗6, respectively.]
[Fig. 3 caption, continued: (c) Direct observation of ρ_e in the NMR spectra (red), with probe qubits C1 (top) and C4 (bottom), respectively. The simulated spectra of the PT are also shown in blue. For better visualization, the experimental signals are rescaled by 1.25 times to neutralize the decoherence error.] DISCUSSION The RT formula, or explicitly the TN built from the rank-6 PT in Fig. 1b, tells us how to deduce the bulk geometry using the entanglement on the boundary. The implicit condition here is that the global TN state is pure. Otherwise, the information on the boundary cannot uniquely (up to local unitaries) determine the bulk geometry; e.g., it cannot specify whether the TN state is the maximally mixed identity or the PT, since both give the same entanglement entropies on the boundary (meaning k ≤ 3), as shown in Fig. 3a. In experiments, however, under realistic noise it is difficult to guarantee the purity of the truly created states, because experimental procedures inevitably involve errors, in particular decoherence, that render the TN states mixed. In our experiment on a six-qubit PT, a building block of a complex TN, we have achieved 85% fidelity, which is already state-of-the-art; however, there is still some non-negligible decoherence due to the T2* errors. Therefore, our results successfully test the RT formula up to the decoherence. The simulation of the holographic entanglement entropy can be generalized to TNs with multiple perfect tensors. In Section E of the supplemental material, we demonstrate a simulation of the holographic entanglement entropy on a TN with seven tensors. The key to performing the simulation is that measuring the Rényi entropies of the TNs can be reduced to measuring the reduced density matrices of ρ_e and their products and traces, while ρ_e is simulated experimentally. The result of the simulation demonstrates agreement with the RT formula, up to the experimental noise in ρ_e. The simulation can be generalized to other TNs. In conclusion, our work is an endeavor to demonstrate on a quantum simulator the RT formula (the discrete PT version) in the AdS/CFT correspondence. We utilize a temporal averaging technique to create the rank-6 PT and perform full state tomography to reconstruct the experimental state. This is also the largest full-state characterization in an NMR system to date. Although the imperfection of the created state due to decoherence errors means that the holographic entanglement entropy does not exactly agree with the theoretical prediction, we simulate and compensate for this type of error under the realistic experimental environment, and demonstrate the accordance between theory and experiment thereafter. As the first step towards exploring the AdS/CFT correspondence using a quantum simulator, our work provides a valid experimental demonstration of studying quantum gravity in the presence of realistic noise. Decoherence simulation To numerically simulate the decoherence effect in our six-qubit system, we made the following assumptions: the environment is Markovian; only the T2* dephasing mechanism is taken into account, since the T1 effect is negligible in our circuit; the dephasing noise is independent between all qubits; and the dissipator and the total Hamiltonian commute in each pulse slice, as Δt = 10 μs is small.
With these assumptions, we simplified and solved the master equation in two steps for each Δt: evolve the system by the propagator calculated from the internal and control-pulse Hamiltonians, and subsequently apply the dephasing factors according to the coherence orders for Δt, which amounts to an exponential decay of the off-diagonal elements in the density matrix. For each of the 64 experimental runs, we simulated the above process and obtained the signal decay due to decoherence. From the experimental result, we then compensated for this decay, and a new state in which the decoherence effect was taken into account was thus obtained. The fidelity between the rescaled experimental state and the theoretical PT is thereby boosted to 93.7%.
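A minimal sketch of such a sliced update is given below. It assumes independent, equal-rate dephasing of the qubits (the per-qubit T2* values and the propagator of each pulse slice are placeholders) and damps each off-diagonal element according to how many qubits distinguish the two basis states, which is one simple way to realize dephasing factors tied to the coherence orders.

# Sketch: one decoherence slice of the simplified master-equation treatment.
# Assumptions (not from the text): equal T2* for all qubits and independent
# single-qubit dephasing, so each off-diagonal element is damped once per
# qubit on which the two computational basis states differ.
import numpy as np

def dephasing_slice(rho, dt, t2_star, n_qubits=6):
    """Apply one dt-long dephasing step to an n-qubit density matrix."""
    dim = 2 ** n_qubits
    damp = np.empty((dim, dim))
    for i in range(dim):
        for j in range(dim):
            differing = bin(i ^ j).count("1")     # qubits in superposition across (i, j)
            damp[i, j] = np.exp(-dt * differing / t2_star)
    return rho * damp

def evolve_slice(rho, U, dt, t2_star):
    """Unitary propagation for one pulse slice followed by the dephasing factors."""
    rho = U @ rho @ U.conj().T
    return dephasing_slice(rho, dt, t2_star)

# Usage idea: loop evolve_slice over the ~60 ms pulse sequence in dt = 10 us steps,
# with U taken from the internal-plus-control Hamiltonian of each slice.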
4,963
2019-04-23T00:00:00.000
[ "Physics" ]
Techno-Economic Analysis of Stand-Alone Hybrid Energy System for the Electrification of Iran Drilling Oil Rigs This paper explores the potential use of a stand-alone hybrid wind/solar energy system for the electrification of the calibrating equipment of a drilling oil rig in Iran. To achieve this, different hybrid energy system configurations based on the calibration equipment demand are proposed. This study puts emphasis on the energy production and cost of energy from both the wind turbine and the photovoltaic (PV) array in the hybrid system. In addition, to make conditions more realistic, real meteorological data are used in the HOMER software to perform the technical and economic analysis of the hybrid system. Results indicate that the PV array contributes more electricity production than the wind turbine generator if both a wind turbine and a PV array are utilized in the wind/solar hybrid system. Moreover, results show that the operational cost will be reduced by the suggested hybrid system. Introduction Oil is the most important energy source in the world, and oil drilling rigs are used for its extraction. However, oil rig locations are not permanent and change with oil well location [1]. Iran's oil rigs are scattered in the desert south-west and in inaccessible regions such as Masjed Soleyman [2]. Generally, electrification of oil drilling equipment is done via the grid [3]. However, grid electrification and extension of the grid require major investment that does not generally fit dispersed regions with medium-to-low energy demand [4]. As a result, several technological alternatives are being implemented, mainly diesel generators, micro hydro-turbines, wind-power generators, photovoltaic systems or some hybrid configurations [5]. Although the use of diesel generators is widespread throughout the world, they have high maintenance and operation costs. In addition, the environmental concerns about fossil fuels at the global and local levels have been taken into account in recent years [6]. Hybrid power systems combine the advantages of conventional and renewable power conversion systems. Renewable power sources, in contrast to conventional power sources, offer independence from fossil fuel and hence independence from world fuel pricing, while increasing the sustainability of the power supply. Conventional power sources, on the other hand, are independent of environmental conditions (irradiation, wind velocity, etc.) [7]. They can assist the renewable sources in situations of deficient environmental circumstances, thereby increasing the reliability of the entire power supply system. With this objective, hybrid energy is the best solution for electricity generation in remote areas such as oil drilling rigs. On drilling oil rigs in Iran, the calibrating units are fed by an isolated diesel generator supported by a UPS [8]. The capacity of this generator is negligible compared with the main one, and it must be separated from the main generator because the controlling units are very sensitive to changes in power [9]. The main contribution of this paper is to identify a configuration, among a set of systems, that meets the desired system reliability requirements with the lowest electricity unit cost for the electrification of the calibrating equipment of a drilling oil rig in Masjed Soleyman. In this regard, to perform the analysis with HOMER, different combinations of PV, batteries, and wind turbine were selected in order to identify the optimal combination for the hybrid system.
Masjed Soleyman Wind and Solar Characteristics The availability of renewable energy is the most important factor in energy utilization. Next, economical operation and reliability are evaluated precisely. In the desired location, wind energy, solar energy and solar-thermal energy are accessible. This study is focused on a wind/PV/battery system to support the energy demand. Wind Speed Hourly wind speed and solar radiation data of the desired area, measured over one year, are extracted from the NASA website [9]. Table 1 portrays the average wind speed in January, which is 4.8 m/s. The wind speed remained constant in February and reached 4.9 m/s in March. The average wind speed decreases monthly from 4.5 m/s in April to 3.5 m/s in August. However, it has an upward trend from September, with 3.9 m/s, and reaches 4.8 m/s in December. The average wind speed during the year is 4.3 m/s. It is obvious that the average wind speed is higher in winter than in summer. Moreover, the maximum wind speed in January is 6.5 m/s, but it is at a minimum in June, at a bit more than 4.7 m/s. The maximum wind speed usually occurs between 11:00 and 16:00. Therefore, the wind turbine generates maximum power around the afternoon, and the power generation of the wind turbine from midnight until dawn is significantly reduced. The Weibull distribution function is used to describe the wind speed in the HOMER software [10]. This function contains the "c" and "k" parameters, which are used in the software to extract the wind profile [11]. The Weibull coefficients for the desired location are k = 1.98 and c = 4.87 m/s. The Weibull curves are widely used in statistical analysis. In wind energy analysis, the Weibull distribution is used to represent the wind speed probability density function, commonly referred to as the wind speed distribution. The Weibull distribution function is given by [12]: f(v) = (k/c)(v/c)^(k−1) exp[−(v/c)^k], with the corresponding mean wind speed v̄ = c Γ(1 + 1/k), where v̄ is the average wind speed, Γ(·) is the gamma function, "c" is the Weibull scale parameter, "k" is the unitless Weibull wind shape parameter, and v is a particular wind speed. The calculation of the Weibull coefficients is shown in Figure 1. The green bars illustrate the wind data and the red line is the best-fit Weibull distribution. The wind speed frequency starts at less than 1.00%, but jumps up to 9.00% at 3.5 m/s. The wind speed frequency then fluctuates and decreases to zero after 14 m/s. The best-fit Weibull coefficients are calculated by the HOMER software. Solar Radiation Ratio Figure 2 illustrates the daily radiation (bar graph) and the clearness index (red line) simultaneously. The average daily radiation is a bit less than 2 kWh/m2/day in January. It gradually increases from January and reaches a high point of 5.54 kWh/m2/day in July. However, the daily solar radiation then decreases monthly to 0.91 kWh/m2/day in December. Hence, the average daily solar radiation is slightly more than 3.24 kWh/m2/day. Moreover, the clearness index shows that the atmosphere is clear in January and December. The index is 1.00 for February and November, yet it is around 0.8 in March. The clearness index goes down monthly and drops to 0.6 in May, which shows the dirtiest atmosphere of the year. June and July are not clear months in the desired location, and the index is around 0.65. The clearness index then gradually increases and reaches 0.85 in October. Thus, the daily radiation is reflected by the clearness index (Figure 2). Because of the oscillation in the maximum power output of the wind turbine and PV panel, the authors utilize a hybrid system to compensate and improve the output power. The system lifetime is considered as the lifetime of the solar array, which is 20 years.
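As a quick consistency check of the fitted Weibull parameters quoted above, the sketch below evaluates the Weibull mean-speed relation with k = 1.98 and c = 4.87 m/s; it reproduces the stated annual average of about 4.3 m/s.

# Sketch: mean wind speed implied by the fitted Weibull parameters.
from math import gamma, exp

k, c = 1.98, 4.87                 # shape (-) and scale (m/s) from the text
v_mean = c * gamma(1.0 + 1.0 / k)
print(f"mean wind speed = {v_mean:.2f} m/s")   # ~4.32 m/s, matching the quoted 4.3 m/s

# The corresponding Weibull probability density at a particular speed v:
def weibull_pdf(v):
    return (k / c) * (v / c) ** (k - 1.0) * exp(-((v / c) ** k))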
Oil Drilling Rig Specification The desired location is an oil rig unit located at Masjed Soleyman, at 32° North latitude, 49.3° East longitude and 867 meters above sea level. Oil rig units are built to extract oil and gas from the earth's layers. The oil output from the wells enters refinery units and, after a few steps, the gas is separated from the oil. The gas and oil are then sent via a pipeline to consumers or for export. Oil and gas exploitation units have several power consumers. Table 2 shows the electrical consumption of the oil and gas operation units. DC power is used in oil rig units for the calibrating equipment. The calibrating equipment needs constant power without fluctuations, because a DC power failure will cause disruption of the oil extraction process. The electrical consumption of the oil drilling rig covers three applications: calibration, the oil refining unit and public applications. The calibrating equipment uses a DC voltage of 24 V, while the oil refining units use 380 V AC. Moreover, public units such as pumps, cooling and heating, AC equipment and lighting use 220 V AC. Calibrators' Consumption of the Oil Rig The power consumption in the oil rig unit varies with the controlling technology. However, the voltage and current of the calibration equipment do not exceed a few watts per day. The selected oil rig's calibrator loads are listed in Table 3. This equipment is used for the extraction of 30,000 barrels of oil per day. Transmitters, stabilizers, converters, flow computers, gas detectors, relays and switches use 24 V DC in calibration. The oil calibrating unit contains a maximum of 140 relays and switches that consume a bit more than 0.8 kW. However, there are only 5 flow computers, and their energy demand is 0.56 kW. On the other hand, the maximum power consumed by the 110 transmitters is more than 1.4 kW, while the minimum power, slightly less than 0.35 kW, is used by the 30 converters. The transmitters, gas detectors, and relay and switching equipment draw effective powers of around 1.4 kW, 1.2 kW and 0.8 kW, respectively. Combined, the equipment consumes close to 6 kW per day. Economic Optimization of the Hybrid System This study aims to economically optimize the hybrid system, increase reliability, reduce the net investment, reduce greenhouse gas emissions and use renewable energy on the oil rig. The system cost is defined as the sum of the PV cost (C_PV), wind turbine cost (C_WT), battery cost (C_BAT) and converter cost (C_CONV) [13]: C_SYS = C_PV + C_WT + C_BAT + C_CONV. (4) The cost of each system component is calculated as: C_component = N (C_C + K C_R + C_O&M), (5) where N is the number/size of the component, C_C is the capital cost, C_R is the replacement cost, K is the number of replacements, and C_O&M is the operation and maintenance cost. Result and Discussion The desired oil drilling rig is located onshore in a hot and windy region far from the national grid; solar panels, a wind turbine and batteries are selected to feed the calibrating equipment. The specifications of the wind turbine (BWC Excel-R), solar panels (PV), battery (S6CS25P) and converter are given in Tables 4 to 6.
Calibrators require DC power to control circuits and provide protection for instruments. The output power of the wind turbine is a function of the wind speed, while the output of the solar panels is a function of the solar radiation, cloud patterns, air pollution, etc. Moreover, the output voltages of the selected generation devices are not 24 V DC. Hence, a converter between the buses is used to control and regulate the receiving-end power and voltage. Figure 3 shows the scheme of the simulated system in the HOMER software. As listed in Table 4, the battery size is 1156 Ah, while the capital and replacement costs for the battery are around $830 and $550, respectively. Moreover, the operation and maintenance (O&M) expenditure is only $15/yr. The converter rating is chosen as 1.5 kW. This type of converter requires $700 for capital cost and $700 for replacement as well. Finally, its O&M cost is $10/yr. Although the converter lifetime is 15 years, the lifetime throughput of the chosen battery is somewhat less than 1000 kWh. The PV panel selected is an Aria Solar module. The peak power of the module is 120 W; the maximum output current and voltage are 4.88 A and 24.6 V, respectively, as tabulated in Table 5. The important electrical characteristics of the wind turbine are reported in Table 6; its rated output power and voltage are 7.5 kW and 48 V DC, respectively. Furthermore, the wind turbine generator uses a permanent-magnet alternator. Two noteworthy configurations among the 627 simulated cases are explained in this article, focusing on economical price and on finding the most reliable hybrid electrification. The economic scenario is selected on the basis of the lowest Net Present Cost (NPC) and the minimum initial capital cost. On the other hand, the reliable system is chosen based upon the atmospheric conditions and the selection and utilization of all feasible hybrid equipment at the desired location. Economic Hybrid System In the economic hybrid scenario, a wind turbine and five batteries are chosen. No converter connects the described system to the calibrator equipment of the drilling oil rig, since the wind turbine generates a DC voltage and the calibrating equipment of the oil rig uses DC. However, the inherent fluctuation in the wind turbine voltage, which depends on the monthly average wind speed, forces hybrid-system customers to use a converter between the generation side and the load in practice. Moreover, the wind turbine needs a rectifier for its maximum generation of 7.5 kW, which is more than the demand. The initial cost of the reported system is slightly more than $19,150 and the system operating cost will be $555 per year. The cost of electricity generation by the wind turbine and batteries is $0.938/kWh, which is the minimum price compared with the 672 other hybrid cases evaluated by the HOMER software. Finally, the total NPC for the economic system is around $26,260. The average power generation of the wind turbine is shown in Figure 4. The turbine generates a bit more than 1.5 kW in January, and the generation increases gradually towards March, when it is slightly more than 1.55 kW. The energy generation of the wind turbine decreases from April until July, from 1 kW to 0.7 kW. The wind turbine electricity generation slope is upward for August and September, with 0.53 kW and a bit less than 0.8 kW, respectively. The wind turbine generation in March is the maximum generation of the year. The power generation increases in October and reaches 1.3 kW. The reduction continues in November and December. The minimum generation is a bit less than 0.6 kW, in July and August.
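To make the relation between the reported initial cost, operating cost, NPC and cost of energy more concrete, here is a small HOMER-style sketch. The discount rate, project lifetime and annual energy served are assumptions chosen for illustration (they are not stated in the text), so the numbers only approximately reproduce the reported $0.938/kWh and $26,260.

# Sketch: net present cost (NPC) and levelized cost of energy (COE), HOMER-style.
# Assumed inputs (not given in the text): 6% real discount rate, 20-year project
# lifetime, and roughly 6.7 kWh/day of energy served by the calibrating load.
capital = 19150.0          # $ initial cost of the economic scenario (from the text)
operating = 555.0          # $/yr operating cost (from the text)
i, years = 0.06, 20        # assumed discount rate and project lifetime

crf = i * (1 + i) ** years / ((1 + i) ** years - 1)   # capital recovery factor
npc = capital + operating / crf                        # present worth of all costs
annual_energy = 6.7 * 365                              # kWh/yr served (assumed)
coe = npc * crf / annual_energy

print(f"CRF = {crf:.4f}, NPC = ${npc:,.0f}, COE = ${coe:.3f}/kWh")
# With these assumptions NPC is ~ $25,500 and COE ~ $0.91/kWh, close to the
# reported $26,260 and $0.938/kWh.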
The maximum battery output power is calculated by P_bat = I_max × V, (6) where I_max is the maximum discharge current and V is the input voltage. The battery output at 24 V DC corresponds to 27.744 kWh, and it will be 4 times more economical for the described system. In the worst-case scenario of July and August, the wind turbine generates somewhat less than 0.6 kW and one of the batteries can cover 5.2 kWh. Figure 6 focuses on the cash flow of the initial cost of the economic system scenario. It shows that the wind turbine capital cost is $15,000 and that it requires a bit less than $5,424 in replacement cost. The salvage value of the wind turbine is $10 and its total cost is around $19,670. The capital cost of the batteries is 3 times less than that of the wind turbine; however, the battery replacement cost is only 50% less than that of the wind turbine. The operation and maintenance (O&M) cost is $770 for the batteries and $259 for the wind turbine. The total salvage value of the batteries is $593. Wind turbines are designed to exploit the wind energy that exists at a location. Aerodynamic modelling is used to determine the optimum tower height, control systems, number of blades and blade shape. Wind turbines convert wind energy to electricity for distribution. Conventional horizontal-axis turbines can be divided into three components: 1. The rotor component, which is approximately 20% of the wind turbine cost, includes the blades for converting wind energy to low-speed rotational energy. 2. The generator component, which is approximately 34% of the wind turbine cost, includes the electrical generator, the control electronics, and most likely a gearbox (e.g. a planetary gearbox), adjustable-speed drive or continuously variable transmission component for converting the low-speed incoming rotation to high-speed rotation suitable for generating electricity. 3. The structural support component, which is approximately 15% of the wind turbine cost, includes the tower and the rotor yaw mechanism. Reliable Hybrid System In this scenario, both wind and solar energy are utilized to provide energy for the calibration equipment of the oil drilling rig. This system is highlighted in Figure 5. The addressed system consists of a DC wind turbine, a PV panel and two batteries that are connected through a converter to the load. The initial cost of this hybrid system is slightly less than $22,700, which is 1.18 times that of the proposed economical system. The operating cost is $516 per year, which is close to the first scenario. Moreover, the cost of electricity generation and the total NPC are 1.11 times those of the proposed economical scenario, reaching $1.046/kWh and a bit less than $29,000, respectively. Figure 6 portrays the wind/PV average energy generation per year. The average generation is 1.8 kW in January and jumps to a maximum generation of a bit less than 2.0 kW in March. The generation is reduced to 0.9 kW by July, yet it fluctuates in August and reaches 1.8 kW in November and December. Although wind power penetration is dominant in all months, it is not constant and oscillates significantly. The average wind generation is 1.1 kW over the year, reaching a peak in March with 1.7 kW. Furthermore, the minimum wind generation occurs in July, with 0.65 kW. The PV panel generation in July is 0.25 kW. Hence, July is seen as the worst-case scenario for this hybrid system. Two batteries will support the proposed system with a maximum stored energy of 55.488 kWh. In this proposed reliable hybrid system, the PV penetration is 13% and the wind turbine penetration is 87%. Figure 7 illustrates the cash flow of the PV/wind/battery system. This cash flow is similar to the cash flow in Figure 8.
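The battery figures quoted above can be checked with a one-line calculation: the nominal energy of one 1156 Ah battery at the 24 V DC bus matches the 27.744 kWh value, and two such batteries give the 55.488 kWh quoted for the reliable scenario (a simple sketch, with no assumptions beyond the quoted ratings).

# Sketch: nominal battery energy from the quoted capacity and bus voltage.
capacity_ah = 1156          # Ah, from Table 4 as quoted in the text
bus_voltage = 24            # V DC bus of the calibrating equipment

energy_kwh = capacity_ah * bus_voltage / 1000.0
print(energy_kwh)           # 27.744 kWh, matching the value quoted for one battery
print(2 * energy_kwh)       # 55.488 kWh, matching the two-battery reliable scenario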
In this scenario, the highest initial cost is the $15,000 for installation of the wind turbine. The replacement cost is around 3 times less than the capital cost, and the O&M cost is $256. Therefore, the wind turbine total cost is slightly more than $19,600. However, the PV panel cost must be divided by two to compare with the economic scenario, because only one PV panel is installed. Furthermore, the battery expenditure is half that of the previous scenario, because the number of batteries is reduced to two. Hence, the initial capital cost for this system is $22,716, and the total cost, after adding replacement, O&M and salvage, is more than $29,000. Supplying the Equipment from the National Grid In this section, it is assumed that the oil rig is supplied by the national grid. For this purpose, an overhead line or cable is needed for the electrification of the oil rig equipment. The distance of the oil rig to the national grid, the soil properties and the nearest substation determine the initial cost of electrification by the grid. The grid designer is required to specify the number of towers and insulators, the type of tower and insulator, and the length of cable/wire. Finally, a step-down transformer will reduce the voltage to the desired level and a converter will convert it to DC power. The information and prices of the related equipment are extracted from the Iran Ministry of Power and Energy (distribution voltage). The initial cost of a normal 25 kVA transformer is around $4,000 and the initial cost of the national grid is $50,000/km. Now, assume that the oil rig distance to the national grid is "L". The system cost (C_SYS) is the summation of the national grid cost (C_NG) multiplied by the length of the national grid, the transformer cost (C_T) and the converter cost (C_CON), as follows: C_SYS = C_NG × L + C_T + C_CON. (7) The capital cost analysis illustrates that the national grid is very expensive for the duration of oil drilling and could even be replaced by an expensive hybrid electrification option such as two wind turbines. Moreover, the O&M cost must be added to the calculated value. From the calculation, the largest cost is the grid construction, and this grid is unwanted once the oil extraction process is complete. Single Generator and Pollution Analysis This section discusses the comparison of the electrification of the oil calibrating equipment with a diesel generator instead of the hybrid system. A generator is selected to provide 6 kW for the calibrating equipment, in series with a converter. The initial cost of the AC generator is $2,400 and the operating cost expenditure is $8,600/yr. Moreover, the electricity cost by this method is $4.401/kWh. The initial cost of the converter is $800 and the system total NPC is $113,137. Although the initial cost of the diesel generator is less than those of the hybrid system and the grid, the operating cost of the diesel generator in comparison with the suggested system makes customers reluctant. Furthermore, this generator burns 8147 liters of diesel per year to generate the 6 kW of electrical power. Analysis of the pollutant emissions by the HOMER software shows that the desired hybrid system decreases greenhouse gases. The rate of carbon dioxide emission for supplying the calibrator equipment of the oil drilling rig (6 kW and 21.9 MWh/yr) by the diesel generator is portrayed in Table 7.
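Using Eq. (7) with the quoted unit prices, one can estimate the grid distance beyond which a grid connection is more expensive than the proposed hybrid system. In the sketch below the converter price is taken from the diesel-scenario figure and the comparison value is the economic scenario's NPC, so treat it strictly as an illustrative calculation under those assumptions.

# Sketch: break-even grid-extension distance versus the hybrid system NPC.
# Grid unit costs are from the text; the converter price ($800) is the figure
# quoted for the diesel scenario and is reused here as an assumption.
grid_cost_per_km = 50_000.0   # $/km
transformer_cost = 4_000.0    # $ for a 25 kVA transformer
converter_cost = 800.0        # $ (assumed, from the diesel-generator scenario)
hybrid_npc = 26_260.0         # $ NPC of the economic wind/battery scenario

def grid_capital_cost(L_km):
    """Eq. (7): C_SYS = C_NG * L + C_T + C_CON (capital only, no O&M)."""
    return grid_cost_per_km * L_km + transformer_cost + converter_cost

break_even_km = (hybrid_npc - transformer_cost - converter_cost) / grid_cost_per_km
print(f"break-even distance ~ {break_even_km:.2f} km")   # ~0.43 km
# Any rig farther than about half a kilometre from the grid already favours the
# hybrid option on capital cost alone, before O&M is added.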
The emissions of carbon dioxide and carbon monoxide due to burning the diesel are 21,453 kg/yr and 53 kg/yr, respectively. The rates of the other pollutants, such as nitrogen oxides, sulfur dioxide, unburned hydrocarbons and particulate matter, are 472 kg/yr, 43.1 kg/yr, 5.87 kg/yr and 3.99 kg/yr, respectively. Providing the energy for the calibrating equipment of the oil drilling rig with the hybrid system prevents the release of around 22 tons/yr of environmental pollutants. Hybrid system utilization supports green energy and saves fossil fuel resources for future generations. Conclusion Due to the increase in energy demand, resource limitations and the increasing environmental pollution from fossil fuel combustion, the penetration of renewable energy in generation is growing. This paper focused on a cost-effectiveness analysis of the oil rig calibrators' consumption using the HOMER software. The result analysis showed that two effective scenarios need to be described: an economic scenario and a reliable scenario. In the economic scenario, a wind turbine/battery hybrid system is used to feed the calibration equipment of the drilling oil rig, with an initial cost of $19,165 and a cost of electricity generation of $0.938/kWh. In the other case, a wind/PV/battery system provides the energy. Although the initial cost and the cost of electricity generation increase by around 18%, the hybrid energy use is more reliable. Wind generation supports PV generation between sunset and sunrise. Moreover, the wind/PV/battery hybrid system's dependency on the battery is reduced by 50%. [Figure and table captions: Figure 2. Profile of overall solar radiation. Figure 3. Scheme of the proposed hybrid system. Figure 4. Average power of PV modules. Figure 5. Sensitivity result of the HOMER software. Figure 6. Average power generation of the wind/PV hybrid system. Table 1. Average wind and solar radiation. Table 2. Electrical consumption of oil operation units. Table 3. Oil rig calibrators' consumption. Table 4. Battery and converter. Table 5. ARIA solar module electrical specification. Table 6. Wind turbine specification. Note: HOMER (Hybrid Optimization of Multiple Energy Resources) software, developed by the National Renewable Energy Laboratory (NREL), was used as the simulation and optimisation tool.]
4,859.4
2017-03-01T00:00:00.000
[ "Engineering" ]
Signed in Blood: Circulating Tumor DNA in Cancer Diagnosis, Treatment and Screening Simple Summary An important advance in the diagnostic and surveillance toolbox for oncologists is circulating tumor DNA (ctDNA). This technology can detect microscopic levels of cancer tissue before, during, or after treatment. Various groups from across the globe have published their experiences with the use of ctDNA to either guide therapy or monitor outcomes. The use of ctDNA likely cannot supplant the need for tissue biopsies, but it can complement other diagnostic and therapeutic monitoring mechanisms. Abstract With the addition of molecular testing to the oncologist's diagnostic toolbox, patients have benefitted from the successes of gene- and immune-directed therapies. These therapies are often most effective when administered to the subset of malignancies harboring the target identified by molecular testing. An important advance in the application of molecular testing is the liquid biopsy, wherein circulating tumor DNA (ctDNA) is analyzed for point mutations, copy number alterations, and amplifications by polymerase chain reaction (PCR) and/or next-generation sequencing (NGS). The advantages of evaluating ctDNA over tissue DNA include (i) ctDNA requires only a tube of blood, rather than an invasive biopsy, (ii) ctDNA can plausibly reflect DNA shedding from multiple metastatic sites while tissue DNA reflects only the piece of tissue biopsied, and (iii) dynamic changes in ctDNA during therapy can be easily followed with repeat blood draws. Tissue biopsies allow comprehensive assessment of DNA, RNA, and protein expression in the tumor and its microenvironment as well as functional assays; however, tumor tissue acquisition is costly and carries a risk of complications. Herein, we review the ways in which ctDNA assessment can be leveraged to understand the dynamic changes of the molecular landscape in cancers. Introduction A liquid biopsy is a minimally invasive technique for measuring diagnostically significant tumor-derived markers in body fluids. Although any liquid can be biopsied (e.g., blood, urine, ascites, and cerebrospinal fluid), herein we will be referring to blood biopsies when we speak of liquid biopsies. The types of components that can be interrogated in a liquid biopsy include circulating tumor cells, circulating extracellular nucleic acids (cell-free DNA (cfDNA) and its neoplastic fraction, circulating tumor DNA (ctDNA)), as well as extracellular vesicles (such as exosomes), and a variety of glycoproteins. We will be focused on ctDNA and cfDNA. cfDNA is a broad term that refers to DNA which is freely circulating in the blood but is not necessarily of tumor origin [1]; ctDNA is fragmented DNA in the bloodstream that is of tumor origin and is not associated with cells. The use of next-generation sequencing (NGS) of ctDNA from a blood biopsy has gone, in the last decade, from the unimaginable to the routine. NGS of ctDNA has provided insights into potential genomic-derived treatment options such as identifying novel targets as well as predicting responses to treatments (Figure 1) [2]. Liquid biopsies can also be used to evaluate microsatellite stability/instability (MSI-H) and high tumor mutational burden (TMB-H), both of which are critical parameters for predicting immune checkpoint blockade response [3][4][5].
Further, ctDNA can be exploited to monitor response and predict resistance in some tumors [6][7][8][9][10]. The half-life of ctDNA ranges from 30 min to two hours. Changes in ctDNA can be used to monitor tumors dynamically [11]. Both the concentration of ctDNA and the number of somatic alterations found within a sample have been implicated in some studies as a surrogate for tumor stage and size as well as tumor aggressiveness [12][13][14]. The implementation of diagnostics using ctDNA has been leveraged as a companion diagnostic test, e.g., for detecting EGFR inhibitor-sensitive mutations for the use of erlotinib in non-small cell lung cancer [15]. Even so, ctDNA may provide important risk stratification data [16]. A challenge for the utility of ctDNA is that clonal hematopoiesis of indeterminate potential (CHIP) may confound results; in other words, some presumptive ctDNA mutants may be derived from aberrations in blood cells, particularly those that accompany aging, rather than abnormalities in the tumor, yet the mutational burden from CHIP is low and can be excluded by sequencing healthy control tissue [17]. Herein, we will examine the multiple potential uses of liquid biopsy with NGS of ctDNA in oncology: early diagnosis of cancer; ctDNA as a prognostic variable; measurement of residual disease; discerning molecular alterations that can inform therapeutic decision-making; and monitoring response, resistance, and burden/aggressiveness of disease. Comparison of CTCs, ctDNA, and Tissue DNA Circulating tumor cells (CTCs), ctDNA, and tissue DNA (tDNA) are all potentially exploitable for providing insight and data about tumor genomes (Table 1). Acquisition of CTCs and ctDNA is noninvasive, requiring only a venipuncture, and both are considered to be liquid biopsies. In contrast, tissue DNA requires an invasive biopsy. CTCs are tumor cells shed from growing and dying tumors; their isolation technically requires specialized equipment.
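The half-life quoted above sets the window over which a blood draw reflects current, rather than historical, tumor DNA shedding. Below is a minimal sketch of the implied clearance kinetics, assuming simple first-order exponential decay; the sampling times are illustrative and not from the text:

```python
def ctdna_fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    """Fraction of an initial ctDNA bolus still circulating after t_hours,
    assuming first-order (exponential) clearance."""
    return 0.5 ** (t_hours / half_life_hours)

# Half-life range quoted in the text: 30 min to 2 h.
for t_half in (0.5, 2.0):
    remaining = {t: round(ctdna_fraction_remaining(t, t_half), 4) for t in (1, 4, 24)}
    print(f"half-life {t_half} h -> fraction remaining at 1, 4, 24 h: {remaining}")
```

Under either bound, essentially none of the previous day's ctDNA is still present at the next daily draw, which is why serial sampling can track response in near real time.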
However, CTCs can be a rich source of information about the genomic, transcriptomic, and proteomic content of the tumor; if grown in culture, CTCs can also provide functional assays. In contrast, ctDNA, while being easier to isolate that CTCs, cannot be cultured and the information obtainable from ctDNA is generally restricted to genomic analysis [18]. In this regard, blood-derived CTCs and tissue samples share similarities, as both can be isolated, cultured, and provide genomic, transcriptomic, and proteomic tumor data. In a blood sample (~10 cc), there will be tens to hundreds of ctDNA fragments for testing, whereas there will likely only be a handful of CTCs. One limitation of tissue DNA is that it is obtained from a discrete piece of tumor tissue; thus, tissue DNA cannot reflect heterogeneity amongst metastatic sites and is more difficult to be followed serially [19]. CTCs and ctDNA are, however, shed from multiple metastatic sites and therefore better reflect tissue heterogeneity than a tissue biopsy. On the other hand, the requirement for extremely sensitive techniques for genomic interrogation of CTCs and ctDNA means that tissue assays often yield greater numbers of positive genomic alterations, and tissue NGS assays are generally more comprehensive than those applied to ctDNA. A unique advantage to CTCs and ctDNA is their amenability to longitudinal follow up with a simple blood test in order to predict therapeutic response and resistance [19]. Liquid Biopsy and Dynamics of Normal Versus Tumor Cell-Free DNA (cfDNA) Elevated levels of cfDNA were found in patients with cancers but can be detected during pregnancy and in patients with history of organ transplant [20]. Generally, the blood concentration of cfDNA can vary from 0-5 to >1000 ng/mL in cancer patients and between 0 and 100 ng/mL in otherwise healthy patients [21,22]. The large range of cfDNA and ctDNA found in patients with cancer is in part due to the fact that various tumor types can have wide variations in ctDNA shedding and that the amount of ctDNA can reflect tumor burden. Patients with brain, kidney, and thyroid cancers have been found to have lower levels of cfDNA than those patients with pancreatic, colorectal, ovarian, breast, gastroesophageal, and melanoma [13,23]. Additionally, premalignant and earlystage cancers generally have lower levels of cfDNA compared to patients with advanced disease [21]. Not all of the cfDNA in the bloodstream of cancer patients is ctDNA, and it is important to recognize what fraction of cfDNA is actually from a cancer. It is believed that 0.1-89% of cfDNA is made up of ctDNA and that the ratio may increase as a cancer progresses [13,24,25]. The sizes of cfDNA are estimated to be between 40 and 200 base pairs [26][27][28]. If wrapped in chromatin, the DNA in these vesicles can make up to 2 million base pairs [29]. These fragments are believed to be part of tumor metabolism and growth; fragments from necrotic tumor tissue can be over 10,000 kilobases [30]. The amount of cfDNA found within the bloodstream is dependent on the balance of release and clearance of cfDNA. Clearance can occur within the primary tumor tissue, within the blood, or within various filtration organs: spleen, liver, and lymph nodes [31]. Elevated levels of cfDNA in patients with cancer is believed to be in part because of lack of clearance and subsequent accumulation. 
Within the bloodstream, degradation of cfDNA is performed in large part by circulating enzymes: deoxyribonuclease (DNAse) I, plasma factor VII-activating protease, and factor H [32,33]. Within the spleen and liver, Kupffer cells and macrophages have been implicated in removing cfDNA and nucleosomes from circulation [34]. The presence of tumor in patients with cancer may explain the higher levels of cfDNA detected, in part through increased release and in part through the inability of these various mechanisms to clear the fragments. How ctDNA Enters and Leaves the Circulation It is unclear exactly how ctDNA enters the bloodstream; however, it is postulated that when primary tumor cells or metastatic cells die via apoptosis or necrosis, DNA fragments may be released into the bloodstream [22,35]. The amount of ctDNA that can be found within the bloodstream is heavily dependent on the overall tumor biology and burden. The half-life of ctDNA is estimated to be between 30 min and two hours; ctDNA is rapidly degraded by bloodstream DNases [31]. The main detection techniques, with their advantages and limitations, are summarized as follows. Droplet digital PCR (ddPCR): advantage, high sensitivity; limitation, only detects known alterations [38]. Cancer Personalized Profiling by deep Sequencing (CAPP-Seq): advantage, high sensitivity; limitation, not fully comprehensive [39]. Tagged-amplicon deep sequencing (TAm-Seq): advantage, high sensitivity; limitation, not fully comprehensive [40]. Whole exome sequencing (WES): advantage, includes the entire exome; limitation, lower sensitivity [41]. Whole genome sequencing (WGS): advantage, includes the entire genome; limitation, lower sensitivity [42]. ddPCR can identify potentially rare mutations, calculate copy number variants, as well as inform on miRNA [36]. This method also allows for detection of very low levels of genomic material, 0.01-1.0% [37]. The most notable limitation of this method of ctDNA detection, however, is that only characterized sequences can be screened. The use of BEAMing allows for the assessment of characterized alterations (e.g., SNVs, indels, and amplifications) and combines PCR with flow cytometry [43]. This allows for the detection of alterations at exceedingly low levels (0.01%) with marked concordance to tissue testing of 91.8% [38]. The CAPP-Seq technique utilizes large genomic libraries combined with individual patient sample sequence signatures to identify alterations within ctDNA. This method combines statistical assessment of well-characterized tumor alterations with DNA oligonucleotides to identify patient-specific alterations [39]. This method allows for the identification of various genomic alterations such as insertions/deletions, single nucleotide variants, rearrangements, and copy variants. A limitation of CAPP-Seq is the inability to identify fusions, in contrast to ddPCR, TAm-Seq, WES, and WGS [39]. The TAm-Seq technique allows for highly sensitive and specific analysis (~97%) along with the ability to detect low levels of ctDNA (down to ~2%). This method uses primers to tag and identify the desired genomic sequence. The limitation with this technique is that the sequence needs to be characterized to be included in the analysis [40]. Whole exome sequencing allows for comprehensive analysis and characterization of potentially all tumor mutations. In doing so, the sensitivity may be lower than other modalities because it includes all exomic alterations. The limitations of WES relate to error rate and sensitivity [41,44].
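The detection limits quoted above (down to roughly 0.01% mutant fraction for ddPCR and BEAMing) are ultimately a counting problem: a rare mutant allele must be sampled several times against a large excess of wild-type molecules. Here is a rough sketch of that constraint, assuming a simple binomial model of read counts and an illustrative calling threshold of three mutant reads; sequencing error and duplicate reads are ignored:

```python
from scipy.stats import binom

def detection_probability(vaf: float, depth: int, min_mutant_reads: int = 3) -> float:
    """Probability of observing at least `min_mutant_reads` mutant reads when the
    true variant allele fraction is `vaf` and `depth` molecules cover the locus."""
    return 1.0 - binom.cdf(min_mutant_reads - 1, depth, vaf)

for vaf in (0.01, 0.001, 0.0001):          # 1%, 0.1%, 0.01% mutant fraction
    for depth in (1_000, 10_000, 100_000):
        print(f"VAF {vaf:.2%}, depth {depth:>7}: P(detect) = {detection_probability(vaf, depth):.3f}")
```

The steep loss of sensitivity at 0.01% VAF unless coverage reaches the 10^5 range illustrates why targeted, error-suppressed assays outperform WES/WGS at low tumor fractions.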
WGS includes the entire tumor genome to discern characterized/deleterious alterations as well as many uncharacterized genomic events (variants of uncertain significance (VUSs)) and is mainly used for CNAs [42]. Clinical Laboratory Improvement Amendments (CLIA) Grade Commercially Available ctDNA Assays There are several CLIA-grade commercially available ctDNA assays that clinicians can order to potentially inform treatment decisions for patients. One of the most widely available of these tests is the Guardant360 CDx from Guardant Health, which was first accessible in 2014. The Guardant360 assay includes 73 genes commonly altered in cancers and can identify single-nucleotide variants (SNVs), insertions/deletions (indels), fusions, and copy number alterations (CNAs) (https://www.therapyselect.de/sites/default/files/downloads/guardant360/guardant360_specification-sheet_en.pdf, accessed date: 10 January 2021). This Guardant360 assay requires two 10 cc tubes of whole blood and is reported to have results in 7 calendar days after receipt of the samples. In 2018, Foundation Medicine released their ctDNA assay called FoundationOne Liquid, which now includes 311 genes implicated in cancers (https://assets.ctfassets.net/w98cd481qyp0/wVEm7VtICYR0sT5C1VbU7/55b945602d7dc78f42b3306ca1caa451/FoundationOne_Liquid_CDx_Technical_Specifications.pdf, accessed date: 10 January 2021). The FoundationOne Liquid assay includes base substitutions, indels, rearrangements, copy number alterations, and MSI-H status. The FoundationOne Liquid assay requires two 8.5 cc tubes of whole blood and reports to have results within less than two weeks after receipt of the samples. Also in 2018, Tempus introduced Tempus xF, a ctDNA assay, which includes 105 genes implicated in cancers. The Tempus xF assay includes SNVs, indels, rearrangements/fusions, CNAs, and MSI-H status (https://www.tempus.com/wp-content/uploads/2020/02/xF-Validation_013020-2.pdf, accessed date: 10 January 2021). The Tempus xF assay requires two 8 cc tubes of whole blood and reports to have results in nine to 14 days after receipt of the samples. Food and Drug Administration (FDA) Approvals for ctDNA Tests In August 2020, the FDA approved the use of the FoundationOne Liquid CDx test from Foundation Medicine, Inc. as a companion diagnostic test for patients with ovarian cancer to identify mutations in BRCA1/2 for the use of rucaparib, for patients with metastatic hormone-resistant prostate cancer with mutations in BRCA1/2 and ATM for the use of olaparib, for patients with metastatic hormone-resistant prostate cancer with mutations in BRCA1/2 for the use of rucaparib, for patients with non-small cell lung cancer (NSCLC) with ALK rearrangement for the use of alectinib, for patients with NSCLC with EGFR exon 19 deletions and EGFR exon 21 L858R alterations for the use of gefitinib, erlotinib, and osimertinib, and for patients with breast cancer with mutations in PIK3CA C420R, E542K, E545A, E545D [1635G > T only], E545G, E545K, Q546E, Q546R, H1047L, H1047R, and H1047Y for the use of alpelisib [45,46]. Guardant360 CDx by Guardant Health Inc. was also approved in 2020 to identify EGFR exon 19 deletions, L858R, and T790M mutations in patients with NSCLC for the use of osimertinib [47]. The Therascreen PIK3CA RGQ PCR Kit was approved in 2019 to detect 11 mutations in the PIK3CA gene in patients with metastatic breast cancer for the use of alpelisib [48].
Additionally, Cobas EGFR Mutation Test v2 was also approved in 2016 to identify EGFR L858R mutations in patients with NSCLC for the use of erlotinib [49]. ctDNA for Early Diagnosis of Cancer Early cancer detection could transform outcomes by detecting lethal tumors at a time when the malignancies are curable, and treatment invokes less morbidity. However, the technical, biological, and clinical hurdles to developing an effective pan-cancer screening test for early cancer are substantial. Liquid biopsies with NGS of ctDNA are an attractive tool, but the very small amounts of ctDNA in early disease is still a major technical challenge, as is the issue that noncancerous normal tissue may have somatic mutations indistinguishable from those in cancer, but as mentioned above, CH mutations can be filtered out by using healthy tissue control samples. Still, Cohen et al. developed a noninvasive blood test, called CancerSEEK, that detected eight common cancer types through assessment of circulating proteins and mutations in cfDNA. In a study of 1005 patients previously diagnosed with non-metastatic cancer and 850 healthy control individuals, CancerSEEK detected cancer with a 99% specificity and a sensitivity of 69% to 98% (depending on type of malignancy) [50]. Another methodology that has recently been exploited is assessing ctDNA methylation patterns, noting that increased methylation of tumor suppressor genes can be seen as an early inciting event in the carcinogenesis of various tumors, such as hepatocellular and colorectal carcinomas [51]. A prospective case-control study evaluated the performance of pan-cancer targeted methylation analysis of cfDNA. With 6689 participants (2482 cancers (>50 cancer types), 4207 healthy), specificity was 99.3% and stage I-III sensitivity was 43.9% in all cancer types [52]. Other unique technologies aimed at early cancer detection continue to be explored. Multiple studies have shown that ctDNA can be an important prognostic factor. For instance, in triple-negative breast cancer patients who had received or were receiving neoadjuvant chemotherapy, the detection of ctDNA was associated with a significantly worse DFS (p = 0.027) [53]. Additionally, at the last post-chemotherapy pre-surgery time point, detection of ctDNA was strongly associated with shorter DFS (p = 0.013) and OS (p = 0.006) [53]. In patients receiving adjuvant chemotherapy for locally advanced rectal cancer, 122 (77%) of 159 patients had pre-surgical detectable ctDNA and after surgery only 12 of 140 (8.6%) with negative ctDNA (hazard ratio (HR) 12, p < 0.001) experienced recurrence [54]. Further, post-op ctDNA detection predicted recurrence regardless of adjuvant chemotherapy (chemotherapy: HR 10, p < 0.001; no chemotherapy: HR 16, p < 0.001) and ctDNA detection predicted higher recurrence rate among patients with a pathological complete response (HR 14, p = 0.014) or with pathologic node-positive disease (HR 11, p < 0.001) [54]. A cohort study of patients with local advanced anal squamous cell cancer found that, in 33 patients, ctDNA detection after chemoradiation was associated with shorter DFS (p < 0.0001) [55]. Additionally, this study reported that ctDNA was associated with stage (64% in stage II and 100% in stage III; p = 0.008) and baseline ctDNA levels were higher in pathological node positive (median 85 copies/mL, range = 8-9333) than pathological node negative disease (median 32 copies/mL, range = 3-1350) p = 0.03 [55]. 
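Returning to the screening figures above: because population prevalence is low, even a 99%-specific test produces many false positives per true cancer detected. A back-of-the-envelope sketch follows; the 1% prevalence is an assumed illustrative figure, not a value from the CancerSEEK study:

```python
def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """P(cancer | positive test) via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# CancerSEEK-like operating point: 99% specificity, sensitivity 69%-98% depending on tumor type.
for sens in (0.69, 0.98):
    ppv = positive_predictive_value(sens, specificity=0.99, prevalence=0.01)  # assumed 1% prevalence
    print(f"sensitivity {sens:.0%}: PPV = {ppv:.2f}")
```

Roughly half of the positives would be false alarms under these assumptions, which is why specificity, confirmatory workup, and the choice of screened population dominate the design of ctDNA-based early-detection tests.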
In another study, this one in pancreatic cancer, higher levels of total %ctDNA were an independent prognostic factor for worse survival (hazard ratio, 4.35; 95% confidence interval, 1.85-10.24 (multivariate, p = 0.001)) [63]. ctDNA to Measure Residual Disease The ability of ctDNA to track tumor-specific mutations and to detect occult cancer lend themselves naturally to assessment of minimal residual disease. Further, the ease of plasma sampling permits ctDNA levels to be serially followed in order to longitudinally trend mutation status and frequently assess dynamic changes in levels of ctDNA, as reflected by percent ctDNA (or variant allele fraction (VAF). Multiple studies are now beginning to confirm clinical utility of ctDNA in evaluating minimal residual disease [73]. For instance, declines in circulating allele fractions of relevant mutations have been associated with clinical outcomes in melanoma, colorectal cancer, breast and ovarian cancer, and EGFR-positive lung cancer [74][75][76][77][78]. As examples, in a study that monitored patients with colorectal cancer pre-and post-surgery, pretreatment ctDNA was detected in 93.4% (100/107) of patients; post-operative ctDNA status was assessed in 107 patients, of whom, 13% (14/107) were minimal residual disease-positive. Of the positive patients, 42.9% (6/14) eventually relapsed while only 8.6% (8/93) of the negative patients relapsed (HR: 10; 95% CI: 3.3-30; p < 0.001). In multivariate analysis, ctDNA status was the most significant prognostic factor associated with relapse-free survival (HR: 28.8, 95% CI: 3.5-234.1; p < 0.001) [79]. Similarly, in patients undergoing surgery for peritoneal metastases, high levels of pre-operative ctDNA and new postoperative ctDNA alterations in the context of preoperative alterations predicted worse outcomes [59]. Applications may include using ctDNA to determine escalation or de-escalation of adjuvant therapy. Discerning ctDNA Molecular Alterations That Can Inform Decision Making Multiple studies demonstrate the important use of ctDNA interrogation for prosecuting treatment. In fact, as mentioned above, the FDA has approved several ctDNA tests as companion diagnostics [45][46][47]: detection of BRCA1/2 alterations for the use of the poly (ADP-ribose) polymerase (PARP) inhibitor rucaparib in ovarian cancer; BRCA1/2 and ATM mutations for the use of the PARP inhibitor olaparib in prostate cancer; ALK and EGFR alterations to be treated with the ALK inhibitor alectinib or the EGFR inhibitors gefitinib, erlotinib, and osimertinib in NSCLC; and a variety of PIK3CA alterations to be treated with the PIK3CA inhibitor alpelisib in breast cancer. Numerous other studies support the utility of ctDNA for genomic characterization aimed at assisting therapeutic choice. For instance, one study in patients with advanced breast cancer found that 68% (42/62) of patients had ≥1 characterized/pathogenic ctDNA alteration (non-VUS) [57]. A similar study in patients with advanced and resected esophageal, gastroesophageal junction, and gastric adenocarcinoma found that 76% (42/55) of patients had a ctDNA alteration, with 69% (38/55) having ≥1 characterized/deleterious (non-VUS) [58]. In gynecologic cancers, therapy matched to ctDNA alterations (n = 33 patients) was independently associated with improved survival (HR: 0.34, p = 0.007) compared to unmatched therapy (n = 28 patients) in multivariate analysis [60]. 
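The colorectal minimal-residual-disease numbers above can be collapsed into a simple two-group comparison of relapse by post-operative ctDNA status. The sketch below recomputes the crude relative risk from the quoted counts (6/14 ctDNA-positive vs. 8/93 ctDNA-negative relapses); note that the study itself reports time-to-event hazard ratios, which are a different quantity:

```python
def relative_risk(events_exposed: int, n_exposed: int,
                  events_control: int, n_control: int) -> float:
    """Crude relative risk of an event in the exposed vs. the control group."""
    return (events_exposed / n_exposed) / (events_control / n_control)

# Relapse by post-operative ctDNA status, counts as quoted in the text.
rr = relative_risk(events_exposed=6, n_exposed=14, events_control=8, n_control=93)
print(f"Crude relative risk of relapse, ctDNA-positive vs. ctDNA-negative: {rr:.1f}")  # approx. 5.0
```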
In a study focused on EGFR amplification, such amplifications were detected in cfDNA in a significant subset of pan-cancer patients (8.5% of 28,584). Most patients had coexisting alterations. Importantly, responses were observed in five of nine patients who received EGFR inhibitors, including patients who showed ctDNA EGFR amplifications but no amplifications in the tissue DNA [64]. Taken together, it is apparent that ctDNA molecular alterations can play a vital role in informing therapeutic decision-making. Monitoring Response, Resistance, and Burden/Aggressiveness of Disease Resistant ctDNA alterations may emerge months before changes on scans are noted and can inform an understanding of mechanisms of resistance in colorectal, lung, and breast cancers, as examples [80][81][82]. For instance, ctDNA was used to identify early resistance mutations in patients with HER2-amplified breast cancer; PI3K/mTOR pathway alterations were the major cause of resistance [83]. This information may be exploitable with the addition of another targeted therapeutic [84]. Using ctDNA in a longitudinal fashion could allow for concomitant or sequential targeting of multiple gene mutations in real time. This strategy and the ability of ctDNA to offer this information prior to imaging and without the need for additional tissue biopsies may be part of the holy grail of getting the right drug to the right patient at the right time. Based on criteria established by the OncoKB database, and other evidential reports, studies have shown that over one-quarter of cancers harbored level 1 actionable targets in their ctDNA [85]. The ability to find these mutations early in the treatment course could potentially alter the trajectory of recognizing mutation acquisition, thus enhancing patient outcomes. Furthermore, ctDNA can be an early marker of response. For instance, drug-induced tumor apoptosis may occur for EGFR-targeted therapy in lung cancer within days of initial dosing, and daily sampling of ctDNA may facilitate early assessment of patient response within the first week of treatment with EGFR inhibitors [10]. Similarly, ctDNA has been used to predict response to treatment before radiographic response in colorectal cancer [75]. This measurable entity portends survival even in the setting of neoadjuvant therapy of breast cancer [86]. Similarly, early plasma ctDNA changes predicted response to first-line pembrolizumab in patients with lung cancer [70]. Finally, genome-wide sequencing of cfDNA identified copy number alterations that could be used for monitoring early response (or resistance) to immunotherapy in cancer patients [72,87]. Multiple publications also show that both %ctDNA (VAF) and the number of alterations in ctDNA predict a poor prognosis, possibly because they reflect tumor burden and/or aggressiveness [14]. The Issue of Concordance between ctDNA and Tissue DNA Several studies have examined the concordance in molecular alterations between tissue and ctDNA samples. In general, concordance is variable, ranging from ~50% to over 95% [69,88]. The literature suggests that the results from liquid biopsies and from tissue biopsies, vis-à-vis NGS, are highly reproducible [89]. Therefore, biological differences most likely account for discrepant ctDNA and tissue DNA NGS results.
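The longitudinal monitoring described above reduces, operationally, to flagging alterations whose variant allele fraction rises across serial blood draws. A toy sketch of such a trend check follows; the gene names, draw schedule, and rise threshold are purely illustrative:

```python
from typing import Dict, List

def flag_rising_alterations(serial_vafs: Dict[str, List[float]],
                            min_rise_pct_points: float = 0.5) -> List[str]:
    """Return alterations whose VAF (in %) never decreases across draws and
    rises by at least `min_rise_pct_points` overall."""
    flagged = []
    for alteration, vafs in serial_vafs.items():
        non_decreasing = all(later >= earlier for earlier, later in zip(vafs, vafs[1:]))
        if non_decreasing and (vafs[-1] - vafs[0]) >= min_rise_pct_points:
            flagged.append(alteration)
    return flagged

# Illustrative serial draws (VAF in %), not real patient data.
draws = {"PIK3CA H1047R": [0.2, 0.6, 1.8], "TP53 R273H": [2.1, 1.4, 0.9]}
print(flag_rising_alterations(draws))  # ['PIK3CA H1047R']
```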
The biologic attributes that underlie differences between tissue and ctDNA results include (i) shedding of DNA into the bloodstream may be limited from some sites, (ii) ctDNA can be suppressed by treatment, and (iii) tissue DNA tests the genomics in a small sample of tissue, whereas ctDNA may reflect shed DNA from multiple metastatic sites. Both tissue and ctDNA may be confounded by germline alterations and by clonal hematopoiesis of indeterminate potential, though ctDNA may be more vulnerable to such confounders. Interestingly, studies now show that concordance between ctDNA and tissue DNA alterations, at least for TP53 and for KRAS, is associated with worse outcomes [68,69]. Conclusions and Future Directions Tumors release ctDNA into the bloodstream. The amount of ctDNA discernable, as reflected by percent of DNA VAF and the number of ctDNA alterations, may be an indicator of tumor burden and/or aggressiveness, with higher numbers predicting worse prognosis. Blood-derived ctDNA may provide crucial molecular information as a complement to the tumor biopsy for the following reasons: (i) some cancer tissue is not easily or safely accessible for biopsy; (ii) even if accessible, tumor biopsies can be complex and expensive procedures with morbidity; (iii) over time, the tissue that was biopsied may become less representative of the tumor, since malignancies undergo genomic evolution; (iv) genomic aberrations discerned in a tissue biopsy reflect the content of the small tissue sample, while ctDNA NGS abnormalities may reflect the heterogeneous alterations found in shed DNA from many metastatic sites; and (v) dynamic changes in ctDNA can occur and reflect response or resistance to treatment. Furthermore, evaluating ctDNA pre-or post-surgery may serve as a predictive tool for recurrence risk. Finally, ctDNA may be exploitable for early detection of lethal cancers when they are still curable and/or do not require drastic, life-altering interventions. There are also disadvantages to ctDNA as compared to tissue DNA assessment: (i) ctDNA is found in only small amounts in the circulation, making it difficult to detect alterations, and (ii) ctDNA carrying tumor-specific alterations may represent only a small fraction of the total genomic alterations in the tumor, since not all cancer-derived DNA may be shed into the blood. Therefore, variability in concordance rates between blood-derived ctDNA samples and tissue samples can be caused by spatial and temporal variables, as well as by dynamic changes driven by therapy and disease evolution; and (iii) ctDNA is more liable to be confounded by alterations of clonal hematopoiesis of indeterminate potential, and perhaps also by germline alterations. Taken together, the literature indicates that assessment of blood-derived ctDNA is a powerful and transformative technology which can inform genomic decision making for gene-and immune-targeted therapy, can predict prognosis, and can be followed serially to assess response, resistance, and residual disease.
6,330.8
2021-07-01T00:00:00.000
[ "Biology", "Medicine" ]
All-Day Thermogalvanic Cells for Environmental Thermal Energy Harvesting Direct conversion of the tremendous and ubiquitous low-grade thermal energy into electricity by thermogalvanic cells is a promising strategy for energy harvesting. The environment is one of the richest and most renewable sources of low-grade thermal energy. However, critical challenges remain for all-day electricity generation from environmental thermal energy due to the low frequency and small amplitude of temperature fluctuations in the environment. In this work, we report a tandem device consisting of a polypyrrole (PPy) broadband absorber/radiator, thermogalvanic cell, and thermal storage material (Cu foam/PEG1000) that integrates multiple functions of heating, cooling, and recycling of thermal energy. The thermogalvanic cell enables continuous utilization of environmental thermal energy at both daytime and nighttime, yielding maximum outputs as high as 0.6 W m−2 and 53 mW m−2, respectively. As demonstrated outdoors by a large-scale prototype module, this design offers a feasible and promising approach to all-day electricity generation from environmental thermal energy. Introduction Low-grade thermal energy (<100°C) is an energy source with tremendous potential that exists in the environment, industrial processes, and the human body [1][2][3]. Unfortunately, most of this energy is wasted because of its wide distribution and limited recovery technologies [4,5], and extra energy is often consumed for heat dissipation and cooling, which runs counter to global energy conservation. Direct conversion of low-grade thermal energy into electricity by thermoelectric technologies, without any energy consumption or carbon emission, is a promising strategy for the imminent energy and environmental crises [6]. Conventional solid-state thermoelectric devices have high efficiency at high temperatures, but high costs and material limitations impede their practical application for low-grade thermal energy [7][8][9]. Thermogalvanic cells (TGCs) that consist of redox couples, electrolytes, and electrodes can generate sustainable electricity due to a temperature-dependent redox potential [10][11][12]. The features of TGCs, including a high Seebeck coefficient (S e ) (~1 mV K -1 ), low cost, flexibility, scalable route, and matched operation temperature, make these cells an ideal alternative to solid-state thermoelectric devices for large-scale low-grade thermal energy harvesting [13]. For TGC systems, the open-circuit voltage (V oc ) is described as follows [3]: V oc = S e ΔT, where ΔT is the temperature differential. Obviously, a real-time spatial temperature differential is absolutely necessary for electricity generation. In practical scenarios, the operating ΔT is mostly established between heat sources and the ambient environment [3,14,15]. However, it is generally ignored that the environment itself is one of the most abundant and renewable low-grade thermal energy sources. Environmental thermal energy is present in the form of fluctuations of environmental temperature over time (e.g., diurnal fluctuation) [16], mainly contributed by the earth absorbing solar irradiation at daytime and passively radiating heat to outer space at nighttime, and affected by ever-changing weather conditions, seasons, and locations. Unfortunately, due to the single temporal temperature differential, all-day electricity generation from environmental thermal energy remains a critical challenge.
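Given the linear relation V oc = S e ΔT above, the voltage expected from a given temperature differential, or conversely the ΔT implied by a measured voltage, follows immediately. Below is a minimal sketch; the 1.4 mV K−1 Seebeck magnitude is the value quoted later in this paper for the ferri/ferrocyanide electrolyte, and the 40 K differential is illustrative:

```python
def open_circuit_voltage_mV(seebeck_mV_per_K: float, delta_T_K: float) -> float:
    """Open-circuit voltage of a thermogalvanic cell from V_oc = S_e * dT."""
    return seebeck_mV_per_K * delta_T_K

def delta_T_from_voltage_K(v_oc_mV: float, seebeck_mV_per_K: float) -> float:
    """Invert the same relation to estimate the temperature differential across the cell."""
    return v_oc_mV / seebeck_mV_per_K

print(open_circuit_voltage_mV(1.4, 40.0))   # about 56 mV for an (illustrative) 40 K differential
print(delta_T_from_voltage_K(54.2, 1.4))    # about 39 K implied by the 54.2 mV measured later in the paper
```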
To harvest temperature fluctuations for electricity generation, some novel and emerging technologies have been reported and developed recently, such as pyroelectric energy harvesters [17][18][19], thermally regenerative electrochemical cycles [2,6,20], and thermal resonators [16,21]. However, pyroelectric energy harvesters strongly rely on high-frequency temperature fluctuations [18], mismatching the wide diurnal fluctuation of environmental temperature. Although thermally regenerative electrochemical cycles exhibit high efficiency with a small-scale device, the high cost, electrode reversibility, and cell durability still limit their application at large scale [5]. Thermal resonators provide an approach to the conversion of temporal temperature differential to spatial temperature differential by using phase change materials (PCMs) and have the capability of being optimized at different target frequencies of temperature fluctuations [16], but the small amplitude (generally approximate to the temperature difference between day and night) of temperature fluctuations becomes a critical limitation when they are applied in practical environmental thermal energy harvesting. Not only the low frequency but also the small amplitude of temperature fluctuations impedes effective utilization of environmental thermal energy by current single technology. It is worth noting that solar irradiation is a significant contributor of environmental temperature fluctuations and solar-thermal conversion technologies have been extensively investigated for solar steam generation [22][23][24][25][26][27], electricity generation [28][29][30][31][32], and solar hot-water systems [33]. In addition, passive radiative cooling (PRC), a phenomenon in which a surface spontaneously cools by radiating heat to the cold outer space through the longwave infrared (LWIR) transmission window (8-13 μm) of the atmosphere, has been demonstrated to supply considerable cooling power density without sunlight [34][35][36][37][38]. Hence, the development of hybrid systems might introduce a novel avenue for the use of environmental thermal energy in all-day electricity generation, which is of great importance to relieve energy issues. In this work, we report a tandem device based on a polypyrrole (PPy) broadband absorber/radiator layer, thermogalvanic cell, and thermal storage material that maximizes the temperature differential (ΔT) across the device during the traditional small amplitude of environmental temperature fluctuations and achieves all-day electricity generation. The structure of the thermogalvanic cell is illustrated in Figure 1(a) and Figure S1. The top layer is a hierarchically structural PPy layer that serves as a heat exchanger with an ambient environment including heating and cooling. The thermogalvanic cell in the middle consists of two graphite sheet electrodes and 0.4 M potassium ferricyanide/ferrocyanide (K 3 Fe(CN) 6 /K 4 Fe(CN) 6 ) aqueous electrolyte with a relatively high Seebeck coefficient (S e ) of -1.4 mV K -1 [1,39]. A PCM (labelled as Cu foam/PEG1000 in Figure 1(a) and Figure S1a) at the bottom stores thermal energy and maintains a hysteretic temperature (near the phase transition temperature T * ) on the bottom electrode. The mechanisms of the two working models of the device and the corresponding energy flux are schematically depicted in Figure 1(b). Model 1 (upper) is driven by heating at daytime with a relatively hot environmental temperature and sometimes natural sunlight. 
The top electrode achieves a high temperature via the PPy layer absorbing radiation from an ambient environment and natural sunlight, whereas the bottom electrode maintains a low temperature by storing latent heat in the PCM, yielding a large temperature differential (ΔT) across the TGC. Complementary model 2 (lower) is driven by cooling at nighttime with a relatively cold environmental temperature. The top electrode cools quickly due to the strong radiative cooling ability of the PPy layer, and the bottom electrode also maintains the temperature near the phase change temperature (T * ) of the PCM. As a result, a considerable inverse ΔT is built in the TGC. Consequently, the thermogalvanic cell with a large ΔT yields an impressive maximum output of 0.6 W m -2 at sunny daytime, and an extra output of 53 mW m -2 is still achieved at nighttime. In addition, the device also exhibits a continuous output during ambient environmental temperature fluctuation without any illumination, which testify its feasibility at sunless day. Furthermore, a proof-of-concept large-scale prototype is successfully fabricated to demonstrate the ability to harvest and recycle environmental thermal energy for all-day electricity generation outdoors as well as the feasibility of scale up. Characterizations of the Polypyrrole-(PPy-) Modified Graphite Sheet. We used the in situ chemical oxidation method to polymerize PPy on the top graphite electrode (see Supplementary Materials for details). The graphite sheets were selected as the electrodes for the TGC due to its low cost and relatively high current density [3]. Figure 2(b) compares the optical photograph and corresponding surface scanning electron microscopy (SEM) image of a PPy-modified graphite sheet (labelled as PPy/graphite) with those from a pristine graphite sheet (labelled as graphite). The PPy/graphite is notably dark in contrast to the pristine light-grey graphite, and PPy displays a typically cauliflower-like hierarchical structure ranging from nanosize to microsize. The cross-sectional SEM image (Figure 2(c)) shows the PPy layer with an average thickness of 20 μm on the graphite sheet. The dependence of the thickness of PPy on polymerization times was also characterized by SEM ( Figure S2). The chemical composition of the PPy/graphite was analysed by Fourier transform infrared (FTIR) spectroscopy ( Figure 2(d)). The spectrum of PPy/graphite shows identical absorption peaks at 1517 cm −1 and 1014 cm −1 , corresponding to the in-ring stretching of C=C bonds in the pyrrole rings and the in-plane deformation of N-H bonds, respectively [40]. No absorption peak is present for graphite ( Figure 2(d)). Furthermore, we also investigated the stability of PPy/graphite via FTIR spectroscopy and thermogravimetric analysis (TGA), as shown in Figure S3. All of the characteristic peaks of PPy are consistent with the pristine sample after exposure to the environment for one month, indicating excellent stability for outdoor operation. As schematically depicted in Figure 2(a), the mechanism benefits from the varied sizes of the PPy clusters and matched bonding frequency and multiple scattering and absorption of radiation exist in the hierarchical PPy layer that significantly suppresses reflection. Therefore, PPy/graphite exhibits ultrahigh broadband absorptivity/emissivity, showing distinct advantages over pristine graphite. 
The spectroscopic performance in both the solar (0.3 to 2.5 μm) and infrared (2.5 to 25 μm) regions was characterized by ultraviolet-visible-near-infrared (UV-Vis-NIR) spectrophotometry and FTIR spectrometry, respectively (Figure 2(e)). The absorptivity of PPy/graphite is greater than 0.98, as weighted by the standard air mass 1.5 global (AM 1.5 G) solar spectrum. The average emissivity of approximately 0.93 is measured over the atmospheric LWIR transmission window (8-13 μm). Both of these values lay the foundation for efficient heating at daytime and cooling at nighttime. Furthermore, we compared the absorptivity/emissivity values of different PPy thickness samples (Figure S4), which were nearly equal within the range of errors. Hence, PPy/graphite with a PPy thickness of 20 μm was used in the following experiments, considering the relatively low thermal resistance. Performances of Heating and Cooling. To test the performance of heating assisted with natural sunlight, the PPy/graphite and graphite were illuminated with different energy densities generated by a solar simulator. As shown in Figure 3(a), the temperature of the samples increases with the increase in illumination time. Due to the excellent absorptivity, as noted above, PPy/graphite exhibits a more rapid rate of temperature increase and reaches a steady-state temperature of 91°C under one solar radiation density, much higher than that of graphite at 80°C, in agreement with infrared (IR) thermal images of the steady state (inset of Figure 3(a)). In addition, the steady-state temperature of PPy/graphite at different illumination densities is significantly higher than that of graphite (Figure 3(b) and Figure S5). These results verify the critical role of the hierarchical PPy layer in enhancing heating ability. The radiative cooling performance of PPy/graphite and graphite was also investigated by theoretical simulations and outdoor experiments. Considering all of the heat exchange processes, the net cooling power (P cool ) of a radiator can be defined as follows [34]: P cool (T) = P rad (T) − P atm (T amb ) − P cond+conv − P sun , where P rad is the radiation emitted by the radiator, P atm is the incident atmospheric radiation absorbed by the radiator, P cond+conv is the thermal losses due to convection and conduction, and P sun is the incident solar power absorbed by the radiator. In this work, I BB (T, λ) = (2hc^2/λ^5) × 1/(e^(hc/(λ k B T)) − 1) is the spectral radiance of a blackbody defined by Planck's law at temperature T, where h is Planck's constant, k B is the Boltzmann constant, c is the speed of light in a vacuum, λ is the wavelength, and ε(λ, θ) is the emissivity of the radiator according to Kirchhoff's law. The angle-dependent emissivity of the atmosphere is given by [41] ε atm (λ, θ) = 1 − t(λ)^(1/cos θ), where t(λ) is the atmospheric transmittance in the zenith direction [42], T and T amb are the temperatures of the radiator and ambient air, respectively, and h c = h cond + h conv is a combined nonradiative heat coefficient stemming from the conductive and convective heat exchange of the radiator with the ambient air. Considering the practical operation of PPy/graphite and graphite at night, we assumed the terms P sun = 0, T amb = 20°C, and h c = 6 W m -2 K -1 [43]. The simulated P cool of PPy/graphite, graphite, an ideal broadband radiator (i.e., blackbody), and an ideal selective radiator (which has a unity emissivity only over the atmospheric LWIR transmission window of 8-13 μm) are shown in Figure 3(c). The transverse intercept (P cool = 0) represents the lowest temperature that the radiator can reach.
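The cooling-power balance above can be evaluated numerically by integrating Planck's law weighted by the emissivities. The sketch below is a simplified, hedged version: it assumes gray, angle-independent emissivities for both the radiator (0.93, as measured) and the atmosphere (0.7, an assumed placeholder for the full t(λ)-based model), uses the h c = 6 W m−2 K−1 and T amb = 20 °C values quoted in the text, and sets P sun = 0 for nighttime:

```python
import numpy as np

H = 6.626e-34    # Planck constant (J s)
C = 2.998e8      # speed of light (m/s)
KB = 1.381e-23   # Boltzmann constant (J/K)

def planck_radiance(wavelength_m: np.ndarray, T: float) -> np.ndarray:
    """Blackbody spectral radiance I_BB(T, lambda) in W m^-3 sr^-1."""
    return (2.0 * H * C**2 / wavelength_m**5) / np.expm1(H * C / (wavelength_m * KB * T))

def hemispherical_power(T: float, emissivity: float) -> float:
    """pi * integral of eps * I_BB over 2.5-25 um for a gray, angle-independent emitter (W/m^2)."""
    lam = np.linspace(2.5e-6, 25e-6, 5000)
    return float(np.pi * np.sum(emissivity * planck_radiance(lam, T)) * (lam[1] - lam[0]))

def net_cooling_power(T_radiator: float, T_amb: float = 293.15,
                      eps_radiator: float = 0.93, eps_atm: float = 0.7,
                      h_c: float = 6.0) -> float:
    """P_cool = P_rad - P_atm - P_cond+conv, with P_sun = 0 (night); gray-body simplification."""
    p_rad = hemispherical_power(T_radiator, eps_radiator)
    p_atm = eps_radiator * hemispherical_power(T_amb, eps_atm)
    p_nonrad = h_c * (T_amb - T_radiator)
    return p_rad - p_atm - p_nonrad

print(f"P_cool with the radiator at ambient temperature: {net_cooling_power(293.15):.0f} W/m^2")
```

Lowering eps_atm (a dry, clear sky) raises the cooling power, while any gap T amb − T radiator feeds parasitic heat back through h c, reproducing the qualitative trends of Figure 3(c).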
The ideal selective radiator can reach a lower temperature, whereas it has an inferior P cool when the temperature is not much lower than T amb [44]. In contrast, the ideal broadband radiator has a superior P cool over a wide temperature range, especially at high temperature. In this work, the device is heated by thermal storage materials at night (Figure 1(b)), the temperature of which is higher than T amb at all times. Therefore, the ideal broadband radiator is a better choice. Because the high emissivity in the entire infrared band is close to that of the ideal broadband radiator, the PPy/graphite exhibits much higher P cool than graphite. Furthermore, we demonstrated the real-time, continuous outdoor radiative cooling performances of the samples after solar heating (Figure 3(d)). In addition, the fluctuation of relative humidity in the ambient air was also measured (inset of Figure 3(d)). PPy/graphite yields an average of~2.5°C and~5°C lower than graphite and ambient air, respectively. The remarkable heating and cooling performance of PPy/graphite is expected to generate as much larger ΔT for TGC operation than graphite in the day and at night, respectively. Performance of Electricity Generation. Although a highly efficient heat exchanger assisted with solar heating and radiative cooling is used, the ΔT across the TGC is still limited by the synchronous temperature fluctuations of both the top and bottom electrodes. To achieve a larger ΔT and recycle the residual thermal energy simultaneously, we connected the Cu foam/PEG1000 to the bottom electrode of the TGC. The Cu foam serves as a highly thermally conducting and porous matrix [16], and the PEG1000 bolsters the thermal capacitance through the latent heat of its phase change. PEG1000 is chosen as the PCM due to its suitable phase transition temperature (T * , 38°C), which is approximately the average temperature of the device during all-day operation ( Figure S6a). The Cu foam/PEG1000 with a high thermal effusivity (e) (see Supplementary Note 1 and Figure S6b) not only stores residual thermal energy via phase transition but also maintains the temperature of the bottom electrode near T * . The stored thermal energy is recycled as a heat source during night-time operation. Via the synergistic effect of PPy and Cu foam/PEG1000, a considerable ΔT (which means high output) can be yielded easily both in the daytime and at night without any complex optical or thermal concentration systems. To verify the rationality of our design, we compared the output performances of three different devices, namely, PPy-PCM (using both PPy/graphite and PCM), G-PCM (using graphite and PCM), and G-blank (using only graphite). During operation, all of the devices were illuminated to simulate the environment of sunny day, and the corresponding open-circuit voltage (V oc ) and the temperatures of the top electrodes (T top ) and the bottom electrodes (T bottom ) were recorded. As shown in Figure 4(a), the T top of PPy-PCM increases more rapidly and reaches a higher steady-state temperature than that of G-PCM due to better heating performance. The T top of PPy-PCM is a little lower than that of G-blank owing to its much lower T bottom . The T bottom values of PPy-PCM and G-PCM both increase slowly near T * (38°C) compared with that of G-blank, which is ascribed to the phase transition of the Cu foam/-PEG1000. As a result, the largest ΔT is measured in PPy-PCM under illumination. 
Corresponding to the regularity of temperature, PPy-PCM yields a maximum negative V oc of -54.2 mV, much larger than those of G-PCM (-44.9 mV) and G-blank (-34.1 mV), as clearly shown in Figure 4(b). When the phase change of PCM was complete (approximately three hours of illumination), the illumination was turned off and the devices were exposed to a mixture of ice water (~273 K) without direct contact (exchanging heat only by radiation) to simulate radiative cooling at a night-time environment. As shown in the grey area of Figure 4(a), it is worth noting that the superior radiative cooling ability of PPy-PCM produces a much lower T top compared with that of G-PCM. The T top of PPy-PCM is higher than that of Gblank owing to its much higher T bottom . The T bottom values of PPy-PCM and G-PCM have a long-term hysteresis effect and are higher than T top due to the release of latent heat by the Cu foam/PEG1000. Consequently, a maximum positive V oc of 18.2 mV is also achieved by PPy-PCM without illumination, which is almost twice that of G-PCM at 9.2 mV (Figure 4(b)). Without PCM, the ΔT of G-blank driven by the weak radiative cooling is so small that it only generates a positive V oc of less than 2.2 mV. Furthermore, we counted the average V oc of these three devices under illumination (C opt = 1) and after illumination (Figure 4(c)). Obviously, PPy-PCM generates the highest voltage output regardless of illumination and darkness. The current-voltage curves measured at the maximum V oc and the corresponding output power density are shown in Figure 4 respectively. It is unquestionable that PPy-PCM is the best choice for solar thermal energy harvesting. To estimate the feasibility of PPy-PCM in various weather conditions, we further tested its performance under varying optical concentration illumination and after illumination ( Figure S7). The calculated average V oc values in different conditions are shown in Figure 4(f). With the increase in optical concentration, the average V oc under illumination increases accordingly, whereas the average V oc after illumination changes with small fluctuation due to the same storage thermal energy by PCM. Furthermore, we calculated the total efficiency (η total ) for PPy-PCM, representing 50% and 200% enhancements of those of G-PCM and G-blank, respectively (see Supplementary Note 2 and Figure S8). Considering the sunless day during the practical scenarios, the PPy-PCM device was exposed to a hot and cold environment without illumination successively to test its performance of all-day electricity generation. As shown in Figure S9a, the device generates V oc continuously from a hot ambient temperature (45°C) to a cold ambient temperature (15°C). The maximum negative and positive V oc are -12 mV and 10 mV, respectively. And the corresponding I sc and P max are 7.6 A m -2 and 24 mW m -2 and 6.4 A m -2 and 17 mW m -2 ( Figure S9b). Outdoor Demonstration of a Large-Scale Prototype Module. To demonstrate the practical applications of this design for all-day harvesting of environmental thermal energy, a proof-of-concept tandem thermogalvanic cell prototype was fabricated for outdoor testing (Figure 5(a)). The device is based on a large-scale PPy/graphite with an active area of 10 cm × 10 cm ( Figure S10a). The used volume of the Cu foam/PEG1000 is simulated and depends on the absorption of all of the residual thermal energy during the daytime (see Supplementary Note 1 and Figure S10b). 
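For a cell with an approximately linear current-voltage characteristic, the maximum power delivered to a matched load is V oc I sc /4. This matched-load assumption is ours, not the paper's, but it reproduces the sunless-day outputs quoted above quite closely:

```python
def matched_load_power_W_per_m2(v_oc_V: float, i_sc_A_per_m2: float) -> float:
    """Maximum areal power for a linear V-I characteristic: P_max = V_oc * I_sc / 4."""
    return v_oc_V * i_sc_A_per_m2 / 4.0

# Sunless-day figures quoted above: V_oc = 12 mV / 10 mV, I_sc = 7.6 / 6.4 A m^-2.
hot_side  = matched_load_power_W_per_m2(12e-3, 7.6)   # about 22.8 mW m^-2 (reported: 24 mW m^-2)
cold_side = matched_load_power_W_per_m2(10e-3, 6.4)   # about 16.0 mW m^-2 (reported: 17 mW m^-2)
print(f"{hot_side*1e3:.1f} mW/m^2 and {cold_side*1e3:.1f} mW/m^2")
```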
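The PCM volume mentioned in the last sentence is sized so that the Cu foam/PEG1000 can absorb all of the residual daytime heat (Supplementary Note 1 of the paper). The sketch below is only an order-of-magnitude estimate under stated assumptions: the ~0.5 kW m−2 average flux and the 10 cm × 10 cm area echo the outdoor prototype, but the 8 h collection window, the fraction of absorbed heat reaching the PCM (one half), and the ~160 kJ kg−1 latent heat assigned to PEG1000 are assumptions, not values from the paper:

```python
def required_pcm_mass_kg(buffered_energy_kJ: float, latent_heat_kJ_per_kg: float) -> float:
    """Mass of phase-change material needed to absorb a given thermal energy
    through its solid-liquid transition alone (sensible heat neglected)."""
    return buffered_energy_kJ / latent_heat_kJ_per_kg

area_m2 = 0.10 * 0.10                                  # 10 cm x 10 cm absorber
absorbed_kJ = 0.5e3 * area_m2 * 8 * 3600 / 1e3         # ~0.5 kW/m^2 over an assumed 8 h -> 144 kJ
mass = required_pcm_mass_kg(0.5 * absorbed_kJ, latent_heat_kJ_per_kg=160.0)
print(f"PCM mass needed under these assumptions: {mass:.2f} kg")   # about 0.45 kg
```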
We measured the 24-hour continuous open-circuit voltage (V oc ) of the device and the temperatures of the top electrode (T top ) and the bottom electrode (T bottom ). Additionally, the solar flux of natural sunlight in the day and the relative humidity in the ambient air were also recorded. As shown in Figure 5(c), the T top and the T bottom increase synchronously with the enhancement of solar intensity and ambient temperature in the morning, reach a maximum at noon, and decrease after sunset. However, the impressive heating and cooling performance of the PPy layer mean that the T top is much higher at daytime with natural sunlight and is much lower at nighttime, respectively. The T bottom clearly exhibits a longterm hysteresis effect near T * (38°C) due to the phase transition of the Cu foam/PEG1000. Consequently, a considerable ΔT across the device lasts from day to night to generate a sustainable V oc (bottom graph in Figure 5(c)). The V oc reaches approximately 24.7 mV at daytime (average solar flux of~0.5 kW m -2 , upper inset of Figure 5(c)) and 9.8 mV at nighttime, and the corresponding I sc values are approximately 134 mA and 52 mA, resulting in maximum output power values of 0.83 mW and 0.13 mW, respectively ( Figure 5(b)). Conclusions In summary, a tandem device consisting of a absorber/radiator layer (PPy), a thermogalvanic cell, and a thermal storage material (Cu foam/PEG1000) was designed to harness and recycle environmental thermal energy for all-day electricity generation. The PPy layer with ultrahigh broadband absorptivity/emissivity exhibits impressive performance in heating at daytime and cooling at nighttime. The reversible phase transition processes of the Cu foam/PEG1000 enable the thermogalvanic cell to recycle residual thermal energy and generate electricity day and night, regardless of the single temporal temperature differential existing in an environment. By the synergistic enhancement of PPy/graphite and Cu foam/PEG1000, the thermogalvanic cell yielded a maximum electrical output power of 0.6 W m -2 at daytime with simulated sunlight and 53 mW m -2 at nighttime. Even at the sunless environment, the thermogalvanic cells also exhibit the ability of continuous electricity generation, which opens a promising path to enhance environmental thermal energy harvesting. In addition, the performance of the device can be further improved using a TGC with high Seebeck coefficient and optimized electrodes [1,3,39,45].
4,995.2
2019-10-31T00:00:00.000
[ "Engineering", "Environmental Science" ]
Spin-orbit splitting of Andreev states revealed by microwave spectroscopy We have performed microwave spectroscopy of Andreev states in superconducting weak links tailored in an InAs-Al (core-full shell) epitaxially-grown nanowire. The spectra present distinctive features, with bundles of four lines crossing when the superconducting phase difference across the weak link is 0 or $\pi.$ We interpret these as arising from zero-field spin-split Andreev states. A simple analytical model, which takes into account the Rashba spin-orbit interaction in a nanowire containing several transverse subbands, explains these features and their evolution with magnetic field. Our results show that the spin degree of freedom is addressable in Josephson junctions, and constitute a first step towards its manipulation. I. INTRODUCTION The Josephson supercurrent that flows through a weak link between two superconductors is a direct and generic manifestation of the coherence of the many-body superconducting state. The link can be a thin insulating barrier, a small piece of normal metal, a constriction, or any other type of coherent conductor, but regardless of its specific nature, the supercurrent is a periodic function of the phase difference δ between the electrodes [1]. However, the exact function is determined by the geometry and material properties of the weak link. A unifying microscopic description of the effect has been achieved in terms of the spectrum of discrete quasiparticle states that form at the weak link: the Andreev bound states (ABS) [2][3][4][5]. The electrodynamics of an arbitrary Josephson weak link in a circuit is not only governed by the phase difference but depends also on the occupation of these states. Spectroscopy experiments on carbon nanotubes [6], atomic contacts [7][8][9], and semiconducting nanowires [10][11][12] have clearly revealed these fermionic states, each of which can be occupied at most by two quasiparticles. The role of spin in these excitations is a topical issue in the rapidly growing fields of hybrid superconducting devices [13][14][15] and of topological superconductivity [16][17][18][19]. It has been predicted that for finite-length weak links, the combination of a phase difference, which breaks time-reversal symmetry, and of spin-orbit coupling, which breaks spin-rotation symmetry, is enough to lift the spin degeneracy, therefore, giving rise to spin-dependent Josephson supercurrents even in the absence of an external magnetic field [20][21][22][23]. Here we report the first observation of transitions between zero-field spin-split ABS. II. ABS AND SPIN-ORBIT INTERACTION Andreev bound states are formed from the coherent Andreev reflections that quasiparticles undergo at both ends of a weak link. Quasiparticles acquire a phase at each of these Andreev reflections and while propagating along the weak link of length L. Therefore, the ABS energies depend on δ, on the transmission probabilities for electrons through the weak link, and on the ratio λ = L/ξ, where ξ is the superconducting coherence length. Assuming ballistic propagation, ξ = ℏv F /Δ is given in terms of the velocity v F of quasiparticles at the Fermi level within the weak link and of the energy gap Δ of the superconducting electrodes.
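The coherence length sets the scale against which the weak-link length is compared. Here is a rough sketch that evaluates ξ = ℏ v F /Δ using the aluminum gap implied later in the paper (2Δ/h = 88 GHz); the Fermi velocities are assumed, illustrative values for an InAs nanowire, not numbers from the text:

```python
HBAR = 1.0546e-34              # reduced Planck constant (J s)
H_PLANCK = 6.626e-34           # Planck constant (J s)
DELTA_J = (88e9 / 2) * H_PLANCK   # superconducting gap from 2*Delta/h = 88 GHz (~180 ueV)

def coherence_length_nm(v_fermi_m_per_s: float, delta_J: float = DELTA_J) -> float:
    """Ballistic superconducting coherence length xi = hbar * v_F / Delta, in nm."""
    return HBAR * v_fermi_m_per_s / delta_J * 1e9

for v_f in (1e5, 2e5, 5e5):    # assumed Fermi velocities (m/s), illustrative only
    print(f"v_F = {v_f:.0e} m/s -> xi = {coherence_length_nm(v_f):.0f} nm")
```

With v F of order 10^5 m/s, ξ is comparable to the few-hundred-nanometer exposed InAs segment described below, which is why the finite-length (L/ξ of order one) regime is the relevant one here.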
In a short junction defined by L ≪ ξ, each conduction channel of the weak link, with transmission probability τ, gives rise to a single spin-degenerate Andreev level at energy E A = Δ√(1 − τ sin²(δ/2)) [3][4][5]. This simple limit has been probed in experiments on aluminum superconducting atomic contacts using three different methods: Josephson spectroscopy [7], switching current spectroscopy [8], and microwave spectroscopy in a circuit QED setup [9]. The spectrum of Andreev states in a weak link with a sizable spin-orbit coupling has already been probed in two experiments on InAs nanowires [11,12]. Both experiments were performed in the limit L ≲ ξ. In Ref. [12], the zero-field spectrum was probed using a circuit QED setup and no effect of spin-orbit interaction was reported. In Ref. [11], where spectra at finite field were obtained by Josephson spectroscopy, spin-orbit interaction enters in the interpretation of the spectra when the Zeeman energy is comparable to the superconducting gap [24]. In the following, we consider a finite-length weak link with Rashba spin-orbit interaction [ Fig. 1(a)] and show that spin-split Andreev states require at least two transverse subbands. We first discuss the case of a purely one-dimensional weak link. As shown by the green lines in Fig. 1(b), spin-orbit interaction splits the dispersion relation (assumed to be parabolic) according to the electron-spin direction [25]. Andreev reflections (AR) at the superconductors couple electrons (full circles) with holes (open circles) of opposite spins and velocities. When the transmission probability across the wire is perfect (τ = 1), Andreev bound states arise when the total accumulated phase along closed paths that involve two AR and the propagation of an electron and a hole in opposite directions [ Fig. 1(c)] is a multiple of 2π [2]. Figure 1(d) shows, in the excitation representation, the energy of the resulting ABS as a function of δ. ABS built with right- (left-) moving electrons are shown with thin solid (dashed) lines in Figs. 1(c) and 1(d). Note that the existence of two ABS at some phases is just a finite-length effect [5] (here, L/ξ = 0.8) and that ABS remain spin degenerate as the spatial phases acquired by the electron and the Andreev-reflected hole are the same for both spin directions. Backscattering in the weak link (τ ≠ 1) due either to impurities or to the spatial variation of the electrostatic potential along the wire couples electrons (as well as holes) of the same spin traveling in opposite directions, leading to avoided crossings at the points indicated by the open blue circles in Fig. 1(d). One obtains in this case two distinct Andreev states (thick solid lines), which remain spin degenerate. This is no longer the case in the presence of a second transverse subband, even if just the lowest one is actually occupied [26][27][28][29]. Figure 1(f) shows how spin-orbit coupling hybridizes the spin-split dispersion relations of the two subbands (around the crossing points of 1↑ with 2↓ and of 1↓ with 2↑) [30,31]. The new dispersion relations become nonparabolic and are characterized by different energy-dependent spin textures [26][27][28][29][30][31]. We focus on a situation in which only the two lowest ones (m 1 and m 2 in the figure) are occupied. Importantly, their associated Fermi velocities are different. When τ = 1, this difference in Fermi velocities leads, as illustrated by Figs.
1(e) and 1(g), to two families of ABS represented by black and red thin lines built from states with different spin textures. As before, backscattering leads to avoided crossings at the points indicated by the blue open circles in Fig. 1(e). The resulting ABS group in manifolds of spin-split states, represented by the thick black lines. In the absence of a magnetic field, the states remain degenerate at δ ¼ 0 and π. Figure 2 shows parity-conserving transitions that can be induced by absorption of a microwave photon at a given phase difference. Red arrows [ Fig. 2(a)] correspond to pair transitions in which the system is initially in the ground state, and a pair of quasiparticles is created either in one manifold or in different ones. Green arrows [ Fig. 2(b)] correspond to single-particle transitions where a trapped quasiparticle [32] already occupying an Andreev state is excited to another one [26,33], which can be in the same or in another ABS manifold. The corresponding transition energies in the absorption spectrum for both the pair and single-particle cases are shown in Fig. 2(c) as a function of the phase difference δ. Pair transitions that create two quasiparticles in the same energy manifold do not carry information on the spin structure. On the contrary, pair and single-particle transitions involving different energy manifolds produce peculiar bundles of four distinct lines all crossing at δ ¼ 0 and δ ¼ π. They are a direct signature of the spin splitting of ABS. Finally, single-particle transitions within a manifold give rise to bundles of two lines. As we discuss below, some of these transitions are accessible in our experiment. Figure 3 shows a spectrum measured on an InAs nanowire weak link between aluminum electrodes. The plot shows at which frequency f 1 microwave photons are absorbed as a function of the phase difference δ across the weak link (see description of the experiment below). This spectrum is very rich, but here we point to two salient features highlighted with color lines on the right-hand side of the figure. The red line corresponds to a pair transition, with extrema at δ ¼ 0 and δ ¼ π. The frequency f 1 ðδ ¼ 0Þ ¼ 26.5 GHz is much smaller than twice the gap of aluminum 2Δ=h ¼ 88 GHz, as expected for a junction longer than the coherence length. To the best of our knowledge, this is the first observation of a discrete Andreev spectrum in the long-junction limit. The observation of the bundle of lines (in green) with crossings at δ ¼ 0 and δ ¼ π that clearly correspond to single-particle transitions shown in Fig. 2(c) is the central result of this work. III. EXPERIMENTAL SETUP The measurements are obtained using the circuit QED setup shown in Fig. 4(d) and performed at approximately 40 mK in a pulse-tube dilution refrigerator. The superconducting weak link is obtained by etching away, over a 370-nm-long section, the 25-nm-thick aluminum shell that fully covers a 140-nm-diameter InAs nanowire [34][35][36] [see Figs. 4(a) and 4(b)]. A side gate allows us to tune the charge carrier density and the electrostatic potential in the nanowire and therefore the Andreev spectra [11]. The weak link is part of an aluminum loop of area S ∼ 10 3 μm 2 , which has a connection to ground to define a reference for the gate voltage [see Fig. 4(c)]. The phase difference δ across the weak link is imposed by a small magnetic field B z ð< 5 μTÞ perpendicular to the sample plane: Two additional coils are used to apply a magnetic field in the plane of the sample. 
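The flux-to-phase conversion behind this biasing scheme can be sketched numerically. The estimate below is illustrative only: it assumes a negligible loop inductance and uses only the loop area S ∼ 10³ μm² and the few-μT field scale quoted above.

```python
import numpy as np

# Sketch of the flux-to-phase conversion used for phase biasing the weak link.
PHI_0 = 2.067833848e-15       # superconducting flux quantum (Wb)
LOOP_AREA_M2 = 1.0e3 * 1e-12  # S ~ 10^3 um^2 converted to m^2

def phase_from_field(b_z_tesla):
    """Phase difference delta = 2*pi*Phi/Phi_0 imposed by a perpendicular field B_z."""
    return 2.0 * np.pi * b_z_tesla * LOOP_AREA_M2 / PHI_0

for b_z in (1e-6, 2e-6, 5e-6):  # fields up to the ~5 uT quoted in the text
    delta = phase_from_field(b_z)
    print(f"B_z = {b_z * 1e6:.0f} uT -> delta = {delta:.1f} rad "
          f"({delta / (2 * np.pi):.2f} flux quanta)")
```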
The loop is inductively coupled to the shorted end of a λ/4 microwave resonator made out of Nb, with resonance frequency f_0 ≈ 3.26 GHz and internal quality factor Q_int ≈ 3 × 10^5. A continuous signal at frequency f_0 is sent through a coplanar transmission line coupled to the resonator (coupling quality factor Q_c ≈ 1.7 × 10^5), and the two quadratures I and Q of the transmitted signal are measured using homodyne detection [see Fig. 4(d)]. Andreev excitations in the weak link are induced by a microwave signal of frequency f_1 applied on the side gate. The corresponding microwave source is chopped at 3.3 kHz, and the response in I and Q is detected using two lock-ins, with an integration time of 0.1 s. This response is expressed in terms of the corresponding frequency shift f − f_0 of the resonator (see the Appendix, Sec. 3). The fact that single-particle transitions are observed (see Fig. 3) means that during part of the measurement time, the Andreev states are occupied by a single quasiparticle. This is in agreement with previous experiments in which the fluctuation rates for the occupation of Andreev states by out-of-equilibrium quasiparticles were found to be in the 10-ms⁻¹ range [9,12,32]. Note that, in contrast to an excitation that couples to the phase difference across the contact through the resonator [9,24,26], exciting through the gate allows us to drive transitions away from δ = π and at frequencies very far detuned from that of the resonator.

FIG. 4. (a) False-color scanning-electron-microscope image of the InAs-Al core-shell nanowire. The Al shell (gray) is removed over 370 nm to form the weak link between the superconducting electrodes. A close-by side electrode (Au, yellow) is used to gate the InAs exposed region (green). (b),(c) The nanowire is connected to Al leads that form a loop. This loop is located close to the shorted end of a coplanar waveguide (CPW) resonator. (d) The CPW resonator is probed by sending through a bus line a continuous microwave tone at its resonant frequency f_0 = 3.26 GHz and demodulating the transmitted signal, yielding quadratures I and Q. Microwaves inducing Andreev transitions are applied through the side gate (frequency f_1) using a bias tee, the dc port being used to apply a dc voltage V_g.

IV. SPECTROSCOPY AT ZERO MAGNETIC FIELD Figure 5(a) presents another spectrum taken at zero magnetic field (apart from the tiny perpendicular field B_z < 5 μT required for the phase biasing of the weak link) at V_g = 0.5 V. In comparison with the spectrum in Fig. 3, pair transitions are hardly visible in Fig. 5. Bundles of lines corresponding to single-particle transitions have crossings at 7.1, 14.0, and 22.4 GHz at δ = 0 and 9, 21.5, and 26.0 GHz at δ = π. Here, as in Fig. 3 (see the Appendix, Sec. 2), replicas of transition lines shifted by f_0 are also visible (bundle of lines near f_1 = 11 GHz and around δ = 0). They correspond to transitions involving the absorption of a photon from the resonator. Remarkably, the sign of the response appears correlated with the curvature of the transition lines. This suggests that the signal is mainly associated with a change in the effective inductance of the nanowire weak link. Additional work is needed to confirm this interpretation. We focus on the bundle of lines between 13 and 23 GHz for which the effect of a magnetic field B is also explored. The green lines in Fig. 5 correspond to fits using Eq. (1), where x_r = 2x_0/L. It should be noticed that Eq.
(1) for λ 1 ¼ λ 2 reduces to the known result for a single quantum channel without spin orbit [5,37]. The fit in Fig. 5(b) corresponds to GHz for the gap of Al). These values can be related to microscopic parameters, in particular to the intensity α of the Rashba spin-orbit interaction entering in the Hamiltonian of the system as H R ¼ −αðk x σ y − k y σ x Þ (with σ x;y Pauli matrices acting in the spin) [26]. Assuming a parabolic transverse confinement potential, an effective wire diameter of W ¼ 140 nm and an effective junction length of L ¼ 370 nm, the values of λ 1;2 are obtained for μ ¼ 422 μeV (measured from the bottom of the band) and α ¼ 38 meV nm, a value consistent with previous estimations [38,39]. However, we stress that this estimation is model dependent: Very similar fits of the data can be obtained using a double-barrier model [with scattering barriers located at the left ðx ¼ −L=2Þ and right ðx ¼ L=2Þ edges of the wire] with λ 1 ¼ 1.1 and λ 2 ¼ 1.9, leading to α ¼ 32 meV nm. For both models, we get only two manifolds of Andreev levels in the spectrum, and only these four single-particle transitions are expected in this frequency window (transitions within a manifold are all below 3.5 GHz). The other observed bundles of transitions are attributed to other conduction channels: Although we considered till now only one occupied transverse subband, the same effect of spin-dependent velocities is found if several subbands cross the Fermi level. A more elaborate model together with a realistic modeling of the bands of the nanowire is required to treat this situation and obtain a quantitative fit of the whole spectra. V. SPIN CHARACTER OF ABS The splitting of the ABS and the associated transitions in the absence of a Zeeman field reveal the difference in the Fermi velocities v 1 and v 2 , arising from the spin-orbit coupling in the multichannel wire. To further confirm that this splitting is indeed a spin effect, we probe the ABS spectra under a finite magnetic field and, in particular, as a function of the orientation of the field with respect to the nanowire axis [26]. Figure 5(c) shows the spectrum in the presence of an in-plane magnetic field with amplitudes B ¼ 0, 2.6, and 4.4 mT applied at an angle of −45°with respect to the wire axis. The symmetry around δ ¼ 0 and δ ¼ π is lost. This is accounted for by an extension of the single-barrier model at finite magnetic field (green lines) and assuming an anisotropic g factor: g ⊥ ¼ 12 and g k ¼ 8 (see below and the Appendix, Sec. 1). The specific effects of a parallel and of a perpendicular magnetic field on the ABS are shown in Fig. 6. When the field is perpendicular to the wire (B⊥x), the ABS spectrum becomes asymmetric (this asymmetry is related to the physics of φ 0 junctions [27]), as observed in Figs. 6(b) and 6(d). The field is directly acting in the quantization direction of the spin-split transverse subbands [gray parabolas in Fig. 1(f)] from which the ABS are constructed, leading to Zeeman shifts of the energies. When the field is along the wire axis Bkx and, thus, perpendicular to the spin quantization direction, it mixes the spin textures and lifts partly the degeneracies at δ ¼ 0 and δ ¼ π (see Fig. 7). The spectrum of ABS is then modified, but it remains symmetric [40] around δ ¼ 0 and π; see Figs. 6(a) and 6(c). Keeping the same parameters as in Fig. 
5, the value of the g factor is taken as a fit parameter for all the data with perpendicular field and for all the data with parallel field, leading to two distinct values: g ⊥ ¼ 12 and g k ¼ 8 (see the Appendix). Green lines show the resulting best fits. VI. CONCLUDING REMARKS The results reported here show that the quasiparticle spin can be a relevant degree of freedom in Josephson weak links, even in the absence of a magnetic field. This work leaves several open questions. Would a more realistic modeling of the nanowire [41][42][43][44] allow for a precise determination of spin-orbit interaction from the measured spectra? We need to understand, along the lines of Ref. [45] e.g., the coupling between the microwave photons and the ABS when the excitation is induced through an electric field modulation, as done here, instead of a phase modulation [26,33,46]. In particular, what are the selection rules? Are transitions between ABS belonging to the same manifold allowed? Can one observe pair transitions leading to states with quasiparticles in different manifolds? What determines the signal amplitude? Independent of the answer to these questions, the observation of spin-resolved transitions between ABS constitutes a first step towards the manipulation of the spin of a single superconducting quasiparticle [20,26]. Would the spin coherence time of a localized quasiparticle be different from that of a propagating one [47]? Finally, we think that the experimental strategy used here could allow the probing of a topological phase with Majorana bound states at larger magnetic fields [33]. ACKNOWLEDGMENTS Technical support from P. Sénat is gratefully acknowledged. We thank M. Devoret, M. Hays, and K. Serniak for sharing their results on a similar experiment and for discussions. We thank A. Reynoso for providing us codes related to his work [27] and for useful discussions. We also acknowledge discussions with Ç. Girit, H. Bouchiat, A. Murani, and our colleagues from the Quantronics group. We thank P. Orfila and S. Delprat for nanofabrication support. This work is supported ANR contract JETS, by the Renatech network, by the Spanish MINECO through The nanowire is described by the Hamiltonian H 3D consisting of kinetic energy, a confining harmonic potential in the y and z directions with a confinement width W (effective diameter of the nanowire), and Rashba spin-orbit coupling with intensity α, where m à is the effective mass, and σ x;y are the Pauli matrices for spin. We consider two spin-full transverse subbands denoted by nσ, with n ¼ 1, 2 and σ ¼ ↑; ↓, arising from the confining potential in the transverse direction [gray parabolas in Fig. 1(f)] under the effect of the Rashba spin-orbit coupling. The energy-dispersion relations of the resulting lowest subbands [green lines labeled m 1 and m 2 in Fig. 1(f)] are [26] where s ¼ −1 corresponds to m 1 and s ¼ þ1 to m 2 , and α=W is the strength of the subband mixing due to the Rashba spin-orbit coupling. In accordance with the estimated nanowire diameter, we take W ∼ 140 nm, which leads to E ⊥ 2 − E ⊥ 1 ∼ 0.68 meV for the subband separation. Particle backscattering within the nanowire is accounted for by either a single deltalike potential barrier located at some arbitrary position x 0 or by potential barriers localized at both ends (x ¼ AEL=2). The linearized Bogoliubov-de Gennes equation around the chemical potential μ is where v j¼1;2 are the Fermi velocities given by and k Fj are the Fermi wave vectors satisfying E s ðk Fj Þ ¼ μ. 
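For reference, a minimal Hamiltonian consistent with the verbal description of H_3D given above, assuming harmonic transverse confinement of frequency ω⊥ set by the width W together with the Rashba term quoted in the main text (a sketch under these assumptions, not necessarily the authors' exact Eq. (A1)), is

$$H_{3D} = \frac{\hbar^2 \mathbf{k}^2}{2 m^*} + \frac{1}{2} m^* \omega_\perp^2 \left(y^2 + z^2\right) - \alpha \left(k_x \sigma_y - k_y \sigma_x\right).$$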
We note that if there is no subband mixing, i.e., η ¼ 0 [gray parabolas in Fig. 1(f)], Eqs. (A2) and (A5) show that indicating clearly that the Fermi velocities are the same. The potential scattering term H b is modeled as where and θ j¼1;2 ¼ arccos½ð−1Þ j ðℏk F j =m à − v j Þ=α characterize the mixing with the higher subbands; i.e., cosðθ j =2Þ and sinðθ j =2Þ determine the weight of the states on the hybridized subbands and therefore their spin texture. The superconducting order parameter ΔðxÞe iδðxÞ in Eq. (A3) is given by Δe −iδ=2 at x < −L=2, Δe iδ=2 at x > L=2, and zero otherwise, where δ is the superconducting phase difference. a. Ballistic regime In the absence of particle backscattering, the phase accumulated in the Andreev reflection processes at SPIN-ORBIT SPLITTING OF ANDREEV STATES … PHYS. REV. X 9, 011010 (2019) x ¼ −L=2 and x ¼ L=2, as illustrated in Fig. 1(g), leads to the following transcendental equation for the energy ϵ ¼ E A =Δ of the ABS as a function of δ: where λ j¼1;2 ¼ LΔ=ðℏv j Þ. For ϵ ≪ 1, there are two sets of solutions given by 8 < : with integers l and l 0 . The ballistic ABS are represented by the thin lines (black and red) in Fig. 1(e). b. Single-barrier model In this case, the effect of the barrier can be taken into account as an additional boundary condition at x ¼ x 0 , where 0 þ is a positive infinitesimal and M ij is the 2 × 2 matrix given by The reflection and transmission coefficients are determined by where v 0 ¼ ℏv 1 v 2 =U 0 , u j ¼ v j =v 0 , u s ¼ ðu 1 þ u 2 Þ=2, and u a ¼ ðu 1 − u 2 Þ=2. From the continuity conditions at x ¼ AEL=2 and Eq. (A9), we find the transcendental equation (1) where τ ¼ jtj 2 . As already noticed in the main text, Eq. (1) leads to split ABS when v 1 ≠ v 2 , except for δ ¼ 0, π where the ABS degeneracy is protected by time-reversal symmetry. c. Double-barrier model In this case, there are two boundary conditions similar to Eq. (A9) at the nanowire-superconductor interfaces, which results in the transcendental equation whereε j ¼ ϵλ j þ ð−1Þ j sδ=2, τ L;R are the transmission probabilities at x ¼∓ L=2, θ ν are the scattering phases acquired at the interfaces (ν ≡ L, R): where d ν and v ν are defined as d in Eq. (A11) replacing U 0 by U ν . Finally, we note φ tot ¼ ðk F1 þ k F2 ÞL − ðθ L þ θ R Þ the total accumulated phase. For the estimations quoted in the main text, we assume two identical barriers, i.e., τ L ¼ τ R ¼ τ. d. Magnetic field effect Information on the ABS spin structure can be inferred from their behavior in the presence of a finite magnetic field. This behavior depends strongly on the orientation of the field with respect to the nanowire axis [26]. We consider a magnetic field lying in the x-y plane. The y component B y (parallel to the spin states of the transverse subbands without RSO) shifts the energy of the subbands depending on the spin states and modifies the Fermi wave vectors as illustrated in Fig. 7(c). They thus satisfy On the other hand, the x component B x mixes oppositespin states, thus, opening a gap at the crossings points as illustrated in Fig. 7(a). We include this effect perturbatively [26]. For both Bkx and B⊥x cases, the resulting ABS and the corresponding transition lines are shown in the bottom row of Fig. 7. e. Fitting strategy The transcendental equations [Eqs. 
(1) and (A12)] for the single-and double-barrier models contain dimensionless parameters with which we fit the experimental data at zero magnetic field: (i) λ 1 , λ 2 , τ, and x r for the single-barrier model, (ii) λ 1 , λ 2 , τ, and φ tot for the double-barrier model. We then deduce the physical parameters α, μ (measured from the bottom of the lowest band), L, and U 0 (or U L=R ) using Eqs. (A2), (A5), and (A11), and assuming that the nanowire diameter is fixed at W ¼ 140 nm. We further set m à ¼ 0.023m e where m e is the bare electron mass. For the experimental data in Fig. 5, the single-barrier model gives , and x r ¼ 0.52, resulting in the microscopic parameters α ¼ 53 meVnm, μ ¼ 255 μeV, U 0 ¼ 92 meV nm, L ¼ 332 nm. Using the doublebarrier model, we get Another possibility is to fix the length of the junction L to the length of the uncovered section of the InAs nanowire, 370 nm, which leads to α ¼ 38 meV nm and μ ¼ 422 μeV for the singlebarrier model (α ¼ 32 meV nm and μ ¼ 580 μeV for the double-barrier model). However, in the single-barrier model, one cannot find values of U 0 leading to the corresponding τ. This is due to the fact that in our simplified model for the scattering matrix, processes involving the higher subbands are neglected, thus, limiting its validity to small values of U 0 . In order to fit the finite magnetic field data, in addition to the parameters determined at zero magnetic field, one needs the g factors in the parallel and perpendicular directions, g k and g ⊥ . We use all the data taken with field in the parallel and in the perpendicular directions and calculate the correlation function between the images of the measured spectra (taking the absolute value of the response f − f 0 ) and theory using various values of g k and g ⊥ . Figure 8 shows the dependence of the correlation functions with g k and g ⊥ . The best agreement is found for g k ¼ 8 and g ⊥ ¼ 12, which are within the range of values reported in the literature [48][49][50][51]. Note that the determination of g k is less accurate, and that overall, g k ¼ 4 gives a similar correlation as g k ¼ 8, but agreement is worse at the largest values of B k where the effect is the strongest. 2. Fit of the data at V g = − 0.89 V Many features of the data taken at V g ¼ −0.89 V (Fig. 3) can be accounted for by the single-barrier model. This is shown in Fig. 9, where we compare the data with the results of theory using λ 1 ¼ 2.81, λ 2 ¼ 4.7, τ ¼ 0.25, and x r ¼ 0.17. The Andreev spectrum obtained with this set FIG. 7. Effect of an in-plane magnetic field on the band structure (top row), the Andreev levels (bottom row, left) and the excitation spectrum (bottom row, right). (b) Reference graphs at zero field, (a) field applied along the wire axis, (c) field applied perpendicularly to the wire axis. The field effect on the band structure is exaggerated for clarity. The model parameters for the Andreev levels and the excitation spectrum are the same as in Fig. 5 and B ¼ 10 mT. of parameters [ Fig. 9(c)] presents three manifolds of spinsplit states leading to three bundles of four lines associated to single-particle transitions between manifolds [green lines in Fig. 9(b)]. They are in good agreement with the transition lines at least partly visible in the data. In addition, the pair transition corresponding to two quasiparticles excited in the lowest manifold gives rise to an even transition which falls in the frequency range of the data and roughly corresponds to a transition visible in the data. 
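The g-factor determination described above amounts to a grid search that maximizes an image correlation between data and model. A schematic version is given below; it is illustrative only, and `measured` and `simulate_spectrum` are placeholders standing in for the experimental map and the single-barrier model of the Appendix.

```python
import numpy as np

def normalized_correlation(measured, simulated):
    """Pearson-like correlation between two spectra sampled on the same (delta, f1) grid."""
    a = measured - measured.mean()
    b = simulated - simulated.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

def best_g_factors(measured, simulate_spectrum, g_values=np.arange(0, 21, 1)):
    """Grid search over (g_parallel, g_perp); returns the pair maximizing the correlation."""
    scores = {(gp, gt): normalized_correlation(measured, simulate_spectrum(gp, gt))
              for gp in g_values for gt in g_values}
    return max(scores, key=scores.get)
```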
Assuming a fixed length L = 370 nm and using the model of Eq. (A1), we deduce the microscopic parameters α = 43.7 meV nm and μ = 102 μeV (measured from the bottom of the band). However, these values should be taken with care, since the linearization of the dispersion relation is not valid for energies close to Δ when μ ≲ Δ.

3. Measurement calibration

The measurement is performed by chopping with a square wave the excitation signal applied on the gate and recording with lock-in detectors the corresponding modulation of the response of the circuit on the two quadratures I and Q. We interpret these modulations as arising from shifts of the resonator frequency. To calibrate this effect, we measure how the dc values of I and Q change for small variations of the measurement frequency f_0 around 3.26 GHz. With all of the measurement chain being taken into account, we find ∂I/∂f_0 = −40.3 μV/Hz and ∂Q/∂f_0 = 34.4 μV/Hz. The signal received by the lock-in measuring the I quadrature is a square wave, so that the response I_LI at the chopping frequency is related to the root-mean-square (I_rms) and peak-to-peak (I_PP) amplitudes at its input by I_LI = (4/π) I_rms = (√2/π) I_PP. The same reasoning applies to the Q quadrature measurement. We combine I_LI and Q_LI into X_LI = −(I_LI/40.3) + (Q_LI/34.4) and, using ∂X/∂f_0 = 2 μV/Hz, obtain the resonator frequency shift f − f_0.

4. Gate dependence of the spectrum

Figure 10 shows two examples of the gate-voltage dependence of the spectrum at phase difference δ = π, with reference spectra as a function of phase. In both spectra, single-particle transitions appear white at δ = π, whereas pair transitions appear black. When V_g is changed, a remarkable feature is that the black and white lines move "out of phase," which can be understood from the effect of V_g on the transmission τ: When τ decreases, the distance between the two lowest manifolds decreases at δ = π, so that the transition energy for single-particle transitions decreases; at the same time, the energy of the lowest manifold increases and so does the transition energy for pair transitions.
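The calibration chain of Appendix Sec. 3 can be condensed into a few lines. This is a sketch using the sensitivities quoted above; the factor-of-2 combined sensitivity is used as stated, and the helper names are illustrative rather than taken from the acquisition software.

```python
import numpy as np

# Sketch of the lock-in calibration: chopped gate drive -> square-wave modulation
# of I and Q -> resonator frequency shift, with sensitivities from the text.
DI_DF0 = -40.3   # uV/Hz, measured dI/df0
DQ_DF0 = 34.4    # uV/Hz, measured dQ/df0

def lockin_rms_from_pp(v_pp):
    """Lock-in reading at the chopping frequency for a square wave of peak-to-peak amplitude v_pp."""
    return (np.sqrt(2.0) / np.pi) * v_pp

def frequency_shift_hz(i_li_uv, q_li_uv):
    """Combine the two quadrature readings (in uV) into the resonator shift f - f0 (in Hz)."""
    x_li = -(i_li_uv / abs(DI_DF0)) + (q_li_uv / DQ_DF0)
    return x_li / 2.0   # divide by the combined sensitivity dX/df0 = 2 quoted in the text
```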
7,285.6
2018-10-05T00:00:00.000
[ "Physics" ]
Assessing the sensitivity and suitability of a range of detectors for SIMT PSQA Abstract Purpose Single‐isocenter multi‐target intracranial stereotactic radiotherapy (SIMT) is an effective treatment for brain metastases with complex treatment plans and delivery optimization necessitating rigorous quality assurance. This work aims to assess five methods for quality assurance of SIMT treatment plans in terms of their suitability and sensitivity to delivery errors. Methods Sun Nuclear ArcCHECK and SRS MapCHECK, GafChromic EBT Radiochromic Film, machine log files, and Varian Portal Dosimetry were all used to measure 15 variations of a single SIMT plan. Variations of the original plan were created with Python. They comprised various degrees of systematic MLC offsets per leaf up to 2 mm, random per‐leaf variations with differing minimum and maximum magnitudes, simulated collimator, and dose miscalibrations (MU scaling). The erroneous plans were re‐imported into Eclipse and plan‐quality degradation was assessed by comparing each plan variation to the original clinical plan in terms of the percentage of clinical goals passing relative to the original plan. Each erroneous plan could be then ranked by the plan‐quality degradation percentage following recalculation in the TPS so that the effects of each variation could be correlated with γ pass rates and detector suitability. Results & conclusions It was found that 2%/1 mm is a good starting point for the ArcCHECK, Portal Dosimetry, and the SRS MapCHECK methods, respectively, and provides clinically relevant error detection sensitivity. Looser dose criteria of 5%/1 mm or 5%/1.5 mm are suitable for film dosimetry and log‐file‐based methods. The statistical methods explored can be expanded to other areas of patient‐specific QA and detector assessment. 7][8] This modern treatment option is more efficient and less invasive compared with historical options of WBRT, surgery, radiosurgery, radiosensitizers, and chemotherapy. The SIMT technique uses automated treatment planning techniques to optimize target coverage and organat-risk (OAR) sparing.Two modes of delivery currently available for SIMT plans are the Varian Hyperarc (HA) technique (Varian Medical Systems, Palo Alto, California, USA), 9,10 which utilizes volumetric modulated arc therapy (VMAT), and the BrainLab Elements Multiple Brain Metastases software (BrainLab, Munich, Germany), which utilizes dynamic conformal arc therapy (DCAT). 11IMT plans are optimized to deliver large and highly conformal dose distributions to multiple small volumes utilizing an idealized treatment system in terms of imaging, localization, and delivery.While SIMT offers excellent local control and acceptable toxicity, it is less clear how sensitive these plans are to sub-optimal machine performance and geometric localization variations.What is displayed to the dosimetrist or clinician in terms of target coverage and acceptable OAR toxicity may not be achievable due to localization and machine performance uncertainties.It is therefore pertinent to conduct robust patient-specific quality assurance (PSQA) measurements for all SIMT plans on the intended treatment machine to assess the deliverability, dosimetry, and localization of the dose deposition. 
A range of quality assurance (QA) tools are reported for use in the QA of stereotactic ablative body radiotherapy (SABR) and SIMT treatment plans.Some commonly available methods and sample publications are listed below: • EBT3 and EBT-XD Radiochromic Film 12 (Ashland Specialty Products, Wilmington, Delaware, USA) • Low and high detector-density ion chamber/diode arrays: ⚬ SRS MapCHECK, 13,14 ArcCHECK, 15 and MapCHECK 2 16 (Sun Nuclear Corporation, Melbourne, Florida, USA).⚬ PTW Octavius, Octavius II, Octavius Detector 1600 SRS 17 PTW Freiburg GmbH, Freiburg, Germany) ⚬ IBA myQA SRS Detector (IBA International, Louvain-La-Neuve, Belgium) • Small-volume ion chambers/diamond detectors (point dose measurements) • Electronic Portal Imaging Device (EPID) based 2D or 3D back projection reconstruction techniques: ⚬ Varían Portal Dosimetry (PD) (Varían Medical Systems, Palo Alto, California, USA) ⚬ Sun Nuclear 3DVH, PerFRACTION ⚬ VIPER 18 Calvary Mater Newcastle Hospital, New South Wales, Australia) • Polymer Gel dosimetry 19 • Machine delivery log files and independent recalculation (Mobius 3D, Varian Medical Systems, Palo Alto, California, USA) 20,21 The tools listed above each have pros and cons, as well as cases for where they are best suited.In this study, we compare EBTXD Radiochromic Film, the SNC Arc-CHECK and SRS MapCHECK, Varian Portal Dosimetry, and TrueBeam log file (trajectory log) analysis to determine each detector/method's ability to detect clinically significant errors in the context of SIMT.We propose that a suitable detector should, at a minimum, be able to detect any deviation in machine performance and/or error induced in the plan that can be shown to have a clinical impact on the plan quality.We hypothesize that appropriate gamma (γ) criteria can be chosen irrespective of the detector, by testing for a criterion set (dose difference/distance-to-agreement) that decreases the γ-passing rate relative to the ground truth linearly in proportion with the severity of the effect. Using this methodology, we compare EBT-XD Radiochromic Film, the SNC ArcCHECK and SRS MapCHECK, Varian PD, and TrueBeam log file (trajectory log) analysis to determine each detector/method's ability to detect clinically significant errors in the context of SIMT.While this work is presented in the context of SIMT, this method is extendable to other techniques and detectors (IMRT, VMAT, SABR, etc.) and is a novel way to determine the optimal γ criteria to use for these devices/methods. Materials: Clinical case A single patient with multiple brain metastases treated with stereotactic radiotherapy using the HyperArc technique was chosen for this retrospective study based on the complexity of the case, and the size and distribution of the 21 individual planning target volumes (PTVs) ranging from 0.4 to 8.1 cc.A 3D rendering of the case is shown in Figure 1.The patient had previously undergone stereotactic radiotherapy as well as WBRT. 
Materials: Detectors and associated equipment The following tools were used in this study: The detectors used, their features, and acquisition class according to AAPM TG-218 are shown in the Appendix (Table A1).Radiochromic film was used in conjunction with the CIRS Multi-Lesion Brain QA phantom (Model 037), and the SRS MapCHECK was used in conjunction with the Luki Phan, which is an in-house 3D-printed dice-shaped phantom.The Varian Portal Dosimetry method is EPID-based and does not require a phantom.Equally, Varian TrueBeam trajectory log files also require no phantom or phantom measurement.The detectors have varying levels of comprehensiveness to which they measure the absolute dose and dose distribution for the gantry, collimator, couch angles, and field size as per the plan and in simulated patient geometry.A summary of the detectors used is provided in the Appendix (Table A1). Methods: Python scripting to introduce errors To generate the erroneous plans with simulated MLC errors, the original plan DICOM file was exported, anonymized, and then modified using a Python script (Version 2.7).The script imports the DICOM file using the Pydicom module (https://pydicom.github.io/)and for each control point in the beam sequence a modification is performed on all the MLC leaves.The modified plan is then saved and can be re-imported into the Eclipse Treatment Planning System (TPS) for comparison.For this method to work, a copy of the original plan without jaw-tracking needed to be created and it was this plan that was modified.The copy of the original plan without jaw-tracking is the ground truth in this work.The list of modifications is shown in Table 1. Methods: Assessing the impact on plan quality To determine the plan quality impact and therefore potential clinical impact, the error-laden plans generated in Section 2.3 were re-imported into the TPS for recalculation.The clinical severity of an error was then determined by ranking the percentage of clinical goals passing relative to the original clinical plan.For example, scaling the monitor units by 1% in an artificially modified plan decreases the percentage of clinical goals passing from 97.5% (original plan) to 95.8% (modified plan), which has an effect, but is small relative to randomly shifting all the MLC leaves by an amount between 0.25 and 0.5 mm,which results in a reduction of 15.8% (97.5-% to 81.7%) to the percentage of clinical goals passing.Using this method, the most appropriate γ criteria for each detector were determined by the best linear model fit to the measurement result as a function of the severity of the error (as determined by the decrease in the percentage of clinical goals passing).Figure 2 shows a mosaic example of the changing isodose structures because of these introduced errors. 
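The per-leaf modification described in the scripting section above can be illustrated with modern pydicom. This is a sketch of one Table 1 scenario (a 2 mm systematic offset applied to every leaf at every control point), not the authors' original Python 2.7 script; the file names are hypothetical and attribute names follow the standard DICOM-RT Plan module.

```python
import pydicom

OFFSET_MM = 2.0  # one of the Table 1 systematic-offset scenarios

plan = pydicom.dcmread("original_plan.dcm")          # anonymized exported RT Plan (hypothetical filename)
for beam in plan.BeamSequence:
    for cp in beam.ControlPointSequence:
        # Not every control point carries MLC positions, hence the default empty sequence.
        for device in getattr(cp, "BeamLimitingDevicePositionSequence", []):
            if device.RTBeamLimitingDeviceType in ("MLCX", "MLCY"):
                # Shift every leaf position by the same systematic amount.
                device.LeafJawPositions = [float(p) + OFFSET_MM for p in device.LeafJawPositions]

plan.save_as("plan_mlc_offset_2mm.dcm")              # re-imported into Eclipse for recalculation
```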
Methods: Measurements All measurements were carried out on three dosimetrically matched TrueBeam linear accelerators that satisfy AAPM TG-142 stereotactic performance requirements.Film measurements were repeated on two linear accelerators (Linac 1 and Linac 2), the ArcCHECK and MapCHECK were carried out on a single accelerator (Linac 1), and the Portal Dosimetry measurements were on the third accelerator (Linac 3).All three TrueBeams are equipped with Millenium MLC systems and Perfect-Pitch 6 degrees-of -freedom couches (6DOF).Periodic QA (daily/monthly/annual) is performed on all machines.Verification plans were created for the original plans for The list is sorted against the percentage of clinical goals passing compared to the original plan, "Brain 0." The decrease in the percentage of clinical goals passing is an indicator of the degradation of plan quality and therefore a measure of the severity of the error.Where modifications refer to "systematic" or "random" offsets, this refers to all MLC leaf positions per control point.Where collimator rotations are mentioned, this refers to a collimator rotation angle increase by the degree amount to all fields in the original plan. ArcCHECK The ArcCHECK (Figure 3) was set up on the True-Beam PerfectPitch treatment couch and positioned at the isocenter using the lasers and alignment markings on the detector cylinder.Orthogonal (anterior and lateral) MV pair fields were used to verify the setup by comparison to the TPS.The standard ArcCHECK Dose Calibration procedure was then performed.All fields for the plans in Table 1 were delivered and integrated for the total dose distribution to be compared with the TPS using SNCPatient software (Sun Nuclear Corporation, Melbourne, Florida, USA).No "calc shift"registration between the delivered and planned dose distributions was performed. Varian portal dosimetry For each plan in Table 1, a PD Verification Plan was created in Eclipse (Figure 4).Each plan was then delivered to the EPID (positioned at 0,0,0 cm) field-by-field using integrated MV images.The PD software was then used to create the composite from the individual fields for comparison to the TPS-predicted image. Radiochromic film EBT-XD in the CIRS Multi-Lesion Brain QA Phantom Model 037 (Figure 5) was aligned to the lasers with a single piece of film at the central 0.0 cm slice.Registration points were marked at known distances for registration and aligned to the lasers, the phantom was reassembled, and a single plan was delivered.This setup was used for all plans, which resulted in 16 film measurements.In routine clinical practice, a film placed at each slice intersecting a PTV in the verification plan is standard.However, managing 22 PTVs and 16 plans this way would demand 352 film measurements, making it impractical for this study.Therefore, because an errorladen plan affects all PTVs in some way, a single-slice analysis was deemed sufficient to determine the overall effect.Films were digitized after 20 h on an Epson 11000XL flatbed scanner (Seiko Epson Corporation, Suwa, Nagano, Japan) creating 48-bit color images with 72 dpi resolution. 
Varian TrueBeam log files

Machine log files produced during portal dosimetry measurements on Linac 3 were retrieved for in-house processing and comparison. Each log file was converted to a fluence map of the differential MU per control point delivered, by explicitly modeling the MLCs and their motion in MATLAB and adding the differential MU to each control point. This same method can be applied to the plan DICOM MLC positions, meaning plan-calculated fluence maps can be compared to log-file-generated maps. Further, log files contain all the information about the planned ("expected") positions of all the mechanical axes, and the actual measured positions ("actual") fed back to the Linac. In this work we compared the planned (TPS, DICOM-generated) fluence maps to the log-file-generated fluence maps (plan vs. log), and the log file expected versus actual (log-file-only method). The two methods produce different results due to the temporal resolution of the data.

SRS MapCHECK

The SRS MapCHECK was used to measure three separate coronal planes of the treatment plan, capturing in total 10 out of 22 PTVs. The SRS MapCHECK was installed in the custom-made "LukiPhan" dice-shaped phantom and aligned to the lasers using the inscribed markings. Cone-beam CT (CBCT) images of the phantom were used to precisely match and position the phantom in congruence with the treatment plan's reference CT. For each measurement, the four fields were delivered, and the phantom was then shifted to the next measurement plane. The reference plan and all the error-introduced plans were delivered to collect data from each plane. The gamma passing rate for measurements utilizing this device was then evaluated for the captured PTVs on each individual plane for the three coronal planes, and the mean value of the pass rates is presented. Figure 6 shows the SRS MapCHECK in Eclipse housed in the LukiPhan with three dose planes capturing multiple PTVs in each plane and the isodose lines of three separate planes overlaid.

FIGURE 6: Three coronal dose plane locations and visual isodoses as depicted in Eclipse. The three planes captured 10 out of 22 PTVs. Every error-laden plan from Table 1 was measured and compared to the TPS dose distribution as the reference.

Methods: Analysis methodology

For all detectors used in this study, γ analysis was performed in absolute dose mode with a threshold of 10% and dose difference/DTA criteria of 1%/1 mm, 2%/2 mm, 3%/3 mm, 5%/1.5 mm, and 5%/1 mm. To determine the ideal γ criteria for this method, a linear model of the form y = β_0 + β_1 X_1 + ϵ was used to estimate a linear fit of the measured γ results versus the percentage of clinical goals passing, which itself is a measure of the severity of the error. For an idealized detector, the passing rate should linearly decrease with the severity of the introduced error, and thus we hypothesized that an appropriate γ criteria set (dose difference/DTA) should exhibit a correlation to this parameter. The model's root mean squared error (RMSe), which estimates the standard deviation of the error distribution, the R-squared and adjusted R-squared (coefficient of determination and adjusted coefficient of determination), and the F-statistic versus a constant model with the p value for the F-test on the model are reported.
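The γ analysis itself was performed in the respective vendor and in-house software. For clarity, a minimal global-gamma sketch of the underlying calculation (brute-force DTA search on a common grid, global dose normalization, low-dose threshold) might look like the following; it is an illustration, not the software used in this study.

```python
import numpy as np

def gamma_pass_rate(reference, measured, spacing_mm, dose_crit=0.02, dta_mm=1.0, threshold=0.10):
    """Percentage of evaluated points with gamma <= 1 (e.g., 2%/1 mm with a 10% threshold)."""
    ny, nx = reference.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    norm_dose = dose_crit * reference.max()         # global dose-difference normalization
    cutoff = threshold * reference.max()            # low-dose threshold on the reference
    search = int(np.ceil(3 * dta_mm / spacing_mm))  # pixels searched around each point
    gammas = []
    for iy in range(ny):
        for ix in range(nx):
            if reference[iy, ix] < cutoff:
                continue
            y0, y1 = max(0, iy - search), min(ny, iy + search + 1)
            x0, x1 = max(0, ix - search), min(nx, ix + search + 1)
            dist2 = ((yy[y0:y1, x0:x1] - iy) ** 2 + (xx[y0:y1, x0:x1] - ix) ** 2) * spacing_mm ** 2
            ddose2 = (measured[y0:y1, x0:x1] - reference[iy, ix]) ** 2
            gamma2 = ddose2 / norm_dose ** 2 + dist2 / dta_mm ** 2
            gammas.append(np.sqrt(gamma2.min()))
    return 100.0 * np.mean(np.asarray(gammas) <= 1.0)
```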
• Root mean squared error-Square root of the mean squared error, which estimates the standard deviation of the error distribution.For example, a low RMSe at a given gamma criteria indicates the detector's decrease in pass rate is tightly correlated to a linear decrease in the percentage of clinical goals passing.• R-squared and Adjusted R-squared are the coefficient of determination and adjusted coefficient of determination, respectively.For example, at 2%/1 mm the ArcCHECK has an R 2 = 0.92 (92%) (Table 2), which indicates that the model fits the data well.At this dose/dta criteria, the detector can detect the introduced errors which have been shown to have an impact on the number of clinical goals passing.At 3%/3 mm, the R 2 = 0.73 (73%) demonstrating that the detector fails to have gamma pass-rates that correlate well to potential clinical impact.• F-statistic versus constant model-Test statistic for the F-test on the regression model, which tests whether the model fits significantly better than a degenerate model consisting of only a constant term.The result is significant if the F statistic is larger because this indicates greater differences among the sample averages.• p value-p value for the F-test on the model.If the p value is low and the F-statistic is large, then the overall results are significant. Results summary Figure 7 presents a boxplot summary of detector results.The ideal γ criteria for each detector aim to pass the ground-truth plan error-free and detect errors proportionate to their severity based on γ pass rates (GPR). A wider interquartile range and overall range in these cases indicate greater error detection sensitivity.A key result of this study is that these results demonstrate that a 3%/3 mm criterion is unsuitable for all detectors except film due to additional uncertainty.For instance, PD with 3%/3 mm detects only severe errors.Error-laden plans had a 100% pass rate, except for Brain 14 and 15, which caused substantial plan degradation due to systematic MLC shifts and randomized leaf offsets.These errors could lead to significant mistreatment, reducing clinical goal achievement to 40.0% and 29.7%, respectively, from an initial 97.5%. Figure 8 shows a summary of the detector results grouped by γ criteria.Figure 8 presents the same results grouped by γ criteria. The results of the linear model are shown in Table 2.The linear model was a fit of the results to the ranked decline of clinical goals passing (see Table 1).Table 2 shows the parameters of the model for each detector and γ-criteria.The best fit of the model was used to determine the optimal γ-criteria which is shaded in the table.According to this model,the optimal criteria for Arc-CHECK were 2%/1 mm and 5%/1 mm, respectively.For the PD method, 2%/1 mm should be used to provide the best error detection.For film, 5%/1.5 or 5%/1 mm should be used.For log files, which compare reconstructed fluence maps from MLC positions and differential control point MU integration, 2%/1 mm or 5%/1 mm provide the best error detection.Finally, for the SRS MapCHECK, 2%/1 mm provided the best error detection in this study. 
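The regression and the reported statistics can be reproduced with standard tools. The sketch below uses scipy with small example vectors: the clinical-goal percentages are values quoted in the text for a few plans, while the paired gamma pass rates are purely hypothetical placeholders.

```python
import numpy as np
from scipy import stats

# Example values: % clinical goals passing quoted in the text for a few plans,
# paired with hypothetical gamma pass rates at one criterion (e.g., 2%/1 mm).
goals_passing = np.array([97.5, 95.8, 81.7, 40.0, 29.7])
gamma_pass = np.array([100.0, 98.0, 88.0, 60.0, 48.0])   # illustrative only

res = stats.linregress(goals_passing, gamma_pass)
predicted = res.intercept + res.slope * goals_passing
rmse = np.sqrt(np.mean((gamma_pass - predicted) ** 2))   # RMSe of the fit
print(f"slope = {res.slope:.2f}, R^2 = {res.rvalue ** 2:.3f}, "
      f"p = {res.pvalue:.3g}, RMSe = {rmse:.2f}")
```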
ArcCHECK

Results for the ArcCHECK measurements and the linear modeling are shown in Figure 9. The original plan passed at 100% for all γ criteria evaluated. The most appropriate criteria were found via the linear model to be 5%/1 mm and 2%/1 mm. Table 3 shows the results for all ArcCHECK plans measured for each criterion; the highlighted rows per detector show the favorable criteria that yield measurement results that are strongly correlated with detecting the error.

Portal dosimetry

Results for the PD measurements and the linear modeling are shown in Figure 10. The original plan passed at 100% for all γ criteria except for 2%/1 mm and 1%/1 mm. The most appropriate criterion was found via the linear model to be 2%/1 mm, even though this criterion reported a failed result for the original plan. The range of γ pass rates for this criterion was 68%. At 2%/1 mm, all plans with errors introduced failed except for Brain 1, where the scaling of the MU by 1% improved the result. Table 4 shows the results for all PD plans measured for criteria of 1%/1 mm, 2%/2 mm, 3%/3 mm, 5%/1.5 mm, and 5%/1 mm.

Radiochromic film

Radiochromic film showed the largest variation between individual measurements and the least correlation to the decrease in clinical goals passing. However, from an error-detection standpoint, it is appropriate to use 5%/1 mm or 5%/1.5 mm. The extra uncertainty in film dosimetry is both a weakness and a strength in this respect. Any error is likely to be detected, but the degree to which the decrease in γ pass rate is correlated to the clinical consequence is not clear. These measurements were repeated twice, once with EBT3 film and once on a separate machine, to confirm these results. No significant changes to the results presented here were found. The film results for this work represent an outlier in terms of the hypothesis, and further work is needed to investigate whether that is due to this case alone, as our clinical experience with the use of film in the context of SIMT is that it is an accurate and reproducible dosimeter when strict protocols are adhered to. We speculate that the resolution and sensitivity of the film measurement method is such that it is more sensitive than other detectors and therefore errors affect the results in unpredictable ways. More measurement cases of different plans and error-laden plans are needed to substantiate this claim. Figure 11 and Table 5 show the results for radiochromic film, assessing a ROI encompassing the lesions.

FIGURE 7: Boxplot distributions for the γ results per criteria for each detector. A larger range of GPR in this work is favorable since the GPR should decrease in proportion to the severity of the error introduced to the delivery. An ideal detector would pass for the original plan (Brain 0) and fail for all other plans (Brain 1-15) in Table 1.

FIGURE 8: Summary of the measurement results grouped by γ-criteria. 5%/1 mm or 2%/1 mm are the most suitable criteria across all detectors, providing the best error detection.
TrueBeam log files The results for the TrueBeam trajectory log file method are shown in Figures 12 and 13, respectively.Tables 6 and 7 show these results in full.There are two appropriate methods for using log files.One method is to compare the actual reconstructed fluence map delivered from the machine to the reconstructed fluence map gen-erated from the treatment plan DICOM (Figure 12) and the second method is to solely use the log file to reconstruct two sets of fluence (expected and actual delivered fluence as recorded by the linac).A fluence map that contains all the planned MLC positions ("expected") can be compared to the measured fluence map generated from the actual MLC positions ("actual") as well as all other mechanical axes information (Figure 13).The two methods differ in the temporal resolution of the reconstruction.Figures 12 and 13, Tables 6 and 7 show that 5%/1 mm for the log files is the optimum criteria across the range analyzed herein.Table 2 also highlights a F I G U R E 9 ArcCHECK results for 1%/1 mm (top), 2%/1 mm, 3%/3 mm, 5%/1.5 mm, and 5%/1 mm (bottom) respectively.Each measurement point corresponds to a planned delivery from Table 1.The measurement points are represented by circles, with a linear model fit to the data along with confidence intervals for the model shown in a solid and dashed red line, respectively.The green dashed line indicates ideal linearity. strong correlation between the γ pass rates and the decline in the percentage of clinical goals passing for both methods. SRS MapCHECK Results for the SRS MapCHECK measurements are shown in Figure 14 and Table 8, respectively.According to these results, the SRS MapCHECK should be used with a γ criterion of 2%/1 mm or tighter.Again, it should be noted that other criteria like 3%/1 mm or 2%/2 mm might be appropriate also, but these have not been evaluated against these plans.This device, along with all others presented in this work should not be used with loose tolerances like 3%/3 mm, which relegates any error detection ability to all but the most serious errors (Brain 14 and Brain 15 plans in this work). DISCUSSION Recommendations of AAPM Task Group No. 218 in determining tolerance limits and methodologies for IMRT-based verification QA.In the last sentence of Section 9 it is recommended that "…efforts should be focused on further improving the correlation between IMRT QA evaluation metrics and underlying planning or delivery errors." 22This work aimed to evaluate five methods for quality assurance of SIMT treatment plans according to the methods' suitability and sensitivity to delivery errors using a novel correlation between γ pass-rates and the clinical plan quality degradation due to the error.We introduced a novel method to determine optimal γ criteria for each method, correlating error severity with its detection based on its impact on clinical plan integrity, as measured by the decrease in clinical goal achievement compared to the original plan.This method F I G U R E 1 0 Results from Varian's PD software.Each figure's title shows the γ criteria used for the analysis.Each data point is a comparison of the composite dose from four fields analyzed in PD for the plans in Table 1.Each measurement of the original and erroneous plan is compared to the original plan's portal dose image prediction composite. 
can be used to establish appropriate gamma criteria by correlating gamma pass rates at a particular dose/dta criterion to clinical plan degradation.Errors introduced into the original plan, along with their effect on clinical goals are given in Table 1.The key findings of this work were that this novel method can be applied to an assessment of any detector for PSQA use, and provides a way to determine the optimal γ criteria for the detector, to maximize the detector's error detection capability.A second key finding was that loose γ criteria for PSQA, for example, 3%/3 mm coupled with the detector choice and its use-case applicability, can result in clinically relevant false positives, where a plan that should fail QA and detect a serious, clini-cally relevant delivery issue, passes the test.This was found across all detectors and methods presented in this work, except for radiochromic film and we recommend that these loose criteria be tightened to maximize error detection.All detectors and methods studied in this work demonstrate that errors can be detected reliably, provided that the appropriate γ criteria are used. For the ArcCHECK, a criterion of 2%/1 mm should be investigated for a range of patient cases experimenting with looser criteria like 2%/2 mm and 3%/2 mm given the resolution of the device, in line with recommendations from AAPM TG-218. 22ArcCHECK measurements of the plans can be complemented by an evaluation of the couch walk-out and IGRT procedures as the device F I G U R E 1 1 Radiochromic film results.Higher uncertainty for this case relative to the other detectors can be seen. is only able to be used without couch rotation.The same applies to the use of PD and log file-based methods, where no information about the spatial accuracy of the dose delivery is obtained in a phantom.Whilst this work provides a starting point for appropriate tolerance selection in the context of SIMT, it is recommended that each facility investigate its own appropriate criteria, as individual clinical cases may necessitate looser or tighter tolerances depending on the site. SIMT cases are among the most complex radiotherapy plans yet there is no clear guidance on which detectors and/or γ criteria should be used when performing PSQA.AAPM TG-218 22 recommends universal tolerance limits where the γ passing rate should be ≥ 95%, with 3%/2 mm and a 10% dose threshold, and universal action limits where the γ passing rate should be ≥ 90%, with 3%/2 mm and a 10% dose threshold.These limits serve as a good starting point for PSQA of IMRT and VMAT treatment plans.With SIMT, tighter tolerances depending on the equipment available should be investigated, such as 2%/1 mm to detect subtle regional errors and to discern if the errors are systematic for a specific treatment site or delivery machine.The reduction to 1 mm distance-to-agreement is also recommended regardless of dose-difference criteria given the tighter margins often employed in SIMT treatment plans. This work echoes the findings of Xia et al. 23 In their work, the authors reported on their experience with applying TG-218 recommendations to a large multicenter clinical SRS and SBRT program for a range of diverse clinical pre-treatment QA systems.Pretreatment QA systems included Delta4 (Scandidos), PD, 1. Each measurement of the original and erroneous plan is compared to the original plan's composite fluence image. and 3%/1 mm for SRS MapCHECK SRS cases could be applied with acceptable action and tolerance limits. 
In agreement with this work, it was shown that stringent criteria (2%/1 mm) could be applied for multiple target SRS using the SRS MapCHECK.James et al. 24 compared commercial quality assurance (QA) devices (EBT-XD film, IBA Matrixx Resolution, SNC ArcCHECK, Varian aS1200 EPID, SNC SRS MapCHECK, and IBA myQA SRS) to film dosimetry for pre-treatment evaluation of stereotactic radiosurgery (SRS), fractionated SRT, and stereotactic body radiation therapy treatment plans.Their work compared gamma pass rates for a set of forty plans as well as two plans containing MLC positioning error scenarios.Their work found that errors in MLC positioning were most reliably detected at 2%/1 mm for high-resolution detectors and that lower-resolution detectors did not consistently detect MLC positioning errors.Our work also confirms their findings with 2%/1 mm being the most appropriate for the SRS MapCHECK and Portal Dosimetry.Our findings differ concerning the ArcCHECK where their findings suggest that this detector, on average, did not correctly identify the changes in the dose distribution when lagging MLC error plans were measured.This could be due to the nature of the error introduced compared with this study and the plan's complexity. There are several limitations to our work and areas where the work can be expanded.This work is based on the results of one patient plan that was subsequently modified and measured on a range of devices.Future work aims to reduce the number of plans in Table 1 and test this method across a wide range of treatment plans and this would overcome one of the shortcomings of this study, where plan variation was not a variable that was studied.Future work internally at our organization aims to use this method to determine treatment plan robustness to these effects across a large patient group. Whilst this work provides recommendations on dose/spatial gamma criteria for these detectors, it is important to understand the limitations of each detector and methodology (see Appendix: Table A1) and to establish center and site-specific tolerances according to TG-218 methodology where possible.It is important to note that although TG-218 does not specifically address the topic of stereotactic radiotherapy, its methodological principles can be applied to the establishment of best-practice gamma criteria and tolerances for each organization and detector.Further, all initial gamma criteria should be tightened/refined where applicable based on data acquired for a range of patient cases over time.In this work,we have demonstrated that all detectors and methods outlined herein can be used to detect clinically relevant errors on a TrueBeam linear accelerator. 
This work also shows the potential usefulness of a combinatorial approach for QA of these cases.For example, rather than processing 20 film measurements, TA B L E 7 Gamma pass rates for the log file recorded actual fluence reconstruction compared to the log file recorded expected fluence.one per PTV, the entire delivery might be captured on an ArcCHECK with no couch rotation, to determine the composite deliverability, and then a single film plane measurement done to account for the shortcomings of the ArcCHECK method and focus in on the agreement in areas of steep-dose gradient, while assessing the couch-walkout and IGRT workflow.This approach coupled with a 3D independent plan recalculation provides a robust way to ensure the planning system and delivery errors do not affect treatment efficacy and combinatorial QA may reduce the risk of adverse events.Though gamma criteria are tightened for ArcCHECK, SRS MapCHECK and Portal dosimetry, the results discussed show an acceptable pass rate of > 95%.Therefore, we suggest that if using clinically, the standard tolerance of > 95% gamma pass rate be considered.In the case of pass rates falling below the 95% threshold, the standard criteria of 3%/1 mm be applied to evaluate 1.Each measurement of the original and erroneous plan is compared to the original plan's dose distribution. Plan the results and could be further confirmed by assessing the log files collected from the QA delivery.However, this would highly depend on the department's practice.Retrospective studies with tighter criteria applied may be a starting point prior to clinical application. CONCLUSION SIMT plans, though optimized to deliver highly conformal dose distribution to multiple volumes with acceptable toxicity, require a safe and efficient method of validation for delivery.As the number of volumes targeted in a single field increase, the complexity and time required for patient specific QA increases.In this work, we aimed to assess five methods for quality assurance of SIMT treatment plans in terms of their suitability and sensitivity to delivery errors and machine miscalibration.We also proposed a novel method for setting appropriate gamma criteria for each device and demonstrated the following: 2%/1 mm is a good starting point for the ArcCHECK, PD, and the SRS MapCHECK methods respectively, and provides clinically relevant error detection sensitivity.Looser gamma criteria of 5%/1 mm or 5%/1.5 mm are suitable for film dosimetry and log-file-based methods.From these starting points, we recommend evaluating SIMT patient-specific QA results against a cohort of representative patients with a range of PTV sizes, quantities, and distances from the isocenter.The tighter criteria for the devices other F I G U R E 1 3D Visualization of the treatment case in Eclipse v16.1.As the distribution was throughout the brain, the plan had 180 • arcs at couch angles of 0 F I G U R E 2 Mosaic showing the degree of plan quality degradation compared to the original plan (a), when the errors from Table 1 are introduced to the plan files and re-imported into Eclipse.(b) shows the dose distribution of a single slice with 0.01-0.1 mm random offsets applied to each MLC leaf per control point.The arrows point to features of the isodose distribution that change relative to the original plan.(c)-(m) show continuous degradation with larger errors introduced from cases: Brain 6−10 and Brain 13−16, respectively.The percentage of clinical goals passing is shown in the bottom right box in each image. 
Figure 3. Eclipse screenshot of the plan transferred to the ArcCHECK.
Figure 4. PD predicted image for the composite dose distribution of the four fields.

phantom and aligned to the lasers using the inscribed markings. Cone-beam CT (CBCT) images of the phantom were used to precisely match and position the phantom in congruence with the treatment plan's reference CT. For each measurement, the four fields were delivered, and the phantom was then shifted to the next measurement plane. The reference plan and all the error-introduced plans were delivered to collect data from

Figure 5. CIRS Multi-Lesion Brain QA phantom at −4.0 cm (anterior of isocenter), showing the location of the measurement slice and the isodose distribution. Six lesions can be seen.
Figure 12. Results from TrueBeam trajectory log files compared to the treatment-plan-generated fluence. Each figure's title shows the γ criteria used for the analysis. Each data point is a comparison of the composite fluence (intensity map) from four fields for the plans in Table 1.
Figure 13. Results from TrueBeam trajectory log files. Each figure's title shows the γ criteria used for the analysis. Each data point is a comparison of the composite fluence (intensity map) from four fields for the plans in Table 1. The actual fluence based on the recorded MLC positions for each plan is compared to the expected fluence from the log file of the original plan.
Figure 14. Results from Sun Nuclear's SRS MapCHECK. Each figure's title shows the γ criteria used for the analysis. Each data point is a comparison of the composite dose from four fields analyzed in SNC Patient for each plan in Table 1.
Table 1. List of erroneous plans generated, with columns for plan/rank number, modification, percentage of clinical goals passing, PTV_TOTAL D98%, PTV_TOTAL D2%, and PTV_TOTAL Dmean [%].
Linear model results comparing a linear fit of the decline in γ pass rates against plan-quality degradation, as measured by the percentage of clinical goals passing following the introduction of errors to the treatment plan.
Table 2. ArcCHECK GPR results for all plans measured. Shaded regions show the optimal criteria as determined by the linear model.
Table 5. Gamma pass rates for SRS MapCHECK measurements.
8,047.2
2024-04-03T00:00:00.000
[ "Medicine", "Physics", "Engineering" ]
Assessing the deviation from the inverse square law for orthovoltage beams with closed-ended applicators

In this report, we quantify the divergence from the inverse square law (ISL) of the beam output as a function of distance (standoff) from closed-ended applicators for a modern clinical orthovoltage unit. The divergence is clinically significant, exceeding 3% at a 1.2 cm distance for 4 × 4 and 10 × 10 cm² closed-ended applicators. For all investigated cases, the measured dose falloff is more rapid than that predicted by the ISL and therefore causes a systematic underdose when the ISL is used for dose calculations at extended SSD. The observed divergence from the ISL for closed-ended applicators can be explained by the end-plate scattering contribution, which is not accounted for in the ISL calculation. The standoff measurements were also compared to predictions from a home-built kV dose computation algorithm, kVDoseCalc. The kVDoseCalc computation predicted a more rapid falloff with distance than observed experimentally. The computation and measurements agree to within 1.1% for standoff distances of 3 cm or less for 4 × 4 cm² and 10 × 10 cm² field sizes. The overall agreement is within 2.3% for all field sizes and standoff distances measured. No significant deviation from the ISL was observed for open-ended applicators for standoff distances up to 10 cm.

PACS numbers: 87.55.-x, 87.55.kh

I. Introduction

Orthovoltage X-ray tubes generally operate at 40-350 kVp accelerating potentials. They are used to treat superficial skin cancers and bone disease, as well as benign tumors seated close to the skin. The characteristics of the percent depth-dose (PDD) curves at these energies are such that the maximum dose (d_max) is at or near the surface, and the PDDs fall off more rapidly with depth than those of megavoltage X-ray beams. These characteristics allow orthovoltage beams to deliver high doses to superficial tumors while sparing underlying healthy tissues beyond the treatment target. The majority of orthovoltage treatments make use of X-ray applicators (commonly called cones). These applicators are used to define the treatment field size by placing them directly on the patient surface, reducing the gap or standoff between the patient skin surface and the applicator. A lead (Pb) cutout can be used to define irregularly shaped fields and provide additional shielding. The standoff is defined as the distance between the patient skin surface and the proximal end of the X-ray applicator. In certain treatment situations the standoff is unavoidable, such as when patient curvature limits the applicator placement. In such circumstances, a standoff correction factor is used to account for the expected inverse square law (ISL) falloff of dose with distance from the applicator end.

Li et al. (1) have reported divergence from the expected ISL for closed-ended applicators, and the AAPM Task Group protocol for kilovoltage X-ray beams (TG-61) recommends that this effect be taken into account. (2) In this study, we have experimentally quantified the dose divergence from the inverse square law with standoff for a clinical Xstrahl-300 orthovoltage unit and compared the results with computations using the kVDoseCalc calculation software (3) in order to explain the sources of discrepancy. The standoff effect on monitor unit (MU) or timer calculations was also investigated.

A. Clinical orthovoltage unit
The clinical unit investigated in this study was an Xstrahl-300 X-ray therapy system (Xstrahl Ltd., Camberley, UK). The Xstrahl-300 unit can operate at potentials in the 40-300 kVp range and was commissioned for clinical beams of 100, 150, and 200 kVp. Details of the clinical beams are shown in Table 1. The effective beam energy in Table 1 is defined as the mono-energetic beam that produces the same half value layer (HVL). In addition to the added filtration shown in Table 1, all beams have an inherent filtration of 4 mm of Be. The X-ray applicators at 30 cm FSD (focus-to-skin distance) are open-ended and circular, with diameters of 1.5, 3, 4, 5, and 10 cm at the nominal 30 cm FSD. At 50 cm FSD, the applicators are closed-ended and have square field size dimensions. The available closed-ended applicators produce field sizes of 4 × 4, 6 × 6, 8 × 8, 10 × 10, 15 × 15, and 20 × 20 cm² at the nominal 50 cm FSD. The end plate of the closed-ended applicators is composed of a 4 mm thick polymethyl methacrylate (PMMA, also known as acrylic) window.

B. In-air measurements

The in-air relative ionization measurements were performed with a Markus plane-parallel ion chamber (PTW Freiburg, Germany; model N23343). The Markus chamber has a 0.055 cc nominal active volume and a 30 μm thick polyethylene entrance window. The 0.87 mm acrylic protective cap provided with the chamber is thick enough to remove any potential contaminating electrons; however, this extra material could act as an additional X-ray scattering source, and therefore the acrylic cap was not used. An acrylic ring was mounted on the chamber to allow a 1 mm offset between the applicator end plate and the detector entrance window, avoiding any potential collisions between the applicator end plate and the detector window. The electrode plate separation for the Markus detector is 2 mm. The effective point of measurement is the center of the detector air cavity (2) (i.e., another 1 mm downstream of the applicator end plate). Therefore, the closest measurement position to the applicator end plate was 2 mm (due to the electrode separation plus the 1 mm offset). No physical offset was used for the open-ended applicators, except that the effective point of measurement was at 1 mm from the applicator due to the detector plate separation. Regarding electron contamination, the continuous slowing down approximation (CSDA) range in water for 100, 150, and 200 keV electrons is approximately 140, 280, and 450 μm, respectively. (4) These electrons have enough kinetic energy to penetrate the 30 μm thick polyethylene window of the Markus chamber; however, these ranges correspond to the maximum X-ray energy. At the effective beam energies (listed in Table 1), the corresponding CSDA ranges for electrons are approximately 20, 40, and 98 μm, respectively. To determine whether there was any significant electron contamination on our clinical unit, experiments were performed by placing a GAFCHROMIC EBT3 film (International Specialty Products, Wayne, NJ) as an electron absorber. EBT3 film has a manufacturer-stated thickness of 250 μm and is relatively tissue-equivalent. The differences between the chamber response with and without the absorber, with the X-ray attenuation taken into account, were used to estimate the electron contamination.
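As a rough illustration of the absorber method described above, a minimal sketch is given below; it is a simplified version under our own assumptions (the function name, variable names, and the simple attenuation correction are hypothetical), not the exact procedure used in this study.

```python
def electron_contamination_fraction(reading_open, reading_with_film, film_transmission):
    """Estimate the relative electron contamination at the chamber position.

    reading_open      : chamber reading with no absorber (photons + contaminant electrons)
    reading_with_film : chamber reading with the ~250 um EBT3 film in place
                        (contaminant electrons stopped, photons slightly attenuated)
    film_transmission : photon transmission of the film (close to 1 for these beams)
    """
    # Correct the filmed reading back to an open-field, photon-only signal,
    # then attribute the remaining difference to contaminant electrons.
    photon_only = reading_with_film / film_transmission
    return (reading_open - photon_only) / reading_open
```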
To determine whether there is any dependence on chamber size in the standoff assessment, measurements were also performed with a CC13 cylindrical ionization chamber (Scanditronix Wellhöfer, Nuremberg, Germany). The CC13 chamber has a 0.13 cc sensitive volume. This detector was chosen because of its similar volume to the 0.12 cc cylindrical ion chamber used by Li et al. (1) The effective point of measurement for this chamber was taken as the center of the active volume; therefore, the closest point of measurement for this chamber was 0.5 cm, due to geometrical limitations of the detector (the detector radius and cable sheath). Since these are relative measurements performed in air, no effects due to chamber composition are expected, because the beam quality in air does not change significantly within the short range of standoff distances used in the present study.

Ionization charges in pC were recorded with a Max 4000 electrometer (Standard Imaging Inc., Middleton, WI). The ion chambers were positioned using a Standard Imaging 1D scanning arm with a positional accuracy of ±0.05 mm. Three measurements were performed at each position; the variation among these three measurements was found to be less than 0.3%. The ISL correction factor, or standoff correction (SOF) factor, is defined as:

$$\mathrm{SOF}(S) = \left(\frac{\mathrm{FSD}_N}{\mathrm{FSD}_N + S}\right)^{2} \qquad (1)$$

where FSD_N is the nominal FSD (30 cm for open-ended and 50 cm for closed-ended applicators), and S is the amount of standoff, or distance from the FSD_N, in cm. The percentage difference between the calculated and measured relative ionization is defined as:

$$\Delta(\%) = \frac{I_{\mathrm{ISL}}(S) - I_{\mathrm{meas}}(S)}{I_{\mathrm{meas}}(S)} \times 100\% \qquad (2)$$

where I_meas(S) is the measured relative ionization normalized to zero standoff:

$$I_{\mathrm{meas}}(S) = \frac{I(S)}{I(0)} \qquad (3)$$

The percentage differences measured represent the dosimetric error that occurs when only the ISL factor is used in the calculation of treatment monitor units.

C. kVDoseCalc dose computations

Relative dose computations were performed using a validated (3,5,6) hybrid kV dose calculation, kVDoseCalc. kVDoseCalc computes the component of the dose deposited by primary photons using a deterministic model and the scatter component deposited by scattered photons using a stochastic, biased Monte Carlo-based technique. (3) kV dose computations were performed to validate the experimental measurements for the closed-ended applicators and to provide a theoretical explanation for the observed phenomenon. The model of the beam geometry consisted of a flat beam whose fluence varied only due to divergence, with a field size defined by the nominal applicator size. The dose was computed in a geometry similar to that of the measurements: air (ρ = 0.00129 g/cm³) was used everywhere except for the 4 mm acrylic end plate. The dose was computed to a small volume of air (~2 mm³) as a function of standoff from the applicator end plate. The lower extremity of the end plate was placed at the nominal FSD; the atomic densities (atoms/cm³) were calculated using the nominal PMMA polymer composition (C₅O₂H₈)ₙ and density (1.18 g/cm³). The spectrum of the 100 kVp beam was characterized by matching the measured HVL (6.26 mm Al) and nominal kVp with spectra generated by the third-party freeware Spektr (7), using the method described by Poirier et al. (5) The highest kVp spectrum that can be generated by Spektr is 140 kVp; for this reason, the 150 and 200 kVp spectra were generated by inputting the nominal inherent 4 mm Be and added filtrations (provided in Table 1) into SpekCalc. (8)
The spectra generated by SpekCalc matched the measured HVL within 0.01 mm Al.

A. Open-ended applicators

The percent difference, as defined by Eq. (2), for the open-ended applicators is shown in Fig. 1. Measurements were performed up to a standoff distance of 10 cm in order to demonstrate the overall trend. For standoffs of up to 2 cm, the deviation between the measured relative dose and the expected value from the ISL is ≤1% for the investigated beams. The largest percentage difference for the studied open-ended applicators, ~2.5%, is observed for the 4 cm diameter applicator at 200 kVp. The electron contamination was found to be less than 0.7% at 200 kVp for the closed-ended applicators and 1.1% for the open-ended applicators, and less than 0.2% for the 150 and 100 kVp beams for both open-ended and closed-ended applicators. The small but systematic underestimation of the standoff effect could be due to the electron contamination produced by this beam, which would be greatest near the applicator end and would fall off much faster than the ISL due to electron attenuation in air. However, this extreme difference is still small and has limited clinical significance, since it is unlikely that the standoff for treatments with 4 cm applicators will exceed a few cm. Measurements were not performed for the smaller applicators of 1.5 and 3 cm diameter because it is highly unlikely that treatments will be performed with significant standoff for these field sizes. In clinical practice, these applicators are small enough to be placed directly on the patient surface for most treatment sites. However, if there are treatment cases in which standoff is necessary for patient comfort, the use of the ISL correction factor is acceptable for standoffs up to 10 cm.

B. Closed-ended applicators

The deviation from the ISL for the closed-ended applicators is shown in Fig. 2. The Canadian Partnership for Quality Radiotherapy (CPQR) guideline for quality control of kilovoltage machines states that the daily output tolerance should be 2%. (9) Therefore, deviations of the machine output from the ISL as a function of standoff that exceeded this tolerance level were deemed clinically significant in this study. Furthermore, the deviations are systematic and can be accounted for with correction factors (see Results section E). For field sizes of 4 × 4 and 10 × 10 cm², the 3% deviation at a standoff of 1.2 cm is clinically significant. This represents the dosimetric error, or underdosing, that would occur if the ISL were the sole correction used in the dosimetry (MU) calculation. For a given kVp, the deviation is field-size dependent for standoff distances of 5 cm and less, and the deviation increases with decreasing field size. For the 10 × 10 and 4 × 4 cm² field sizes there is a slight energy dependence; however, for the 20 × 20 cm² field no apparent energy dependence is observed. For the 4 × 4 cm² field size, the deviation is nearly 5% at a standoff of 2 cm for all investigated beam qualities.

C. Dose computations

A comparison of the measured and computed relative dose falloff in air is shown in Fig.
3 for a 200 kVp beam with a 10 × 10 cm² closed-ended applicator. The measurements and computations were both normalized to the dose at 50.2 cm, the closest position achieved experimentally. Both the computation and the experiment demonstrate a more rapid dropoff of the dose with standoff than the inverse square law. The computation systematically predicts a slightly more rapid dose falloff than the measurements. For the data presented in Fig. 3, the agreement between experiment and computation varies between -0.8% and 0.2%. The computation and experimental measurements agree within 1.1% for standoff distances of 3 cm or less for the 4 × 4 cm² and 10 × 10 cm² field sizes over all energies. The overall agreement is within 2.3% for all field sizes and standoff distances measured in Results section B.

The dose computed by kVDoseCalc can be separated into primary and scattered components. The separate dose components are also shown in Fig. 3 for a 200 kVp beam with a 10 × 10 cm² closed-ended applicator. The total dose to air falls off more rapidly than predicted by the ISL. This can be explained by the X-ray scattering in the 4 mm thick acrylic end plate. The end-plate scattering component contributes approximately 10% of the total dose at the end plate but drops off much more rapidly than the inverse square law. The primary dose component follows the ISL as expected; however, the addition of the rapidly decreasing scatter component results in a total dose that is lower than expected from the ISL applied at the nominal 50 cm FSD.

The relative contribution of the scatter component to the total dose at the end plate as a function of energy and field size (FS) is shown in Table 2. As expected, due to the increased scattering surface area of the end plate, the computation predicts a larger scatter contribution with increasing FS; therefore, the falloff deviation decreases with increasing FS. The normalized scatter component falloff as a function of FS and standoff is shown in Fig. 4. The falloff becomes more pronounced with decreasing field size because of the reduction in the scattering area of the end plate. The computation also predicts a larger scatter component at the end plate with decreasing energy, as shown in Table 2. This is expected, since the Compton scattering probability is inversely proportional to energy. Furthermore, Compton scattering is increasingly forward-directed at higher energies, which means that fewer photons are scattered towards the central axis of the end plate. (10) However, contrary to the effect with field size, the normalized scatter component as a function of energy demonstrates the same proportional dropoff, as shown in Fig. 5; therefore, only a moderate energy dependence is observed, consistent with the experimental results. The dropoff is much more pronounced with decreasing field size than with energy (Fig. 5).

D. Chamber comparison

The Markus chamber measurements were renormalized to 0.5 cm in order to compare with the CC13 measurements, which cannot be made at distances ≤0.5 cm due to the chamber radius.
Over the measurement range of 0.5 cm to 10 cm, the percentage difference between the relative ionization for the CC13 and Markus chambers did not exceed 0.7%. The dropoff deviation from the ISL is, however, not as significant when the normalization is made at 0.5 cm. This is because a significant contribution to the dose at the end plate (i.e., at 0 cm) is due to scattering in the end plate (see Results section C); therefore, when normalizing at a point further downstream, the results begin to follow the ISL more closely, as demonstrated by the computation results in Fig. 3 and discussed above. We can therefore conclude that there is no chamber-related volume averaging or energy dependence in the measurement of the inverse square law divergence for the cylindrical and parallel-plate ion chambers used in this study. However, the Markus chamber is the superior choice for these measurements, since its effective point of measurement can be placed in close proximity to the end plate of the applicator. The deviation observed for the closed-ended applicators in Fig. 2 is larger than that observed by Li et al. (1) This may be due to the thicker acrylic end plate of our applicators (4 mm compared to 3.2 mm (1)) and the use of the Markus chamber, which allows the effective point of measurement of the detector to be placed very close to the applicator end plate. In addition, Evans et al. (11) observed no significant deviation from the ISL for closed-ended applicators on a Gulmay D3300 unit. This, again, could be due to detector selection, since the effective point of measurement in their study was limited by the Farmer detector size to 4.3 mm. Furthermore, this effect is machine- and applicator-composition-specific, as mentioned by Li et al. (1)

E. Clinical implementation of the standoff correction

In order to implement the findings of our study in clinical practice, it is important to consider the clinical impact of the standoff correction, as well as the possible errors that can occur due to its improper implementation. The standoff correction procedure should be efficient and made as simple as possible. The current procedure at our clinic is to calculate the number of monitor units required to deliver the daily tumor dose (DTD) using the following calculation:

$$\mathrm{MU} = \frac{\mathrm{DTD}}{\mathrm{DWA} \times \mathrm{SOF} \times \mathrm{BSF}}$$

where DTD is the prescribed daily tumor dose in cGy, DWA is the dose-to-water in-air factor in cGy/MU, SOF is the standoff or standin correction factor (ISL correction), and BSF is the field-size-dependent backscatter factor.

One method of correcting for the divergence from the inverse square law is to use the effective source position, as proposed for electron calculations. (12,13) This is determined by plotting the square root of the inverse of the measured relative dose falloff as a function of standoff. (13) This approach is shown for 100 kVp in Fig. 6. The inverse of the slope of these straight lines represents the effective source position. For use in clinical practice, the data were fitted only for standoffs up to 2.2 cm.

The calculated effective source position as a function of energy and field size is shown in Table 3. The effective source position depends more on field size than on energy. Ideally, a single value for the effective source position would be used. A value of 33 cm FSD provides reasonable agreement over all field sizes and energies, and is compared to the lookup table method below.

Fig. 6. A plot of the square root of the inverse of the relative dose as a function of standoff at 100 kVp for square closed-ended applicators. The effective source position is determined from the inverse of the slope, as defined in Khan. (13)
Table 3. The effective source position in cm as a function of energy (E) and field size (FS). The averages over either FS or E are also given. The dependence on field size is more significant than the dependence on E.

Another method of implementing the standoff correction in clinical practice is to create a lookup table for the SOF as a function of FS. The lookup table is given in Table 4, which also includes the correction factor for the inverse square law alone. The variation in the standoff correction with field size is small enough that linear interpolation for the other clinical field sizes of 6 × 6, 8 × 8, and 15 × 15 cm² can be performed safely. The agreement between the lookup table and the use of an effective source position of 33 cm is within ±1% for all field sizes investigated for distances up to 2 cm from the end plate; however, the lookup table provides the most accurate correction.

The measurements were performed at 0.2 cm from the end plate; in practice, however, the reduction in output is calculated from the nominal FSD of 50 cm with 0 cm standoff. Applying the inverse square law correction of (50.2/50)² to normalize the measurements to a 0 cm standoff would introduce a 0.8% increase at the normalization point. However, according to kVDoseCalc, the corresponding correction varies from 1.7% to 2.8% for field sizes ranging from 20 × 20 to 4 × 4 cm², respectively. The small 0.8% correction was applied to the correction factors calculated in Table 4 to renormalize the data to zero standoff; based on the computation, we acknowledge that this may be an underestimation of up to 2%.

For extended SSD applications it is known that the percentage depth dose (PDD) increases with SSD; however, the standoff distances of up to 5 cm typically encountered in the clinic will not change the PDD by more than 1%-2%. (1) The measured and computed divergence from the inverse square law is systematic and quantifiable. The statistical uncertainty in the dose computations was ≤1%. Based on the measurement uncertainties (setup and reproducibility), we believe the standoff factors are accurate to within 1%, thus providing an accurate and simple correction method. The standoff correction should be applied for closed-ended applicators in order to reduce systematic dose errors in orthovoltage X-ray treatments. Our standoff findings reemphasize the results of Li et al. (1) Once the standoff effect has been quantified for a given orthovoltage treatment unit, the radiation therapy planner can make an informed decision on the relevance of this parameter in their practice.
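To illustrate how the effective-source-position correction of Eq. (1) compares with the uncorrected ISL, the following is a minimal sketch in Python. The 33 cm effective FSD and the printed standoff values are taken from the discussion above, but the comparison itself is our own illustration; clinical correction values should come from measurement, as in Table 4.

```python
def isl_correction(fsd_nominal, standoff):
    """Inverse-square-law standoff correction, Eq. (1)."""
    return (fsd_nominal / (fsd_nominal + standoff)) ** 2

def effective_source_correction(standoff, effective_fsd=33.0):
    """Same form as Eq. (1), but using the effective source position instead of the nominal FSD."""
    return (effective_fsd / (effective_fsd + standoff)) ** 2

# Compare the two corrections for a closed-ended applicator at the nominal 50 cm FSD.
for s_cm in (0.5, 1.0, 2.0):
    isl = isl_correction(50.0, s_cm)
    eff = effective_source_correction(s_cm)
    error_pct = 100.0 * (isl - eff) / eff  # dosimetric error if only the ISL is used
    print(f"standoff {s_cm:.1f} cm: ISL {isl:.3f}, effective-source {eff:.3f}, error {error_pct:.1f}%")
```

Running this sketch reproduces the order of magnitude quoted in the text: roughly a 2% error at 1 cm standoff and over 4% at 2 cm when only the ISL is used.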
IV. Conclusions

Based on our measurements and computations, we recommend that the divergence from the inverse square law at extended SSD be evaluated for all orthovoltage therapy units. We have determined field-size-dependent factors to account for the dose falloff as a function of distance from closed-ended applicators for the Xstrahl-300 orthovoltage unit. We have also determined that a value of 33 cm as the effective source position for the nominal 50 cm FSD closed-ended applicators provides an acceptable correction for all field sizes and energies within 2 cm of the applicator. Using only the inverse square law to account for a 1 cm gap from a closed-ended applicator would result in a dosimetric error of 2%-3% for field sizes from 20 × 20 to 4 × 4 cm², respectively. At a 2 cm standoff, the dosimetric error would exceed 5% for a 4 × 4 cm² applicator. The deviations are systematic and are deemed clinically significant.

The divergence from the inverse square law for closed-ended applicators is a systematic effect and, if not taken into account, MU calculations will result in underdosing for orthovoltage treatments. Detectors such as parallel-plate ion chambers, which can measure the dose output in close proximity to the applicator, are best suited to measuring this effect. The open-ended applicators follow the inverse square law to within 1% for standoff distances up to 2 cm and to within 2.5% for standoff distances up to 10 cm; therefore, no additional corrections are necessary for open-ended applicators.

Fig. 1. The percentage difference between the measured dose falloff and that expected from the ISL with distance from open-ended circular applicators.
Fig. 2. The percentage difference between the measured relative dose falloff and that expected from the ISL as a function of the distance from closed-ended square applicators.
Fig. 3. Comparison of the measured and computed relative total dose falloff as a function of applicator standoff for a 200 kVp beam with a 10 × 10 cm² closed-ended applicator. A much more rapid falloff is observed for the measured and computed data compared to the falloff predicted by the inverse square law. Also shown are the computed primary and scatter dose components.
Fig. 4. Normalized scatter component as a function of FS and applicator standoff for square closed-ended applicators. The dropoff is much more pronounced with decreasing field size compared to the dropoff as a function of energy shown in Fig. 5.
Fig. 5. Normalized scatter component as a function of energy and applicator standoff for square closed-ended applicators. The dropoff is much less pronounced as a function of energy compared to the FS dependence in Fig. 4.
Table 1. Nominal clinical beam parameters of the Xstrahl-300 unit.
Table 2. Relative scatter contribution (%) to the total dose at the end plate as a function of field size and energy, determined from kVDoseCalc.
Table 4. Standoff correction factors as a function of field size for closed-ended applicators. The data in this table are averages over all three energies. The standard deviation of the average over energy was less than 0.6% at each standoff position. The ISL correction factor at the nominal SSD of 50 cm is shown for comparison.
5,882.8
2014-07-01T00:00:00.000
[ "Medicine", "Engineering", "Physics" ]
VEHICLE OCCLUSION REMOVAL FROM SINGLE AERIAL IMAGES USING GENERATIVE ADVERSARIAL NETWORKS

Removing occluding objects such as vehicles from drivable areas allows precise extraction of road boundaries and related semantic objects such as lane markings, which is crucial for several applications such as generating high-definition maps for autonomous driving. Conventionally, multiple images of the same area taken at different times or from various perspectives are used to remove occlusions and to reconstruct the occluded areas. Nevertheless, these approaches require large amounts of data, which are not always available. Furthermore, they do not work for static occlusions caused by, among others, parked vehicles. In this paper, we address occlusion removal based on single aerial images using generative adversarial networks (GANs), which are able to deal with the mentioned challenges. To this end, we adapt several state-of-the-art GAN-based image inpainting algorithms to reconstruct the missing information. Results indicate that the StructureFlow algorithm outperforms the competitors and that the restorations obtained are robust, with high visual fidelity in real-world applications. Furthermore, due to the lack of annotated aerial vehicle removal datasets, we generate a new dataset for training and validating the algorithms, the Aerial Vehicle Occlusion Removal (AVOR) dataset. To the best of our knowledge, our work is the first to address vehicle removal using deep learning algorithms to enhance maps.

INTRODUCTION

With the rapid evolution of autonomous driving, there has been a rising demand for high-definition (HD) maps in recent years. Occlusion-free aerial images of drivable areas can help generate more precise and complete HD maps by allowing more accurate extraction of crucial features such as road boundaries and lane markings. Automatic occlusion removal is carried out by first detecting and masking undesired occlusions caused by static and dynamic objects such as vehicles, and then reconstructing the missing information in the masked areas, with both of these tasks being non-trivial. Several previous works using classical and learning-based approaches have addressed occlusion removal as an inpainting problem, filling in missing areas with the support of known surrounding areas. While classical methods rely solely on the neighborhoods of the missing areas, learning-based approaches are capable of using features learned from many similar images. This theoretically allows them to restore features that are unrelated to the neighbouring regions.

Among the learning-based approaches, those based on deep learning (DL) have shown promising performance in various image processing and computer vision tasks over the past two decades. (Pathak et al., 2016) were the first to employ a DL-based approach for image inpainting. They proposed a generative adversarial network (GAN) in which an encoder-decoder structure generates the missing image parts.
Later, (Nazeri et al., 2019) proposed the Edge-Connect network, which focuses on restoring missing structural image features using two GANs: one for generating edges and the other for completing the missing areas. The first GAN takes an original image and its corresponding missing-information mask as input, and generates a masked grayscale image and a masked edge map. For the training step, the ground truth edge map is generated by applying the Canny edge detector to the original image. The resulting edges are then passed, together with the masked source image, to the second GAN in order to obtain the final output. In another work, (Ren et al., 2019) proposed the StructureFlow network, which uses Edge-Connect as a backbone for recovering the missing structures in two stages: generating edge-preserved smooth images and refining the uniformity of textures. The inputs to the network include the source image and its corresponding mask, as well as its masked structure map. In contrast to Edge-Connect, StructureFlow attempts to recover a smoothed structure map rather than an edge map for structure retention, resulting in a better reconstruction of structures and textures.

As a more efficient method, (Li et al., 2019) proposed progressive reconstruction of visual structure (PRVS), which performs structure and texture restoration in parallel. The encoder starts the process from the boundaries of the masked area towards its center, while the decoder performs the same operation in the opposite direction. This procedure enhances the coherence between the masked area and the rest of the image, exploiting the fact that the boundaries of the masked area provide valuable information. Results on several benchmark datasets show promising restorations of image contents and edges. Mutual encoder-decoder with feature equalization (MEDFE) is another method that simultaneously recovers structural and textural information (Hongyu Liu, Yang, 2020). This network recovers textural and structural features at the shallowest and deepest layers, respectively, during the encoding process, and adds them through skip connections during the decoding step. In order to improve pixel continuity in the missing regions, (Liu et al., 2019) developed the coherent semantic attention (CSA) network, consisting of a shallow and a deep network. The shallow network provides a coarse prediction of the restored image, while the deep network refines the output using the CSA layer to improve pixel coherence in the missing areas. Results show that CSA can coherently recover missing regions. To deal with restoring large missing regions, (Li et al., 2020) proposed the recurrent feature reasoning (RFR) network, which recursively infers the boundary of the missing regions as a reference, filling the missing parts from the border towards the center. RFR uses a knowledge-consistent attention module that calculates attention scores for each recursion and merges them in order to obtain more consistent outputs. Results demonstrate its efficiency in reconstructing large missing regions.
In real-world scenarios, there is no ground truth for the missing image information, and inpainting methods estimate possible restorations of the missing content; therefore, there is no unique solution to inpainting problems. To deal with this, a more recent family of inpainting methods aims to provide multiple possible restorations of the missing image parts. One such method is the hierarchical vector quantized variational auto-encoder (VQ-VAE) network (Peng et al., 2021), which comprises three modules: a hierarchical encoder and decoder extracting discrete structural and textural features, a diverse structure generator estimating the structure distribution in order to produce multiple possible structural features, and a texture generator that helps maintain the synthesis of textures.

All these methods have been developed for image inpainting and occlusion removal within the computer vision domain. To the best of our knowledge, there is no DL-based method in remote sensing for removing occlusions from single high-resolution aerial images. Moreover, while datasets with ground truth are crucial for training DL-based methods, there is no training dataset available for removing vehicles from aerial images. Since there is usually no ground truth for the occluded image parts, generating such a dataset is very challenging, which limits the development of DL-based methods in this domain.

In order to deal with these limitations and to promote the future development of DL-based methods for vehicle removal from single aerial images, in this paper we introduce the Aerial Vehicle Occlusion Removal (AVOR) dataset, based on an aerial image dataset with annotated vehicles from the German Aerospace Center (DLR), the so-called DLR multi-class vehicle detection and orientation in aerial imagery (DLR-MVDA) dataset (Liu, Mattyus, 2015). We consider only vehicles as occluding objects, without their corresponding shadows. Our occlusion-free dataset is composed of 1,296 images of size 256 × 256 pixels containing no vehicle occlusions. Furthermore, in order to train the DL networks to learn the occlusions, we generate 19,639 realistic vehicle occlusion masks with the same size as the images. We randomly assign the masks to the images and then split the dataset into training, validation, and test sets. In the test set, the numbers of masks and images are equal, with a fixed assignment. For the training and validation sets, however, the number of masks is larger than the number of images, and we perform an on-the-fly assignment during the training and validation phases. Figure 1 shows some example images and occlusion masks from the AVOR dataset. Furthermore, as an additional contribution, we adapt the aforementioned state-of-the-art GAN-based inpainting methods and apply them to our AVOR dataset. As demonstrated in Figure 2, the training and inference procedures of the GAN-based techniques are similar, despite variations in their structural details. We then investigate their performance qualitatively and quantitatively, and discuss their opportunities and limitations for practical use in future applications by the community. According to the presented results, StructureFlow outperforms the other methods and its restorations are robust, with high visual fidelity in real-world applications.
AERIAL VEHICLE OCCLUSION REMOVAL DATASET

In this section, we introduce our Aerial Vehicle Occlusion Removal (AVOR) dataset. We generated AVOR based on DLR-MVDA, an aerial image dataset with annotated vehicles (Liu, Mattyus, 2015), which comprises 20 high-resolution, non-overlapping aerial RGB images with a size of 5616 × 3744 pixels taken during a flight campaign over Munich, Germany, by a helicopter. The images were acquired at an altitude of 1000 m, resulting in a ground sampling distance (GSD) of 13 cm/pixel.

Like many annotated datasets, this dataset lacks ground truth for the occluded regions, which is crucial for training purposes. The general idea is therefore to extract, from the original images, occlusion-free areas of drivable surfaces that are not occluded by vehicles. In order to train the algorithms to learn the occluded image contents, we generate synthetic occlusion masks and assign them to these occlusion-free images. Since the network input is the element-wise multiplication of the source images and their binary masks, the networks treat the masked parts of the original images as if they were occluded by vehicles.

In order to generate the occlusion-free images, we manually crop image subsets of 256 × 256 pixels containing drivable areas not occluded by vehicles from the large aerial images. The patch size is a trade-off between the number of samples and the occurrence of representative contextual features. Since the number of such image patches is limited, we augment them by rotations of 90, 180, and 270 degrees, resulting in 1,296 image patches. We then split the dataset into 1,050 training, 188 test, and 58 validation images. Figure 1 shows a few example images and occlusion masks from the AVOR dataset.

Since the images are not occluded, their corresponding occlusion masks can be randomized. In order to keep the masks as realistic as possible, we randomly crop patches of 256 × 256 pixels from the binary masks of the occluding vehicles of the original dataset, resulting in 19,639 masks. We then split the generated masks into 17,825 training, 188 test, and 1,626 validation masks, where the image patches of each set do not belong to the same source images. The number of masks in the training and validation sets is much larger than the number of image patches. Thus, one occlusion-free image can be matched with multiple masks during training, which partially compensates for the limited number of occlusion-free images, as various occlusion scenarios are presented for each image. At test time, we use a fixed set of 188 randomly selected masks.

The most notable advantage of the dataset is that it contains real-world images of drivable areas without occlusion, and occlusion masks derived from real occluding vehicles. The dataset also has some limitations, such as its relatively small size and the unbalanced distribution of scenes. For example, since parking areas are usually occupied by vehicles, they are rarely represented in our dataset; this can have a negative impact on occlusion removal performance in these regions.

VEHICLE REMOVAL USING INPAINTING METHODS

In this section we report our experiments on vehicle removal using image inpainting methods and evaluate their results. The most significant challenge in vehicle removal is reconstructing the missing information. The GAN-based inpainting methods learn data models by training on relevant datasets and use the learned models to generate the missing features. Despite differences in their structural details, the training and inference procedures of the GAN-based methods are similar, as shown in Figure 2.
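As an illustration of the masking convention described above (element-wise multiplication of the occlusion-free image with a binary mask, with the original image kept as ground truth), the following is a minimal sketch. The function name, array shapes, and the convention that a mask value of 1 marks an occluded pixel are our own assumptions for illustration, not the exact data loaders of the cited implementations.

```python
import numpy as np

def make_training_pair(image, masks, rng):
    """Pair an occlusion-free image with a randomly chosen occlusion mask (on-the-fly assignment).

    image : (256, 256, 3) float array in [0, 1], an occlusion-free patch
    masks : list of (256, 256) binary arrays, where 1 marks an "occluded" (vehicle) pixel
    rng   : numpy random Generator used for the random mask assignment
    """
    mask = masks[rng.integers(len(masks))]
    # Element-wise multiplication removes the "occluded" pixels from the input image;
    # the network is trained to reconstruct them, with the original image as ground truth.
    masked_image = image * (1.0 - mask)[..., None]
    return masked_image, mask, image  # network input, mask, ground truth
```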
In the training step, the networks rely on occlusion-free images and occlusion masks so that their generators learn the characteristics of the occluded areas. For inference, the generators reconstruct the missing information in the occluded areas indicated by the occlusion masks. In Figure 2, G and D denote the generator and discriminator, respectively. I_o is the input occlusion-free image, M is a binary occlusion mask, and I_of is the generated occlusion-free image. The discriminator uses the original occlusion-free image (I_o) as ground truth.

For our experiments, we consider seven GAN-based methods: StructureFlow (Ren et al., 2019), Edge-Connect (Nazeri et al., 2019), PRVS (Li et al., 2019), MEDFE (Hongyu Liu, Yang, 2020), RFR (Li et al., 2020), CSA (Liu et al., 2019), and VQ-VAE (Peng et al., 2021). We use the implementations of the algorithms available on GitHub and keep their parameters and configurations as in the original networks. We train the networks on the AVOR dataset for 300 epochs. For training, we input an occlusion-free image together with a randomly selected occlusion mask from the training set to the network. We evaluate the methods on the test set of the AVOR dataset both qualitatively and quantitatively. To this end, we mask the input occlusion-free images with their corresponding masks and compare the restored images with the input occlusion-free images (ground truth). Furthermore, we apply the trained models to real-world scenarios by inputting images with vehicle occlusions (from DLR-MVDA) together with the binary masks of the vehicles. The networks are expected to replace the vehicles with the image content that they occlude. Since there is no ground truth for the occluded images, we evaluate these results qualitatively.

EVALUATION METRICS

To evaluate the image restoration performance from various perspectives, we employ three metrics commonly used in the image enhancement domain.

Peak Signal-to-Noise Ratio (PSNR) (Davda et al., 2010) is the most commonly used metric for image quality. It characterizes the relationship between the maximum possible signal power and the destructive noise power. Since signals can have wide dynamic ranges, PSNR is expressed on a logarithmic decibel scale ranging from 0 to ∞:

$$\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{\max(I)^{2}}{\mathrm{MSE}}\right)$$

where max(I) is the maximum pixel value of the image and MSE is the mean squared error. The larger the PSNR value, the better the quality of the reconstructed image, as fewer errors are introduced in the output.

Structural Similarity (SSIM) (Davda et al., 2010) is derived from three comparative measures (brightness, contrast, and structure) between the source image x and the reconstructed image y:

$$\mathrm{SSIM}(x, y) = l(x, y)^{\alpha} \cdot c(x, y)^{\beta} \cdot s(x, y)^{\gamma}$$

where α, β, and γ are the weights of the brightness, contrast, and structure terms, respectively. The SSIM value ranges between 0 and 1 and equals 1 only if the two images are identical.

Fréchet Inception Distance (FID) (Heusel et al., 2017) is a widely used metric for measuring the distance between the feature vectors of the original image set x and the recovered image set y:

$$\mathrm{FID}(x, y) = \left\lVert \mu_x - \mu_y \right\rVert^{2} + \mathrm{Tr}\!\left(\Sigma_x + \Sigma_y - 2\left(\Sigma_x \Sigma_y\right)^{1/2}\right)$$

where μ_y and μ_x denote the mean values of the feature vectors of the sets y and x, respectively, Σ_y and Σ_x are the corresponding covariance matrices, and Tr(·) is the trace of the matrix. A smaller FID implies a higher similarity of the generated images to the source, with the FID between two identical image sets being 0.
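The following minimal sketch shows how two of these metrics can be computed; it is our own illustration rather than the evaluation scripts of the cited works, and the function names are assumptions. PSNR follows directly from the formula above, and the Fréchet distance is computed here from precomputed feature means and covariances (obtaining Inception features over whole image sets is usually delegated to a dedicated library); SSIM is most easily obtained from an existing implementation such as skimage.metrics.structural_similarity.

```python
import numpy as np

def psnr(reference, reconstruction, max_value=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_value]."""
    mse = np.mean((reference.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)

def frechet_distance(mu_x, sigma_x, mu_y, sigma_y):
    """Fréchet distance between two Gaussian feature distributions (means and covariances)."""
    from scipy.linalg import sqrtm
    covmean = sqrtm(sigma_x @ sigma_y).real  # discard tiny imaginary parts from numerical error
    return float(np.sum((mu_x - mu_y) ** 2) + np.trace(sigma_x + sigma_y - 2.0 * covmean))
```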
RESULTS AND DISCUSSION

Table 1 shows the quantitative evaluation of the results on the test set of the AVOR dataset based on the SSIM, PSNR, and FID metrics. For SSIM and PSNR, we also present standard deviations in parentheses. In this table, ↑ and ↓ indicate that higher and lower values mean better performance, respectively, and the best result for each metric is reported in bold.

Table 1. Comparing occlusion removal by inpainting algorithms on the occlusion-free dataset.
Method                               SSIM ↑          PSNR ↑         FID ↓
Edge-Connect (Nazeri et al., 2019)   0.983 (0.027)   38.71 (6.77)   15.53
StructureFlow (Ren et al., 2019)     0.990 (0.014)   41.02 (6.51)   7.88
MEDFE (Hongyu Liu, Yang, 2020)       0.989 (0.017)   40.48 (6.99)   9.87
PRVS (Li et al., 2019)               0.988 (0.016)   40.21 (6.68)   21.91
CSA (Liu et al., 2019)               0.985 (0.021)   38.57 (6.18)   12.47
RFR (Li et al., 2020)                0.988 (0.016)   40.29 (7.22)   34.52
VQ-VAE (Peng et al., 2021)           0.984 (0.022)   38.37 (6.91)   13.30

According to the results, StructureFlow (Ren et al., 2019) outperforms the other methods. It yields the best results for SSIM and PSNR, showing that it can better preserve the image structures and the overall pixel values. Moreover, it achieves the best FID, indicating that the pixel value distributions of its resulting images are close to those of the source images. In order to provide a visual indication of the capabilities of each method, we also assess the results qualitatively.

Figure 3 shows an example reconstruction of missing features by the different methods. In this figure, the first image is an occlusion-free image selected from the test set of the AVOR dataset, the second image is the masked image given to the networks, and the remainder are the reconstruction results. Moreover, among the multiple outputs generated by VQ-VAE (Peng et al., 2021), we select and visualize a representative one in Figure 3.

In order to evaluate the applicability of the methods to real-world vehicle removal problems, we apply the top five methods according to the qualitative and quantitative results (see Table 1), namely Edge-Connect, StructureFlow, MEDFE, PRVS, and RFR, to the original images of the DLR-MVDA dataset.

Figure 4 shows the results on two example image patches with diverse occlusion scenarios. In the first example, since the occlusions lie on a homogeneous road surface and do not require much structural reconstruction, all methods restore the missing parts satisfactorily. However, the results obtained with Edge-Connect suffer from texture inconsistencies, indicating its limitations in preserving texture homogeneity. In the second example, the same performance holds for vehicles on the plain road surface. For vehicles occluding the tree shadow textures, StructureFlow and PRVS outperform the other methods in reconstructing the missing tree features.
In order to provide a broader view of vehicle removal in real-world scenarios, Figure 5 presents the results of StructureFlow on a large part of an image from the DLR-MVDA dataset, where StructureFlow can remove most of the vehicles and restore the occluded road information. There are also a few failure cases, especially where the vehicle shadows are large. Since the vehicle masks usually do not include shadows, the models cannot learn how to deal with the significant contrast at the border of the missing areas that does not belong to the road surface. Thus, the missing parts take on the pixel values of the shadowed areas and appear smeared. This shows the limitations caused by the vehicle masks, and the necessity of developing algorithms that learn the vehicles and their relevant features (e.g., shadows) in the training phase, in order to recognize and remove them fully automatically without relying on vehicle masks as prior information.

To investigate the improvements in surface extraction necessary for HD mapping in autonomous driving, we use the labels of the SkyScapes (Azimi et al., 2019) dataset. We keep only the road, parking-place, and entrance-exit (access-way) classes. To remove vehicles from the labels, we propagate the class of neighboring pixels, based on 8-pixel connectivity, into the regions occupied by vehicles. In this propagation, we only allow pixels belonging to the mentioned classes to be propagated. One could have used morphological or contour-based propagation, but we found this approach to be more accurate (a brief sketch of this propagation is given at the end of this section). We compare the predictions of SkyScapesNet (Azimi et al., 2019) with and without vehicles against the generated ground truth. Table 2 and Table 3 show how the performance on the indicated classes increases without vehicles. The results show that after vehicle removal, the mean IoU increased from 60.24% to 63.51%, a roughly 3% increase, indicating the rough portion of the driving areas occupied by vehicles in the predictions. The confusion matrices in Figure 6 provide more insight into the segmentation results, and Figure 7 illustrates the qualitative evaluation results on three sample patches. We expect that by applying the segmentation method to the images with vehicles removed, we can achieve even better performance than what our preliminary experiment indicates.
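The sketch below illustrates the class-restricted, 8-connected label propagation described above. It is our own minimal illustration: the function name, array conventions, and the idea of passing class ids as a set are assumptions, not the SkyScapes label encoding or the exact implementation used in this work.

```python
import numpy as np

def fill_vehicle_pixels(labels, vehicle_mask, allowed_classes):
    """Iteratively propagate 8-connected neighbour labels into vehicle-occupied pixels.

    labels          : (H, W) integer class map
    vehicle_mask    : (H, W) boolean array, True where a vehicle occupies the pixel
    allowed_classes : set of class ids allowed to spread (e.g. road, parking, access-way)
    """
    labels = labels.copy()
    todo = vehicle_mask.copy()
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    while todo.any():
        progress = False
        # Sweep over the remaining vehicle pixels; borders get filled first, so the
        # allowed classes grow inward from the boundary of each vehicle region.
        for y, x in zip(*np.nonzero(todo)):
            for dy, dx in offsets:
                ny, nx = y + dy, x + dx
                if (0 <= ny < labels.shape[0] and 0 <= nx < labels.shape[1]
                        and not todo[ny, nx] and labels[ny, nx] in allowed_classes):
                    labels[y, x] = labels[ny, nx]
                    todo[y, x] = False
                    progress = True
                    break
        if not progress:
            break  # remaining pixels have no allowed neighbours; stop to avoid an infinite loop
    return labels
```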
In order to improve their performance, in addition to developing problem-specific networks, future work should focus on generating larger and more diverse datasets in terms of occluding vehicles, road surface textures, and structures. Moreover, future work should make the vehicle removal algorithms independent of vehicle masks, which at this stage are needed as an extra input to the networks. Using the SkyScapes dataset, we demonstrate that vehicle removal can improve the performance of surface extraction. As a next step, we plan to explore the direct application of vehicle removal to trained networks, using images generated by this approach.

Figure 1. Examples of occlusion-free images and random occlusion masks from the AVOR dataset.
Figure 2. Workflow of GAN-based inpainting algorithms, including the training and test phases.
Figure 3. According to the results, StructureFlow ensures the continuity of the missing structures, although some of its created structures are not similar to the original image. Additionally, all methods fail to properly reconstruct fine structures such as dashed lines; only StructureFlow could partially complete a missing dashed line on the right side of the image, as a continuous line.
Figure 6. Confusion matrices before and after vehicle removal.
Table 2. Surface extraction evaluation for HD mapping using aerial images after vehicle removal. Numbers are in %.
4,949
2023-12-05T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
Software Engineering on Daily Activities and Fuel Consumption Report at PT. Energate Prima Indonesia

Software engineering is a field of science that explores software development techniques based on engineering principles, with the aim of producing software that is valuable, efficient, and effective in accordance with the needs of its users. The problem in the daily activities and fuel consumption reporting process of PT. Energate Prima Indonesia is that it is still carried out conventionally: there is no information system that supports the business processes efficiently, security is lacking, and the reporting process is considered slow by its users. The purpose of this research is to develop the software engineering of the Daily Activities and Fuel Consumption report at PT. Energate Prima Indonesia, in an effort to support the business processes based on user needs. The software process used in this research follows the Spiral model, which is appropriate for building the software system for the Daily Activities and Fuel Consumption Report at PT. Energate Prima Indonesia. The stages of the spiral model used in this study are 1. Planning, 2. Risk Analysis, 3. Engineering, and 4. User Evaluation. The resulting software engineering system for the daily activities and Fuel Consumption Report at PT. Energate Prima Indonesia was evaluated against user needs and then tested using the Black Box method.

INTRODUCTION

The development of software or information technology in a company can help the company's business processes run efficiently and increase the company's productivity, so that companies that adopt technology can compete globally. The development of software engineering, as explained previously, relates requirements to architecture: the software requirements obtained are applied by software architects in order to develop architectures that meet the targeted needs [1]. Software requirements engineering is an important part of the software engineering process flow [2].

The software engineering method is a discipline that covers all aspects of software production, starting from the initial stage of finding information, analyzing all user needs, defining user needs, designing a prototype, and evaluating the system [3]. Meanwhile, according to [4], the software engineering research method follows a systematic and structured sequence of stages.

PT. Energate Prima Indonesia is a company engaged in port services (a coal terminal) located in Penambat village, Penukal Abab Lematang Ilir Regency. In addition, PT. Energate Prima Indonesia also provides access to the port, from Dewa Sebane Village to Perambat Village in Penukal Abab Lematang Ilir Regency, South Sumatra Province; building a new road from the Jetty Harbor to Perambat Village is one of PT. Energate Prima Indonesia's projects. Figure 1 shows a map of PT. Energate Prima Indonesia's new road project.
Figure 1. Map of PT. Energate Prima Indonesia's new road project

According to Merianto, the fuel administrator, there are currently 18 units of heavy equipment, 9 generators, 16 dump trucks, and 6 light vehicles (LV) used in company operations, all of which are involved in the new road construction project and the maintenance of PT. Energate Prima Indonesia's dedicated roads. Reporting of daily activities and fuel usage for the new road construction project and the existing road maintenance activities is therefore needed in order to continuously monitor project activities and fuel oil consumption. The following diagram shows the use of fuel oil at PT. Energate Prima Indonesia in 2017, with a total fuel consumption of 681,092 liters.

So far, PT. Energate Prima Indonesia has recorded daily activities and fuel usage using Microsoft Excel. In the recording process, however, the authors observed that the fuel administrator's limited knowledge of Excel formulas makes the use of Microsoft Excel for recording daily activity and fuel consumption reports less than optimal, and the number of sheets that must be filled in makes the recording process slow. It also often results in errors when entering data, and data storage is not yet reliable because the reports are stored only as files on a computer, so that if the computer is damaged the files are also lost and cannot be accessed unless a soft copy of the report file exists. In addition to these obstacles in recording reports, the slow submission of data from the field, in this case by the foreman (field supervisor), means that reports on daily project activities and fuel oil usage are not produced on time. An application is therefore needed that can facilitate the recording process, make storage more efficient and effective, and make it easy to send data from the field.

Based on the problems above, the authors took the initiative to provide a solution by building the web-based "Software Engineering of Daily Activities and Fuel Consumption Report at PT. Energate Prima Indonesia". The daily activities software was built using the Spiral model, which is well suited to system development that focuses on evaluation and risk analysis [5]. The results are expected to be in accordance with the work program and the required software engineering needs, and the resulting system is evaluated and improved until it reaches the expected point.

System Development Method

The system development method in this study uses the spiral method, which consists of four stages, as shown in Figure 3.

Figure 3. Spiral method

The spiral model is a software process model that combines the iterative nature of the prototyping model with the control and systematic aspects of the linear sequential model, with the addition of a new element, namely risk analysis [6]. This model has four important activities [7]. The stages used in this study are as follows: 1. Planning, 2. Risk Analysis, 3. Engineering, 4. User Evaluation.

Planning

In the process of identifying the goals and requirements of the system, the researchers identified the problem using the following methods: 1. Interviews with related parties to study the business processes that occur. 2. Observation of the business processes in order to obtain a detailed and clear picture of the situation in the field. 3. Literature study, in which the researchers gathered the supporting theories needed.
After the identification is complete, the results are used to determine the objectives, alternatives, and constraints for the development of the Daily Activities and Fuel Consumption Report software at PT. Energate Prima Indonesia.

Risk Analysis The risk analysis stage builds on the data obtained during the planning process and uses the following modeling approaches: (1) flowchart of the running system; (2) flowchart of the proposed system; (3) Data Flow Diagrams (DFD); and (4) Entity Relationship Diagram (ERD). After modeling, a prototype is made, which is then tested and reviewed for shortcomings. Development uses the PHP programming language and a MySQL database [8].

User Evaluation In this stage, development and testing are carried out: the prototype is developed further, deficiencies found in the previous stage are corrected, and the researchers test the system after each round of development and improvement. The engineering results are evaluated against user needs, and Black Box software testing is performed [9].

Proposed Procedure The proposed procedure for modeling the Daily Activities and Fuel Consumption Report system at PT. Energate Prima Indonesia can be seen in Figures 5-8.

Data Flow Diagrams The context diagram, a diagram that gives a high-level picture of the data flow of the system, can be seen in Figure 9; the Data Flow Diagrams of the system can be seen in Figure 10.

Entity Relationship Diagram (ERD) The Entity Relationship Diagram, which contains the entity sets and relation sets of the system, can be seen in Figure 11.

Development The Daily Activities and Fuel Consumption Report software was built with the PHP programming language and the MySQL database. The results of the development are as follows. Main Page Display: this view contains the home menu, users, unit data, daily reports, verification, and logout; the main page for the admin is shown in Figure 12. Verification View: this display is used to check whether the reports that have been entered need to be revised; the daily report menu is shown in Figure 17.

CONCLUSION The Daily Activities and Fuel Consumption Report system at PT. Energate Prima Indonesia was evaluated periodically by users and then tested using the Black Box method, resulting in a complete and effective system that can be implemented.

SUGGESTIONS The Daily Activities and Fuel Consumption Report software at PT. Energate Prima Indonesia could be developed into a real-time system for recording field activity reports using a barcode scanning system.

Figure 2 Fuel Usage Diagram for the 2017 Period
Figure 5 Proposed Flowchart, Admin Section
Figure 8 Proposed Flowchart, Section Manager
Figure 9 Context Diagram
Figure 10 Data Flow Diagrams
Figure 12 Display of the admin main page
Figure 13 Display of the unit data menu
Figure 14 Display of daily reports
Figure 15 Display of the daily report; this view contains a form for inputting fuel usage reports
Figure 16 Display of the daily report menu
Figure 17 Display of the print fuel report
Figure 18 Display of timesheet input
Figure 19 Display of daily reports
Figure 20 Daily report display
2,403.8
2019-12-26T00:00:00.000
[ "Engineering", "Computer Science", "Environmental Science" ]
Stochastic nonlinear dynamics pattern formation and growth models Stochastic evolutionary growth and pattern formation models are treated in a unified way in terms of algorithmic models of nonlinear dynamic systems with feedback built from a standard set of signal processing units. A number of concrete models are described and illustrated by numerous examples of artificially generated patterns that closely imitate a wide variety of patterns found in nature.

Background Problems of pattern formation and growth of forms belong to the most fundamental problems in theoretical biology and other natural sciences [1][2][3][4]. In this paper, we treat these problems from the nonlinear dynamics and system theory perspective. Specifically, we regard pattern formation and growth models as versions of pseudo-random number generators and show that they can be described and generated in terms of nonlinear systems with feedback built from a standard set of signal processing units. We also show that quite simple algorithmic models are capable of generating a wide variety of patterns that closely resemble patterns frequently found in nature, such as dendrite patterns, labyrinth and zebra-skin patterns, papillary patterns, fingerprints, and the like. We believe that this approach facilitates the unification, quantification, and comparison of growth and pattern formation models and secures their efficient computational implementation. The paper is organized as follows. In Section 2, commonly used generators of pseudo-random numbers are described, represented in terms of nonlinear dynamic systems with feedback, and generalized on this basis. In Section 3, it is shown that simple and straightforward modifications of these random number generators give rise to a wide family of stochastic growth models, illustrated by Eden-type models [5][6][7][8] and by several modifications of evolutionary models that originate from Conway's "Game of Life" [8][9][10][11]. Section 4 is devoted to an extension of the approach to the formation of 2-D stochastic patterns commonly called "texture" images. It suggests regular methods for generating texture images and provides a number of concrete examples of texture-generating algorithmic models of different complexity capable, in particular, of imitating quite complex natural textures.

Pseudo-random number generators

Nothing in Nature is random. A thing appears random only through the incompleteness of our knowledge. (B. Spinoza [12])

Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin. (J. von Neumann [12])

In this section, we describe numerical generators of "pseudo-random" numbers that are commonly used in Monte Carlo simulations and show how they can be represented in the form of nonlinear dynamical (evolutionary) systems with feedback, which we then extend, in the sections that follow, to more sophisticated growth and pattern formation models. The first generators of pseudo-random numbers were suggested by John von Neumann at the very beginning of the computer era. Since then, many attempts have been undertaken to improve the "randomness" of the generated numbers, including attempts to introduce hardware random number generators that exploit "random" natural phenomena such as radioactivity or Brownian motion.
Finally, the concept of pseudo-random numbers won overwhelming recognition, and software pseudo-random number generators have become commonly accepted for generating stochastic numbers that seem "random" in particular applications. These generators produce pseudo-random numbers recursively from one initial number (the "seed") by using quite simple computational rules. For instance, Knuth [13] recommends an algorithm that can be described by the following recursive relationship:

$$\xi(t) = \left[\,C_1\,\xi(t-1) + C_2\,\right] \bmod C_3 \qquad (1)$$

Here $\xi(t)$ is the pseudo-random number generated at the $t$-th iteration; $C_1$, $C_2$, and $C_3$ are certain constants; and $[\cdot] \bmod C_3$ is the operation of finding the residual of division of the input value by $C_3$. This commonly used algorithm generates, one by one, pseudo-random numbers with uniform distribution density in the range [0,1] (after normalization by $C_3$). The algorithm can be represented by the schematic diagram shown in Figure 1. Represented in this way, the algorithm is built of the following signal processing units: a multiplication unit, a summation unit, a point-wise nonlinearity unit that implements the operation $[\cdot] \bmod C_3$ (its transfer function is shown in the box in Figure 1), and a one-sample delay unit. The latter is a very important component of the scheme: it makes the scheme recursive or, in other words, evolutionary. This scheme is an example of a very simple nonlinear dynamic (evolutionary) system. It is well known that such systems are potentially prone to cycles and "fixed points", states that, once reached, do not change in the process of iterations (system evolution). A natural requirement on pseudo-random number generators is that they should avoid cycles and fixed points and provide numbers with nearly uniform distribution and without noticeable correlations. In practice this is achieved by a careful selection of the model parameters $C_1$, $C_2$, and $C_3$ [14].

The above scheme can, in a very natural way, be extended to the one presented in Figure 2 (a modification of the pseudo-random number generator with a linear filter in the feedback (a), and examples of an initial image (b) and generated images after one (c) and 10 (d) iterations). The multiplication and summation units of the scheme in Figure 1 are replaced here by a linear filter, a device that computes each output sample by weighted summation of a certain number of input samples, the weights being defined by the filter impulse response (point spread function). In addition, the one-sample delay unit of the scheme in Figure 1 is replaced by a one-frame delay unit, where a frame is a certain group of samples. If signal samples in this scheme are arranged in the form of a 2D array, they can be displayed as an image. Figure 2 illustrates an example of the evolution in such a system of a natural image taken as a "seed". The linear filter in this example is a simple two-dimensional "box" filter with a uniform 3 × 3-sample impulse response. Such a filter computes, for each image sample (pixel), the image local mean over the window of 3 × 3 pixels centered at this sample. The constant $C_3$ in the point-wise nonlinearity was set equal to half of the image's maximal gray level. A minimal sketch of both generators follows below.
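To make the two schemes concrete, here is a minimal Python sketch (an assumption of this edit, not the authors' code) of a scalar Knuth-style generator and of its 2-D extension with a 3 × 3 box filter and a mod-type point-wise nonlinearity in the feedback loop; the constants are illustrative placeholders.

```python
import numpy as np

# Scalar linear-congruential generator: xi(t) = [C1*xi(t-1) + C2] mod C3.
# The constants below are illustrative; in practice they are chosen
# carefully to avoid short cycles and fixed points.
def lcg(seed, n, c1=1103515245, c2=12345, c3=2**31):
    xi = seed
    out = np.empty(n)
    for t in range(n):
        xi = (c1 * xi + c2) % c3        # point-wise nonlinearity [.] mod C3
        out[t] = xi / c3                # normalize to [0, 1)
    return out

# 2-D extension (Figure 2): a 3x3 box filter followed by a mod-type
# nonlinearity, iterated over whole image frames (one-frame delay).
def box_filter_generator(image, n_iter=10):
    img = image.astype(float)
    c3 = img.max() / 2.0                # constant of the nonlinearity
    for _ in range(n_iter):
        # local mean over a 3x3 window (uniform "box" impulse response)
        padded = np.pad(img, 1, mode='wrap')
        img = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
        img = np.mod(img, c3)           # point-wise nonlinearity
    return img

seed_image = np.outer(np.arange(64), np.arange(64)) % 256  # stand-in "seed"
print(lcg(seed=1, n=5))
print(box_filter_generator(seed_image, n_iter=10).std())
```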
In the generated images one can see how the nonlinearity and the feedback destroy, in only a few iterations, all pixel correlations that existed in the initial image and generate a 2D array of numbers with no visual correlations. In what follows, we will use such units, which we call "primary random number generators", as primary units in the stochastic growth and pattern formation models. They will generate inputs to the models and, in addition, they will determine the "clocks" of the model evolution.

Stochastic growth models In this section, we describe several classical numerical stochastic growth models to show that they can be considered as extensions of the above pseudo-random number generators and described in terms of nonlinear dynamic systems composed of standard signal processing units.

Eden-type growth models Stochastic growth models aimed at simulating biological growth have been studied since the very first years when digital computers became available [15]. One of the first models was suggested by M. Eden [5,6]. In Eden's model, growth is simulated as a sequence of random "births" taking place on a rectangular lattice with probability proportional to the number of already "live" cells in the nearest 3 × 3 spatial vicinity of the given cell (the left and right neighbors in the same row, three neighbors in the row above and three in the row below). Eden's model can be mathematically represented by the recursive equation

$$g_t(k,l) = g_{t-1}(k,l) \oplus \left[\big(1 - g_{t-1}(k,l)\big)\cdot \mathrm{2Drandb}\big(P\cdot S_8(k,l)\big)\right] \qquad (2)$$

where $(k,l)$ are pixel coordinates on the lattice, $S_8(k,l)$ is the sum of pixel values over the 8 neighbor points in the 3 × 3 neighborhood of the given pixel, $t$ is the iteration index, 2Drandb(P) is a binary 2D array of pseudo-random numbers that take value one with probability $P$, and ⊕ denotes modulo-2 addition of binary numbers. Figure 3 shows how this growth model can be implemented in a system that is just a slightly modified and extended version of the system of Figure 2. This system contains, as an individual unit, the "primary" pseudo-random number generator of Figure 1, which is now included in a loop with a linear filter, a point-wise nonlinearity, a 2D frame former (a unit that converts sequences of numbers into 2D arrays of numbers), and a one-frame delay unit. The impulse response of the linear filter and the transfer function of the point-wise nonlinearity are shown in the corresponding boxes in Figure 3. This unit also generates a clock signal for the one-frame delay unit, which defines the evolution clock rate. We assume that the "primary" pseudo-random number generator generates real numbers in the range [0,1]. The combination of the "primary" pseudo-random number generator and a point-wise nonlinearity with a threshold transfer function forms the unit 2Drandb(P), which implements the operation of generating, out of the primary pseudo-random numbers, binary zeros and ones with a given probability $P$ of ones. On such an array of binary numbers, the linear filter with the impulse response shown in Figure 3 computes the number of ones in the 3 × 3 neighborhood (the 8-neighbor sum $S_8$) of each pixel, thus defining the threshold level of the point-wise nonlinearity; a minimal sketch of this growth model follows below.
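The following Python sketch follows the recursion above (grid size, birth probability and iteration count are illustrative choices of this edit, not values from the paper):

```python
import numpy as np

def eden_growth(size=128, p=0.05, n_iter=400, rng=np.random.default_rng(0)):
    """Eden-type growth: a dead cell turns alive with probability
    proportional to the number of live cells among its 8 neighbors."""
    g = np.zeros((size, size), dtype=np.uint8)
    g[size // 2, size // 2] = 1                      # single live seed cell
    for _ in range(n_iter):
        padded = np.pad(g, 1)
        # 8-neighbor sum S8 (3x3 box sum minus the center cell)
        s8 = sum(padded[i:i + size, j:j + size]
                 for i in range(3) for j in range(3)) - g
        # births: dead cells only, with probability p * S8
        births = (rng.random(g.shape) < p * s8) & (g == 0)
        g = g ^ births                               # modulo-2 addition
    return g

print(eden_growth().sum(), "live cells")
```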
Clearly, this simple model describes unlimited growth. One can, however, easily modify it to simulate the drain of "sources of food" by measuring the size of the growing formation and introducing a corresponding saturation into the probability of "birth", as shown in the schematic diagram of Figure 4. In this scheme, the 2Drandb(P) unit of Figure 3a is preceded by a point-wise nonlinearity that implements the saturation (Figure 3 Schematic diagram of Eden's model; the table in the box "Linear filter" presents the linear filter impulse response, and the graph in the box "Point-wise nonlinearity" shows the nonlinearity transfer function). In this modified model the probability of "birth" is attenuated by a saturating function of $S_{glob} = \sum_{(k,l)} g_{t-1}(k,l)$, a "global" sum over the entire field of growth that defines the size of the formation at the $(t-1)$-th iteration (evolution) step. If saturation is applied to all birth probabilities except the probability of "birth" from only one neighboring live cell, one arrives at a modification of the model which begins to grow dendrites after the formation (statistically) reaches a certain size. Figures 5a and 5b illustrate the work of these models. Images are displayed there in colors that correspond to the "age" of each pixel, from red to blue (the number of evolution steps since its birth). Other modifications of the model, aimed, for instance, at imitating the dependence of growth on the "age" of cells, are also more or less straightforward.

Conway's "Game of Life" and its modifications A famous mathematical model known as Conway's "Game of Life" [8] represents yet another type of growth model, in which cells on a rectangular lattice (raster) can give "birth" or "die out" depending on the number of "alive" and empty ("dead") cells in their nearest spatial neighborhood. The rules of the original "Game of Life" are very simple: (i) if an empty cell has exactly 3 "alive" neighbor cells in its 3 × 3 neighborhood on the rectangular lattice, a birth takes place in this cell on the next step of the evolution; (ii) if an "alive" cell has fewer than 2 or more than 3 "alive" cells in its neighborhood, it dies on the next step; (iii) otherwise nothing happens. These rules can be formally described by the equation

$$a_t(k,l) = \delta\big(S_8(k,l) - 3\big) + a_{t-1}(k,l)\,\delta\big(S_8(k,l) - 2\big) \qquad (4)$$

where "alive" and "empty" cells are represented by ones and zeros, respectively, δ(·) is the Kronecker delta (δ(0) = 1, δ(x ≠ 0) = 0), $S_8(k,l)$ is the sum of the values in the 8-neighborhood of the $(k,l)$-th cell on the rectangular lattice, and $t$ is the iteration number. In the original model [8], a deterministic initial distribution of zeros and ones in the field was assumed. By introducing a "random" initial distribution of "alive" and empty cells, the model can be made stochastic [9,10]. The corresponding schematic diagram of this model is shown in Figure 6. As one can see, this diagram contains essentially the same units as the Eden model, but here they are arranged in 2 parallel branches (one for "births" and one for "deaths"), and the 2Drandb(P) generator of the Eden model is placed at the input of the model and is used only for generating the initial "random" distribution of 1's and 0's for "alive" and empty cells. The evolution clock rate of the model is determined by the one-frame delay unit.
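A compact numpy sketch of one evolution step following Eq. 4, with a stochastic initial field (a dead, all-zero border is assumed here for simplicity; the paper also discusses pseudo-random boundary conditions):

```python
import numpy as np

def life_step(a):
    """One Game of Life step: a_t = delta(S8 - 3) + a_{t-1} * delta(S8 - 2)."""
    size = a.shape[0]
    padded = np.pad(a, 1)                       # dead (zero) boundary assumed
    s8 = sum(padded[i:i + size, j:j + size]
             for i in range(3) for j in range(3)) - a
    return ((s8 == 3) | ((a == 1) & (s8 == 2))).astype(np.uint8)

rng = np.random.default_rng(1)
a = (rng.random((128, 128)) < 0.3).astype(np.uint8)  # stochastic initial field
for _ in range(200):
    a = life_step(a)
print(a.sum(), "live cells after 200 steps")
```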
It is well known that the model generates several types of formations: stable formations that, once they appear, remain unchanged unless destroyed by other formations; growing crystal-like formations that grow until their fragments form stable formations or die out; cyclic formations that repeat themselves with a certain period in the course of iterations; and "moving" formations ("gliders") that also feature iteration-wise cycles. The boundary conditions of the model are important for its evolution. Under pseudo-random boundary conditions, when pseudo-random binary numbers are permanently generated at the borders of the field, the model generates patterns that do not converge to fixed (stable) ones, though they always contain a certain number of formations that "live" for a considerably large number of iterations (evolution steps). Such pattern evolution is illustrated in Figure 7. One can, for instance, see in these images "randomly" located stable formations such as 2 × 2-pixel square blocks, hexagonal formations called beehives, formations that grow like crystals, and moving formations, or "gliders" (marked in the figures by black boxes), which move across the lattice with a period of 4 evolution steps.

An important parameter of the model is the direction of the spatial interaction. It is defined by the linear filter impulse response. In the original Conway model, the spatial interaction is almost isotropic: all 8 neighbors of a cell play the same role in defining the next state of the cell on each iteration step. In the model of Figure 6, this is reflected in the linear filter's isotropic impulse response, equal to 1 for all 8 neighbor pixels. In general, the filter impulse response need not be isotropic. In particular, it may define only a one-dimensional interaction (only the left and right neighbors of each cell affect its next state), thus producing one-dimensional models. An interesting special case of such a 1-D model is the one described by the equation

$$a_t(k) = \delta\big(S_2(k) - 1\big),$$

where $S_2$ is the sum over the 2 neighbor cells of the $k$-th cell (one from the left and one from the right). Figure 8a shows, row by row, an example of the evolutionary behavior of such a one-dimensional model (a minimal sketch of this 1-D model is given a few paragraphs below). It is interesting to observe that the patterns which appear in the process of the evolution are identical to the so-called Sierpinski Gasket [16]. As shown in Figure 8b, they are also reminiscent of the patterns that some sea shells develop during their lives (see, for instance, [17,18]).

One can further modify the canonical Conway model by introducing stochastic "death" and "birth" events, in which each "death" prescribed by the rules actually occurs only with probability $P_d$ and each prescribed "birth" only with probability $P_b$: here 2Drandb($P_d$) and 2Drandb($P_b$) are the same binary pseudo-random number generators as in the Eden model (Eq. 2). They produce "ones" with probabilities $P_d$ (probability of "death") and $P_b$ (probability of "birth"), respectively. Note that in the original, non-stochastic Conway model, $P_d = P_b = 1$. If $P_b < 1$, the evolutionary behavior of this model changes very substantially. The model begins to produce labyrinth-like formations with irregular dislocations whose positions depend on the realization of the initial primary pattern. While the "body" of the patterns stabilizes after a few iterations, their periphery continues growing independently until the pattern fills the entire lattice. Depending on the probability of "ones" in the initial pattern, it may happen that several such formations arise and grow until they merge into one larger labyrinth-like formation. An example of such an evolution is shown in Figure 9.
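The 1-D model above is equivalent to an exclusive-or of the two neighbors (exactly one of them alive). A minimal sketch, started from a single seed cell, reproduces the Sierpinski-gasket history (width, step count and the periodic boundary are illustrative choices of this edit):

```python
import numpy as np

def one_d_model(width=64, steps=32):
    """1-D model a_t(k) = delta(S2 - 1): a cell is alive on the next step
    iff exactly one of its two neighbors is alive (XOR of the neighbors)."""
    a = np.zeros(width, dtype=np.uint8)
    a[width // 2] = 1                          # single seed cell
    history = [a.copy()]
    for _ in range(steps - 1):
        s2 = np.roll(a, 1) + np.roll(a, -1)    # sum over the two neighbors
        a = (s2 == 1).astype(np.uint8)         # periodic boundary via roll
        history.append(a.copy())
    return np.array(history)                   # rows = evolution steps

rows = one_d_model()
print("\n".join("".join(".#"[v] for v in row) for row in rows[:8]))
```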
These labyrinth-like patterns can frequently be found among natural patterns, such as patterns of magnetic domains, stripe patterns on zebra skin, labyrinth-like patterns in fingerprints, and similar formations (Figure 10 Natural "labyrinth" and "zebra skin" patterns: a) an image of a magnetic domain structure (adopted from [21]); b) a fragment of zebra skin; c) a fingerprint). (Figure 9 Examples of the modified Conway model evolution with $P_d = 0.25$ and $P_b = 1$: a) initial pattern; b), c), d) evolution results after the 50th, 75th and 200th iteration steps, correspondingly, which form "labyrinth" or "zebra skin" patterns.)

One can further generalize the Conway model in different ways. An interesting option is the one in which the Kronecker delta function δ(·), which describes the logical operation in Eq. 4, is replaced by a "fuzzy delta", a non-monotonic unimodal function ∆(·) [9][10][11], where $L_1$ and $L_2$ are outputs of linear filters that replace the summations over 8 neighbors in the model of Eq. 4, and $C_1$ and $C_2$ are constants that replace the thresholds 2 and 3 in the model of Eq. 4. In this modification, the states of the cells are not binary; they are modeled by real numbers that take arbitrary values in the range [0,1]. Experiments reveal a very rich evolutionary pattern formation capability of this model. Depending on the spread of the "fuzzy delta" and the constants $C_1$ and $C_2$, the following three major types of evolutionary behavior can be observed: "stable chaos", "ordering of chaos" and "re-emerging of chaos". In the "stable chaos" mode, initial chaotic patterns produced by the primary 2-D random number generator gradually evolve into visually correlated patterns that continue to look similar, though individual cell values keep changing with the iterations. In the "ordering of chaos" mode, initial chaotic patterns degenerate, in the course of iterations, into spatial star-constellation-like or labyrinth-like patterns that remain stable spatially but may exhibit "temporal" (iteration-wise) cycles. Obviously, these are the model's "fixed points".

The most complex and varied behavior is of the "re-emerging of chaos" type. Its basic feature is a rapid degeneration of the initial pseudo-random pattern into a uniform field (a trivial fixed point of the model) or into "star constellations". After that, a new chaotic pattern emerges: through crystal-like formations growing from the constellations left over from the initial pattern, through spatial waves from the borders when they are kept random, or through the appearance of different types of "gliders" that move across the field and collide, producing clouds of new "particles". These emerging formations gradually fill the field with visually correlated patterns that look similar to those characteristic of the "stable chaos" mode. Examples of the evolutionary behavior of such a model are shown in Figure 11 (stable "star constellation" patterns (a, b), "clouds" (c), and a labyrinth-like pattern (d); cell values in the images vary from 0 to 255 and are coded in color as represented by the color bar). As one can see, typical examples of patterns generated by the model are labyrinth patterns (Figure 11d) and papillary patterns (Figure 11c), which are reminiscent of patterns frequently found in cytology, such as the one shown in Figure 12 (adopted from [23]).
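The exact form of Eq. 7 is not reproduced above, so the following sketch only illustrates one plausible reading of the "fuzzy delta" generalization (an assumption of this edit, not the authors' formula): the two Kronecker deltas of Eq. 4 are replaced by Gaussian bumps around the thresholds $C_1$, $C_2$, the 8-neighbor sums by linear filter outputs $L_1$, $L_2$ (here taken identical and isotropic), and the result is clipped to [0,1].

```python
import numpy as np

def fuzzy_delta(x, sigma=0.5):
    """A non-monotonic unimodal 'fuzzy delta': a Gaussian bump at 0."""
    return np.exp(-x**2 / (2 * sigma**2))

def fuzzy_life_step(a, c1=3.0, c2=2.0, sigma=0.5):
    # One plausible fuzzy analogue of a_t = delta(S8-3) + a*delta(S8-2);
    # the precise form of the published Eq. 7 may differ.
    size = a.shape[0]
    padded = np.pad(a, 1, mode='wrap')
    l = sum(padded[i:i + size, j:j + size]
            for i in range(3) for j in range(3)) - a   # isotropic L1 = L2
    a_next = fuzzy_delta(l - c1, sigma) + a * fuzzy_delta(l - c2, sigma)
    return np.clip(a_next, 0.0, 1.0)                   # real states in [0,1]

rng = np.random.default_rng(2)
a = rng.random((128, 128))        # primary pseudo-random initial pattern
for _ in range(100):
    a = fuzzy_life_step(a)
print("mean state:", a.mean())
```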
An illustrative video showing moving gliders generated by the model can be found on L. Yaroslavsky's home page [22].

2-D pattern formation: texture images The growth models described above can be regarded as special cases of a general model described by the schematic diagram shown in Figure 13 (Algorithmic model of texture images: a primary 2-D array of pseudo-random numbers feeds a transformation system that produces the output pattern). Such a representation assumes that output patterns are generated by means of a transformation of primary 2D arrays of pseudo-random numbers in a certain signal processing system. Different systems produce patterns of different classes. Patterns generated by the same system from different realizations of the primary array of pseudo-random numbers are different realizations of patterns of the same class, defined by the transformation system. In order to make such a representation constructive, we will assume that the transformation systems are built from a set of certain standard (elementary) signal processing units. The parameters of these units and the transformation system structure form the set of parameters that define a stochastic pattern of a certain class. The specific selection of the set of structural signal processing units is governed by considerations of the convenience of their parameterization and by their computational complexity. It is only natural to use units that form the basic and computationally efficient instrumentation of digital signal and image processing (see, for instance, [19]), such as the following:

• Point-wise nonlinearity (PWN), which transforms signal samples according to the relationship $output(k,l) = F(input(k,l))$, where F(·) is, generally, a nonlinear function that defines the transfer function of the unit and $(k,l)$ are sample indices.

• Linear filters (LF), defined by the equation of weighted summation $output(k,l) = \sum_{(m,n)} h(m,n;k,l)\, input(m,n)$, where $h(m,n;k,l)$ is the filter impulse response.

• Rank filters (RF) [19]. Rank filters operate on signal order statistics computed over a certain neighborhood of each sample of the array and are defined by the equation $output(k,l) = F_{los}(input(k,l))$, (10), where $F_{los}(\cdot)$ is a function defined by the local order statistics computed, for every $(k,l)$-th sample of the array, over a certain neighborhood (nbh) of the sample.

• Logical filters. Logical filters assume work with binary arrays and are defined by a certain Boolean function of the input pixels. For binary images, logical filters can implement both linear and rank filters.

This list of elementary signal processing units does not pretend to be complete, and one is free to extend or modify it to include other processing units that have proved to be useful signal processing components.
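As an illustration of the building-block idea, here is a minimal Python toolkit (a sketch under the simplifying assumption of shift-invariant filters with wrapped borders, not the authors' library) implementing the numeric units so that they can be chained into transformation systems:

```python
import numpy as np

def pwn(x, f):
    """Point-wise nonlinearity: output(k,l) = F(input(k,l))."""
    return f(x)

def linear_filter(x, h):
    """Shift-invariant linear filter: weighted summation with impulse
    response h, realized as a direct 2-D convolution (wrapped borders)."""
    kh, kw = h.shape
    padded = np.pad(x, ((kh // 2,) * 2, (kw // 2,) * 2), mode='wrap')
    out = np.zeros_like(x, dtype=float)
    for m in range(kh):
        for n in range(kw):
            out += h[m, n] * padded[m:m + x.shape[0], n:n + x.shape[1]]
    return out

def rank_filter(x, stat, size=3):
    """Rank filter: replace each sample by a local-order-statistics value
    (e.g. np.median) computed over a size x size neighborhood."""
    padded = np.pad(x, size // 2, mode='wrap')
    stack = np.stack([padded[i:i + x.shape[0], j:j + x.shape[1]]
                      for i in range(size) for j in range(size)])
    return stat(stack, axis=0)

# Example: an LF-PWN chain applied to a primary pseudo-random array.
rng = np.random.default_rng(3)
primary = rng.random((64, 64))
box = np.ones((3, 3)) / 9.0
texture = pwn(linear_filter(primary, box), lambda v: (v > 0.5).astype(float))
print(texture.mean())
```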
As for the connection of units into a system, the following types of interconnection can be assumed: serial connection and parallel connection. In what follows in this section, we show how this approach allows one to readily build models that are capable of generating a very wide variety of stochastic 2-D patterns, including those that imitate natural textures. We call these patterns texture images.

Figure 14 represents the simplest PWN-model, in which the transformation system consists of the primary pseudo-random number generator and a single point-wise nonlinear transformation (PWN) unit in cascade. Note that the unit 2Drandb(P) used in the growth models described above is a particular version of this model, in which the point-wise nonlinearity is a threshold nonlinearity. In the PWN-model one can easily control the probability distribution density of the sample values of the generated patterns by an appropriate selection of the nonlinearity.

The next step in the hierarchy of models is the LF-model, in which the transformation system consists of a primary pseudo-random number generator and a linear filter in cascade (Figure 15). One can show [11,19] that LF-models generate patterns with a probability distribution of sample values close to the Gaussian distribution. Selection of the linear filter frequency response (the Fourier transform of its impulse response) controls the Fourier power spectrum (spectral density) of the pattern and, correspondingly, its correlation function. Figure 16 shows four examples of patterns obtained from an initial pattern of uniformly distributed uncorrelated pseudo-random numbers using linear filters with different frequency responses (shown in the left column of the figure, in the form of images and as plots in 2-D coordinates of spatial frequencies). Of special interest is the texture shown at the bottom of the figure. It was generated using a linear filter with an isotropic frequency response inversely proportional to the absolute value of the spatial frequency. This texture image illustrates what is conventionally known as (1/f)-fractals [16]. The LF-model, simple as it is, allows one to imitate quite a number of natural texture images [11,20]. Some illustrative examples of such images are shown in Figure 17 (natural texture images from Brodatz' album of natural textures [24] (left column, from top to bottom: textile, mohair, wood) and their synthetic copies (right column) generated by the LF-model; the image at the bottom is yet another example of a synthetic wood texture artificially colored in brown).

The combination of a threshold-type point-wise nonlinearity and a linear filter in cascade with the primary pseudo-random number generator (Figure 18a) forms PWN-LF models. They generate patterns of randomly distributed filter impulse responses. An example is shown in Figure 18b. Inverting the order of the point-wise nonlinearity and the linear filter in PWN-LF models results in LF-PWN models (Figure 19a). LF-PWN models allow one to generate textures with a correlation function controlled by the linear filter impulse response and with a given distribution density controlled by the nonlinear unit. An example of such a texture image is shown in Figure 19b.

All the models described above contain only one branch (several units in cascade). Obviously, texture models can have several branches whose outputs are combined in different ways. For instance, outputs of branches can be multiplied, or the output of one branch can be used to switch between outputs of other branches, or the output of one branch can control parameters of the transformation units in another branch, etc. The growth models of Figures 3 and 4 described above exemplify such multiple-branch models.
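A minimal sketch of the LF-model with an isotropic 1/|f| frequency response, implemented here in the Fourier domain (an implementation choice of this edit; the paper specifies only the frequency response, not the realization):

```python
import numpy as np

def lf_model_texture(size=256, rng=np.random.default_rng(4)):
    """LF-model: linear filtering of uncorrelated pseudo-random numbers.
    The filter frequency response is isotropic and ~ 1/|f|, which yields
    a (1/f)-fractal-like texture."""
    primary = rng.random((size, size)) - 0.5        # zero-mean primary array
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    freq = np.hypot(fy, fx)
    freq[0, 0] = np.inf                             # suppress the DC term
    response = 1.0 / freq                           # isotropic 1/|f| response
    spectrum = np.fft.fft2(primary) * response
    return np.real(np.fft.ifft2(spectrum))

tex = lf_model_texture()
print(tex.shape, round(tex.std(), 3))
```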
Figure 20 shows four examples of texture images generated by more sophisticated models with multiple branches. Up to now, no feedback connection was assumed in the texture models except the one inside the primary pseudo-random number generator. Clearly, feedback gives the models evolutionary features. In order to exhibit nontrivial evolutionary behavior, a system should contain, in a loop, both a linear filter and a non-monotonic nonlinearity, or a nonlinear filter with spatial interaction, such as a rank filter. Inserting into the loop rank filters, which combine spatial interaction and substantial nonlinearity in a more sophisticated way than just cascading linear filters and point-wise nonlinearities, gives rise to a new family of evolutionary models. Note that the primary pseudo-random number generator in such a system serves only to introduce an initial "seed" pattern. The random number generator of Figure 2 and the growth models above exemplify the simplest of such systems. An example of such an evolutionary model with a rank filter is illustrated in Figure 21. The rank filter used in this model replaces, in each iteration, the gray level of every pixel by the most frequent value within a certain spatial window S centered at the pixel (this operation is called MODE_S(input(k,l)), with k, l the 2-D pixel indices [19]). Figure 22 shows examples of textures generated by the model. As one can see, the patterns generated by this model closely resemble the natural patterns of crystals illustrated in Figure 23, as well as cells and cell-wall patterns.
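A minimal sketch of such an evolutionary model with a MODE rank filter in the loop (gray levels are quantized to a few values so that the local mode is well defined; window size and level count are illustrative choices of this edit):

```python
import numpy as np

def mode_filter(x, size=3, levels=8):
    """Replace every pixel by the most frequent value (the mode) within
    a size x size window; x is assumed quantized to `levels` gray levels."""
    padded = np.pad(x, size // 2, mode='wrap')
    stack = np.stack([padded[i:i + x.shape[0], j:j + x.shape[1]]
                      for i in range(size) for j in range(size)])
    # count occurrences of each gray level per pixel, pick the largest
    counts = np.stack([(stack == v).sum(axis=0) for v in range(levels)])
    return counts.argmax(axis=0).astype(x.dtype)

rng = np.random.default_rng(5)
levels = 8
img = rng.integers(0, levels, (128, 128))      # primary "seed" pattern
for _ in range(20):                            # evolution loop
    img = mode_filter(img, size=3, levels=levels)
print("distinct levels after evolution:", len(np.unique(img)))
```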
Conclusion We outlined an approach to the analysis and design of stochastic growth and pattern formation models that treats the models in terms of nonlinear signal processing systems with feedback, composed of a set of standard and algorithmically simple processing units. We have described a variety of concrete growth and pattern formation models built on the basis of this approach and have shown, by examples, that they are capable of imitating natural growth and patterns such as dendrite, sea-shell, labyrinth, zebra-skin, papillary and fingerprint patterns, as well as fur, wood, textile, cloud and similar textures. We believe that such a unified approach facilitates the design, comparison, quantification and unification of growth and pattern formation models and secures their efficient computational implementation.

Figure 20 Naturally looking texture images generated by models with multiple branches
Figure 21 Schematic diagram of an evolutionary texture model with a rank filter MODE_S
Figure 22 Examples of images generated by the model of Figure 21: primary pseudo-random pattern (a); two texture images generated by the homogeneous (b) and inhomogeneous (c) versions of the model with different sizes of the spatial neighborhood; edge pattern of figure (d)
Figure 23 Natural texture image of crystals
6,986
2007-07-05T00:00:00.000
[ "Computer Science" ]
Extended expansion linkage engine: a concept to increase the efficiency In a broad-based survey, the reciprocating piston engine with an extended expansion stroke was found to have the highest efficiency potential for passenger car propulsion. To confirm the predicted efficiency, three different prototype engines were built and measured on a test bed. The measurements were complemented with thermodynamic simulations. The investigations focused on naturally aspirated and supercharged SI engines. It could be shown that a naturally aspirated SI engine with an expansion ratio $\gamma$ of 2 gains an efficiency improvement of 7 percentage points compared to a conventional crank train engine. It was also found that the extended expansion has no inherent effect on combustion, emission formation and wall heat transfer. A major effort was made to assess the Miller cycle as a thermodynamic alternative to the crank train with extended expansion. Measurements and simulations revealed that Miller suffers from higher wall heat and gas exchange losses, cutting a substantial share of the efficiency potential of an equivalent crank train solution.

Introduction and concept decision Conventional combustion engines for passenger car propulsion are already highly developed; hence it is no longer possible to reach significant efficiency improvements with evolutionary development. The initial objective of this thesis is to find a revolutionary new concept for a propulsion engine. A literature review and a first assessment of concepts were undertaken to identify possible solutions, initially without any restrictions. Thermal engines can be divided into machines with a continuous process (e.g. the gas turbine) and a discontinuous process (e.g. the Otto process). Thermal engines can further be divided into machines with external combustion (e.g. the Stirling engine) and with internal combustion (the Otto process). Some simple considerations allow a first elimination of engine concepts. With external combustion processes, all of the thermal energy would have to be conducted through the wall of some heat exchanger. Consequently, the peak process temperature, and thus the thermal efficiency of the machine, is limited by the temperature resistance of the material used. At present there exists no material that can withstand the temperatures that would be necessary for sufficient process efficiency, so processes with external combustion were dropped. In the case of continuous combustion, the combustion temperature would likewise be limited by the thermal stability of the combustion chamber gas outlet device, so processes with continuous combustion were also dropped. Thus, it is clear that a high-efficiency thermal process for vehicle propulsion is discontinuous with internal combustion (the Otto process): the discontinuous operation enables high process temperatures without exaggerated temperature stress for the engine parts. Admittedly, it was somewhat unexpected to end up with a rather conventional process. The first step in refining this initial selection is to identify the process changes which increase the efficiency compared to a conventional process. Extended expansion, waste heat recirculation and the advantages of cooled compression (a higher compression ratio or a higher potential for heat recirculation) are the fundamental concepts for reducing the fuel consumption of the process. Figure 1 shows the most relevant process adaptations in the pV diagram.
Starting from the process of the conventional internal combustion engine, shown as the broken line 1-2-3-4-5, an extended expansion is achieved if the volume at the end of expansion exceeds the intake volume. It is also possible to increase the efficiency by using recirculated waste heat $Q_{HE}$ to substitute fuel energy. The third measure shown is cooled compression in combination with waste heat recirculation. Cooled compression 1-2$_{cool}$ decreases the work needed for compression. Without heat recirculation, however, more fuel energy would be needed to reach the same point 4 at the end of combustion, and the efficiency would not increase. With waste heat recirculation the combustion starts at 3 instead of 2; thus, more work can be gained with less expended fuel energy. If the compression ratio is limited by knocking, it is possible to gain additional efficiency by reducing the probability of knocking through cooled compression in combination with a high compression ratio.

New engine concepts are required to realise the processes described above. At passenger car scale, reciprocating piston engines offer the best way to reach high efficiency. Other concepts [1][2][3][4] drop out due to various disadvantages: friction, combustion or wall heat losses, or others, or all of them. Some of the concepts were excluded based on simple considerations, others on the basis of thermodynamic simulations. After the exclusion of obviously inappropriate machines, two concepts remained for a detailed simulation analysis: the Atkinson crank train [5] and the split cycle engine. The Atkinson crank train provides an extended expansion in one working chamber, while the split cycle engine comprises two connected working chambers. The split cycle offers, in addition to extended expansion, the possibility of cooled compression and waste heat recirculation. Figure 2 shows an overview of the simulation results. In interpreting the results, one has to discriminate between knock-limited processes (e.g. the gasoline Otto process) and processes that are not knock-limited (e.g. the Diesel process). From the 1D thermodynamic simulations, the following findings contribute to the ongoing selection process:

• Waste heat recirculation is not reasonable for processes limited by knocking (e.g. gasoline).
• Waste heat recirculation does not provide any advantage when all losses of the real process are considered (heat loss, flow resistance, valve opening).
• Cooled compression gives some advantage when combined with a higher compression ratio in knock-limited processes.
• Cooled compression gives only a minor advantage in processes not limited by knocking.

After this short overview it is clear that the concept with extended expansion gains the highest efficiency under real conditions. In this phase of the project, the initially unrestricted investigations were focused on stoichiometric gasoline combustion. Thus, all discussions and results henceforth relate to an Atkinson cycle engine with stoichiometric gasoline combustion. This process is discussed in more detail in the following chapters. After some theory in Sect. 2, the mechanical realisation and the test carriers used are described in Sect. 3. Section 4 summarizes the measurement results. 1D-CFD post-processing of the measurements revealed additional insight; see Sect. 5.

Extended expansion theory The theoretical considerations are based upon the idealized standard air fuel cycle. Figure 3 shows this idealized process with extended expansion.
After the compression 1-2 and the combustion 2-3, an isentropic expansion 3-4 follows. The expansion does not end, unlike in the conventional Otto process, at the intake volume $V_{inlet}$; it continues until the volume $V_{Ex}$ is reached. Because of this, most of the combustion pressure can be used, and the shaded area of extra work is gained. In the case of the conventional engine, this useful energy escapes through the exhaust system. The process with extended expansion has a short intake stroke and a long expansion stroke. Thus, an extra parameter, in addition to the parameters of the conventional process, is necessary to characterise the process with extended expansion. Here the volume ratio $\gamma = V_{Ex}/V_{inlet}$, the ratio of the volume after expansion $V_{Ex}$ to the volume before compression $V_{inlet}$, is used. It is also equal to the ratio of the expansion ratio to the compression ratio, $\gamma = \epsilon_{Ex}/\epsilon_c$. Table 1 shows the efficiency of the stoichiometric standard air fuel cycle. It depends on the compression ratio $\epsilon_c$ and on the volume ratio $\gamma$. The volume ratios shown, 1, 1.56 and 2, were also realised at the test bench. Even higher volume ratios are not reasonable with mechanical crank trains: to expand all the way down to atmospheric pressure $p_{atm}$, the necessary volume ratio would have to be higher than four.

The process with extended expansion is similar to the conventional process, but the influence of some parameters, such as the air-fuel ratio or the boost pressure, is not the same. An important point is the effect of different supercharging systems on the process with extended expansion. A mechanically driven supercharger can be taken as part of the overall compression. The overall compression ratio $\epsilon_{overall}$ is the product of the supercharger compression and the cylinder compression. The overall volume ratio $\gamma_{overall}$ is the ratio of the expansion ratio $\epsilon_{Ex}$ of the crank train to the overall compression ratio $\epsilon_{overall}$. The overall compression ratio $\epsilon_{overall}$ increases with supercharging; hence the overall volume ratio $\gamma_{overall}$ decreases, because the engine expansion ratio remains unaffected by the supercharging. This correlation is depicted in Fig. 4. The plot shows the theoretical efficiency $\eta_v$ versus $\gamma_{overall}$. An array of thin black curves represents the efficiency as a function of the volume ratio for fixed boost pressures (BP). The red curve shows the efficiency of a conventional engine. Starting from naturally aspirated operation ➀, the efficiency drops with increasing supercharging pressure ➁: the influence of the overall volume ratio $\gamma_{overall}$ decreasing towards 1 is stronger than the influence of the increasing compression ratio $\epsilon_{overall}$. This meets the expectation for mechanical supercharging. In the case of a crank train with a volume ratio of two (blue line), the efficiency increases slightly ➂ → ➃, so it is possible to increase both the power density and the efficiency by supercharging. A crank train with a volume ratio of four (black line) shows a strong increase of the efficiency by supercharging ➄ → ➅.
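To make the trends of Table 1 and Figure 4 reproducible in spirit, here is a small numeric sketch of an idealized air-standard cycle with extended expansion: isentropic compression, isochoric heat addition, isentropic over-expansion, isochoric heat rejection at $V_{Ex}$ down to the intake pressure, then isobaric heat rejection back to $V_{inlet}$. The gas properties and heat input are illustrative assumptions of this edit, not the paper's exact boundary conditions, so the absolute numbers will differ from Table 1.

```python
# Air-standard cycle with extended expansion (a sketch; KAPPA, T1 and
# Q_IN are assumed values, so absolute efficiencies differ from Table 1).
KAPPA = 1.35          # isentropic exponent of the working gas (assumed)
CV = 718.0            # J/(kg K), specific heat at constant volume (assumed)
T1 = 300.0            # K, temperature at start of compression (assumed)
Q_IN = 2.75e6         # J/kg, roughly stoichiometric gasoline heat input

def eta_extended_expansion(eps, gamma):
    """Thermal efficiency for compression ratio eps and volume ratio
    gamma = expansion ratio / compression ratio (gamma = 1 -> Otto)."""
    t2 = T1 * eps ** (KAPPA - 1.0)             # isentropic compression
    t3 = t2 + Q_IN / CV                        # isochoric heat addition
    t4 = t3 * (gamma * eps) ** (1.0 - KAPPA)   # isentropic over-expansion
    t5 = gamma * T1                            # isochoric blow-down at V_Ex
    q_out = CV * (t4 - t5) + KAPPA * CV * (t5 - T1)  # isochoric + isobaric
    return 1.0 - q_out / Q_IN

for gamma in (1.0, 1.56, 2.0):                 # the ratios built as hardware
    print(f"gamma={gamma:4.2f}: eta={eta_extended_expansion(12, gamma):.3f}")

# Supercharging note: a mechanical supercharger raises eps_overall while
# the crank-train expansion ratio stays fixed, so gamma_overall shrinks:
eps_cyl, gamma_crank, eps_boost = 12, 2.0, 1.3
eps_overall = eps_cyl * eps_boost
gamma_overall = gamma_crank * eps_cyl / eps_overall
print(f"gamma_overall with boost: {gamma_overall:.2f}")
```

For $\gamma = 1$ the function reduces to the familiar Otto result $\eta = 1 - \epsilon^{1-\kappa}$, which is a quick sanity check on the sketch.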
Realisation of the process with extended expansion The realisation of the extended expansion is possible by a special crank train (Atkinson cycle) or by Miller valve timing on a conventional engine. Other concepts with extra expansion machines exist, but they are not investigated here in detail; their potential is limited [6]. Figure 5 shows the principle of crank trains for extended expansion. The idea is to superimpose the oscillation of the crank train and a second oscillation with half the frequency. The result of this summation is the oscillation shown (red line): every second outward stroke is longer. For the crank train realisation of this principle, several solutions are known. One possibility is the Honda linkage [7], which is shown in Fig. 6. The test carrier here uses the same design principle as Honda.

Miller realisation Another possibility for extended expansion is a conventional crank train with Miller timing [8], shown in Fig. 7. One variant of Miller timing is early inlet valve closing: the inlet valve closes before bottom dead centre. Because of this, the air inside the cylinder expands to low temperature and pressure. After the recompression to point 4, the same condition as at point 2 is reached; the work of the expansion compensates the work used for the compression. The effective compression therefore begins at 4, and the expansion stroke is longer than the effective compression stroke.

Test carrier The test carriers are based on the two-cylinder parallel-twin Rotax 804, which is used in the BMW F 800 motorcycle, but with a prototype crank train (Fig. 8). Table 2 shows the three different crank trains. The base compression ratio of all variants is 12. The camshafts are optimised for 3000 rpm. To compare the extended expansion with a Miller engine, the base engine was adapted with a Miller camshaft and with a higher geometric compression ratio. As mentioned before, Miller timing reduces the effective compression ratio. Because of the rather slow valve closing, among other things, it is not possible to quantify exactly either the effective compression ratio or the effective compression stroke with Miller.

Measurement results Numerous measurements were conducted with the above-mentioned test carriers. Speed, load, compression ratio, AFR and boost pressure were varied to investigate the characteristics of extended expansion. Figure 9 shows the measured indicated thermal efficiency $\eta_i$ versus the mean effective pressure. The mean effective pressure $p_{i,c}$ is related to the (effective) intake volume. The load is varied by an intake throttle; the engine is operated stoichiometrically. The measured efficiency at full load of the engine with extended expansion with a volume ratio of 1.56 is, at 43.1%, significantly higher than that of the conventional engine, which reaches 38.5%. At part load, the efficiency advantage of the extended expansion decreases: the gas exchange cycle becomes negative because of throttling, and the effect of a negative gas exchange cycle increases with higher volume ratio [9]. Additionally, at the lowest loads, the pressure at the end of expansion is lower than the pressure in the exhaust system. The efficiency advantage of the engine with the volume ratio of two is smaller than that of the engine with $\gamma$ = 1.56. This is due to the badly shaped combustion chamber resulting from the short compression stroke, which causes a high wall heat flow, high unburned exhaust energy and slow combustion. The Miller engine has an unexpectedly low efficiency advantage compared to the conventional engine (marked × in Fig. 9, measured indicated efficiency for a throttle variation [9]). That is, again, due to a bad combustion chamber and poor combustion as a result of reduced tumble [10]. Additionally, the inlet valve lift is restricted to about 4 mm. In any case, compared with the engine with extended expansion, Miller timing has negative effects in principle, which will be discussed in detail in Sect. 5.2.
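Returning to the Miller realisation above: although the slow real valve closing makes an exact value impossible, as noted, a rough kinematic estimate of the effective compression ratio can be sketched by assuming an idealized, instantaneous inlet valve closing (IVC) at a given crank angle before bottom dead centre. The bore, stroke and rod length below are placeholders, not the actual test-carrier data.

```python
import math

# Idealized estimate of the Miller effective compression ratio, assuming
# an instantaneous inlet valve closing at a crank angle before BDC.
# Geometry values are placeholders, not the actual test-carrier data.
BORE, STROKE, ROD = 0.098, 0.0718, 0.120     # m (assumed)
EPS_GEOM = 14.0                              # geometric compression ratio

def cylinder_volume(theta_deg):
    """Cylinder volume over crank angle (slider-crank kinematics);
    theta = 0 deg at top dead centre."""
    r = STROKE / 2.0
    th = math.radians(theta_deg)
    # piston distance travelled from the TDC position
    s = r * (1 - math.cos(th)) + ROD - math.sqrt(ROD**2 - (r * math.sin(th))**2)
    v_swept = math.pi / 4.0 * BORE**2 * STROKE
    v_clearance = v_swept / (EPS_GEOM - 1.0)
    return v_clearance + math.pi / 4.0 * BORE**2 * s

v_tdc = cylinder_volume(0.0)
for ivc_before_bdc in (0, 30, 60):           # deg crank angle before BDC
    eps_eff = cylinder_volume(180.0 - ivc_before_bdc) / v_tdc
    print(f"IVC {ivc_before_bdc:2d} deg before BDC -> eps_eff = {eps_eff:.1f}")
```

With IVC at BDC the sketch returns the geometric ratio; earlier closing shrinks the trapped volume and hence the effective compression ratio, which is why the Miller test carrier needed a higher geometric compression ratio for a fair comparison.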
This paper focusses on the thermal efficiency of extended expansion engines. However, other parameters also contribute to the feasibility of new engine concepts; with this concept, friction, mass balancing and exhaust gas temperature may become critical. For further information, please refer to [11] (friction), [15] (mass balancing) and [9] (exhaust gas temperature).

1D-CFD post-processing With the measurements, it is possible to calibrate 1D-CFD simulation models. These calibrated models were used to post-process the measurements, allowing a deeper understanding of the measured effects and the simulation of variants that were not measured.

Wall heat transfer The wall heat transfer plays a crucial role in the assessment of Atkinson-type engines. Prior to the measurements, it was assumed that the unconventional piston movement has an impact on the wall heat transfer, as the piston speed is one of the most important parameters. By simulative recalculation of the measured operating points, it was possible to reveal the effects of the Atkinson piston movement on the heat transfer. This evaluation was supported by direct measurement of the local instantaneous heat transfer using the surface temperature method [12]. It was shown that the concept of extended expansion has no major effect on the wall heat transfer compared to a conventional engine with the same compression stroke. The compression stroke and the piston movement around top dead centre have the strongest influence on the wall heat transfer; as there are no significant differences in piston movement between an Atkinson engine and a conventional one during these phases of the process, there are also no significant differences in heat transfer. However, when using zero-dimensional heat transfer models, the definition of the piston speed has to be considered. With models using the mean piston speed (Woschni, Hohenberg), the best results were achieved when the mean piston speed during the expansion stroke was assumed to rise linearly over the crank angle from top dead centre to bottom dead centre. In [13] it is shown that the Bargende heat transfer equation, which uses the instantaneous piston speed, can cope with the Atkinson cycle piston movement without adjustment.

Miller engine vs. engine with extended expansion As described in Sect. 4, the measured efficiency of the Miller cycle engine was disappointingly low. This was ascribed to slow combustion and restricted valve lift. To identify further negative effects of Miller timing, simulations with the same rate of heat release were done. The following boundary conditions were used for these simulations:

• compression ratio $\epsilon_c$ = 10
• exhaust turbocharger, boost pressure = 1.4 bar
• same rate of heat release for all points

Figure 10 shows the simulated indicated efficiency for the crank train solution and the Miller solution for different volume ratios. The volume ratio is substituted by the volumetric efficiency, because with Miller valve timing it is not possible to define an exact $\gamma$. Starting from the conventional engine (vol. eff. = 0.97 / $\gamma$ = 1), the efficiency increases with increasing volume ratio (decreasing volumetric efficiency). The variation of $\gamma$ is done by increasing the exhaust stroke for both variants, which ensures identical geometric boundary conditions for the standard air fuel cycle. The curves show a significant benefit for the crank train solution, which is additionally highlighted in the lower chart. A loss analysis reveals the cause of this clear difference.
Figure 11 shows the loss analysis of the conventional engine and the engine with extended expansion. Starting with the efficiency of the standard air fuel cycle with real cylinder charge, the following losses are subtracted: the loss from incomplete combustion $\Delta\eta_{IC}$, from real combustion $\Delta\eta_{RC}$, from heat transfer $\Delta\eta_{WH}$, and the gas exchange losses $\Delta\eta_{GE}$. The efficiency difference between the conventional engine and the engine with extended expansion is caused by the higher efficiency of the standard air fuel cycle of the latter. The left two loss analyses in Fig. 12 compare the engine with extended expansion with the Miller engine. The efficiency of the Miller engine is three percentage points lower than that of the engine with extended expansion. This is partly due to the higher gas exchange losses $\Delta\eta_{GE}$. Likewise, the efficiency of the standard air fuel cycle is lower, because of the higher start temperature $T_1$ of the Miller cycle. The higher temperature has two causes. On the one hand, the (input) wall heat of the Miller cycle during the gas exchange is higher than for the conventional engine. On the other hand, the (output) work during aspiration of the Miller engine is lower than that of the engine with extended expansion, which also causes a higher gas temperature [9]. Furthermore, the loss from real combustion $\Delta\eta_{RC}$ is higher, although the rates of heat release over crank angle are equal for both. That is due to the different kinematics of the crank trains: the piston speed of the Miller engine near top dead centre is higher than that of the engine with extended expansion, so the combustion of the engine with extended expansion is closer to isochoric combustion than that of the Miller engine. The rightmost loss analysis in Fig. 12 shows a dethrottled Miller engine, meaning an increase of the intake port area by 65% and an inlet valve that closes fast, within 10° CA. These measures increase the efficiency of the Miller engine by one percentage point. This is caused by the reduced gas exchange losses and the increased efficiency of the standard air fuel cycle; the latter is caused by the decreasing temperature at the start of the effective compression, $T_1$. Thus, dethrottling improves the efficiency in two respects, but it is not easy to realise. These are simulation results that assume identical combustion. A real Miller engine would have an even lower efficiency, because Miller timing reduces the combustion speed due to deteriorated charge motion [10]. Furthermore, in the case of the Miller engine the possible compression ratio may be reduced because of the higher knock probability due to the higher temperature $T_1$ at the start of compression. That results in a lower realizable expansion ratio of the Miller engine compared to the engine with extended expansion.

Supercharger versus exhaust turbocharger With conventional engines, exhaust gas turbocharging is generally better in terms of efficiency than mechanical supercharging. However, the above considerations (see Sect. 2) indicate that on engines with extended expansion supercharging, too, can gain an efficiency benefit. Thus, these two charging concepts have to be compared again. The upper chart of Fig. 13 compares the efficiency of the engine with exhaust turbocharger with that of the engine with supercharger as a function of the volume ratio $\gamma$. With increasing volume ratio, the efficiency advantage of the turbocharged engine over the supercharged engine decreases.
At high volume ratios, the required exhaust back pressure of the turbocharged engine increases because of the decreasing blow-down energy. This causes high gas exchange losses of the turbocharged engine at high volume ratios. A supercharger with a (more) positive gas exchange loop could be the better solution. The lower chart of Fig. 13 shows the influence of the exhaust turbocharger efficiency on the indicated efficiency for three different volume ratios. In the case of the conventional engine (volume ratio = 1), the influence of the turbocharger efficiency is low. The blow-down generates enough energy to drive the compressor. Below a certain turbocharger efficiency, the blow-down does not supply enough energy and the back pressure has to be increased. In the case of extended expansion, this threshold value is found at higher turbocharger efficiencies. Consequently, the characteristics of the curves in the left chart depend on the efficiency of the turbocharger and the supercharger. Summary and outlook A broad-based survey tried to find a machine for passenger car propulsion with superior efficiency compared to conventional engines. Without any restrictions whatsoever, thermal engines of various types were assessed. Unexpectedly, it was shown that a rather conventional process promises the highest efficiency potential, namely the reciprocating piston engine with extended expansion, i.e. a conventional engine with a modified crank train. The so-called Atkinson engine was investigated in detail by means of theoretical thermodynamics, measurements and engine process simulation. The thermodynamic theory revealed some interesting differences from conventional combustion engines, mainly with respect to charging. It was shown that a mechanical supercharger can lead to increasing efficiency on extended expansion engines. Further, the theory identified an efficiency gain of almost 10 percentage points for loss-free processes. Three different test carriers with different expansion ratios were used to confirm the predicted high efficiency potential of extended expansion. The test carriers used a stoichiometric Otto combustion process. It was found that the engine with a volume ratio of 1.56 had an efficiency 4.6 percentage points higher than the conventional engine at NA full load operation. It was also found that the extended expansion has no inherent effect on combustion, emission formation and wall heat transfer. However, due to restrictions of prototype construction, this is most likely not the entire potential of extended expansion. Based on these measurements, a thermodynamic 1D simulation model was built and calibrated. With this model it was possible to overcome the restrictions of the measurements, and it was found that an extended expansion engine with a volume ratio of 2 is most likely to gain an efficiency benefit of 7 percentage points compared to a conventional engine under the above boundary conditions. However, for throttle-controlled stoichiometric Otto engines, a serious deterioration of efficiency occurs at part load, with the extended expansion being very sensitive to pumping work. This favours the use of such engines in stationary or phlegmatized operation. Major effort was made to assess the Miller cycle as a thermodynamic alternative to the crank train solution. Measurements and simulations revealed that Miller suffers from inherent as well as practical losses, cutting a substantial share of the efficiency potential of an equivalent crank train solution.
Considering mechanical issues such as friction [14] and mass balancing [15], it may nevertheless turn out to be more convenient to use the Miller cycle process in a short-term series solution [16], accepting some disadvantages in peak efficiency.
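Referring back to the zero-dimensional wall heat transfer discussion in the 1D-CFD post-processing section, the sketch below evaluates a Woschni-type heat transfer coefficient with the effective mean piston speed assumed to rise linearly with crank angle from TDC to BDC during the expansion stroke. The correlation form and constants follow a common textbook formulation (Heywood) rather than the exact models calibrated in the paper, and the ramp interpretation as well as all engine parameters are illustrative assumptions, not values from the measurements.

```python
import numpy as np

def woschni_htc(bore_m, p_kpa, T_k, w_ms):
    """Woschni-type convective heat transfer coefficient [W/m^2 K].

    Textbook form (Heywood): h = 3.26 * B^-0.2 * p^0.8 * T^-0.55 * w^0.8,
    with bore in m, pressure in kPa, gas temperature in K, gas velocity in m/s.
    """
    return 3.26 * bore_m**-0.2 * p_kpa**0.8 * T_k**-0.55 * w_ms**0.8

def expansion_gas_velocity(theta_deg, s_p_mean, c1=2.28):
    """Characteristic gas velocity during the expansion stroke.

    Instead of a constant mean piston speed, the effective speed is assumed
    to rise linearly with crank angle from TDC (0 deg) to BDC (180 deg),
    one possible reading of the calibration result discussed in the text.
    """
    ramp = np.clip(theta_deg / 180.0, 0.0, 1.0)  # 0 at TDC, 1 at BDC
    return c1 * s_p_mean * ramp

# Placeholder operating point (illustrative only)
theta = np.linspace(0, 180, 7)                       # crank angle after TDC [deg]
w = expansion_gas_velocity(theta, s_p_mean=8.0)
h = woschni_htc(bore_m=0.082, p_kpa=3000.0, T_k=1600.0, w_ms=np.maximum(w, 1e-3))
for angle, hc in zip(theta, h):
    print(f"{angle:6.1f} deg CA: h = {hc:8.1f} W/m^2K")
```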
5,397
2018-05-18T00:00:00.000
[ "Engineering", "Physics" ]
First-Principles Analysis of Vibrational Properties of Type II SiGe Alloy Clathrates We have performed mainly vibrational studies of Type-II silicon-germanium clathrate alloys, namely Si136-xGex (0 < x ≤ 128), using periodic density functional theory (DFT). Our computed lattice constants for various stoichiometric amounts x of Ge agree reasonably well with the observed X-ray diffraction (XRD) data and show a monotonically increasing dependence on x. According to our bandgap energy calculation with the Vienna ab initio simulation package (VASP), Si128Ge8 has a "nearly-direct" bandgap of approximately 1.27 eV, which agrees well with the previously calculated result (~1.23 eV) obtained using the Cambridge sequential simulation total energy package (CASTEP). Most of our first-principles calculations focus on exploring the low-energy transverse acoustic (TA) phonons that contribute dominantly to the induction of negative thermal expansion (NTE) behavior. Moreover, our work predicts that the Si104Ge32 framework exhibits NTE in the temperature range of 3-80 K, compared to the temperature regime (10-140 K) of NTE observed in pure Si136. It is posited that the increased number of Ge-Ge bonds may weaken the NTE effect substantially as the composition x in Si136-xGex is raised from 32 (or 40) to 96 (or 104). Introduction In contrast to the diamond phase of silicon (d-Si), there are two forms of crystalline clathrate: Si46 (Type I) and Si34 (Type II). Each of these pure materials consists of a covalently bonded framework that is composed of polyhedral cage elements. The enlarged unit cell of the Type II clathrate framework contains 136 atoms, exhibits a face-centered cubic (FCC) lattice structure and contains 20- and 28-atom cages that are connected periodically in a 4:2 ratio [1]. Growing interest in this expanded-volume silicon has arisen for two main reasons: the confirmed existence of superconductivity in metal-doped clathrates, namely BaxNaySi46 [2][3][4][5], and the extensive studies that have been conducted on efficient thermoelectric (TE) performance with guest-filled Si clathrates, which display glass-like thermal conductivity while behaving as a crystalline cubic material [6][7][8]. Specifically, the efficiency of a TE device is characterized by the material's figure of merit, ZT ≡ σS²T/κ, where σ denotes the electrical conductivity, S is the Seebeck coefficient, T is the absolute temperature, and κ is the thermal conductivity. An effective way of enhancing ZT is to reduce the phonon thermal conductivity by nanostructuring [9], alloying [10], or introducing cage-like configurations that encapsulate rattling atoms, such as Si- or Ge-based clathrate compounds [11]. At present, many reports have discussed the electronic and thermodynamic properties of Si- and Ge-based Type II clathrate compounds [12][13][14][15][16][17] with the objectives of identifying prominent TE materials and gaining insight into interesting properties such as anomalous thermal expansion. Computational Approach Our first-principles calculations are conducted using the Vienna ab initio simulation package (VASP) [38], which exploits the Ceperley-Alder exchange-correlation potential and pseudopotentials that are obtained via the projector augmented wave (PAW) method.
The energy cutoff parameter that accounts for the plane-wave basis was set to the default value (245.7 eV) when initiating the phonon calculations, which help provide insight into the vibrational frequencies of the Γ-point normal modes. A 4 × 4 × 4 Monkhorst-Pack k-point grid [39] is selected for Brillouin zone integration. The procedure for extracting the electronic, vibrational and thermodynamic properties of the SiGe alloy clathrates from the periodic density functional theory computation is as follows. The first step of geometry optimization is to relax the internal coordinates of the atoms, which are confined in a fixed unit cell of the material. Then, the ground-state structural and electronic properties, such as the cohesive energy, were determined within the local density functional formalism. Next, a limited number of energy-volume (E, V) pairs were fitted to a 3rd-order Birch-Murnaghan equation of state (EOS) [40], thereby enabling the calculation of the global minimum energy and the equilibrium lattice parameter. In addition to optimizing the geometry of each of the studied alloy clathrates, electronic properties, including the Fermi energy level (EF), the electronic band structure (BS) and the electronic density of states (EDOS), are calculated within consistent structural settings. To investigate the lattice dynamics of these Si-based clathrate compounds, a 2 × 2 × 2 Monkhorst-Pack k-point grid was applied to obtain the Γ-point vibration frequencies and dispersion relations, which are derived from the harmonic force constant matrix. In addition, the thermodynamic properties related to phonon anharmonicity were evaluated with the aid of the quasi-harmonic approximation (QHA) method: the fractional change in volume, ΔV/V, which governs structural dilation or contraction, and the fractional change in the mode frequency are inspected to determine the microscopic (mode) Grüneisen parameter γi. For this purpose, phonon calculations are repeated at three corresponding volume points comprising the equilibrium volume and two additional volumes that are slightly larger and smaller. Using the Hellmann-Feynman theorem within a finite difference method (FDM), the mode Grüneisen parameter of each phonon is evaluated by approximating the volume derivatives of the dynamical matrix elements Dij(q) as ΔDij(q)/ΔV. Electronic Properties First, it is necessary to show the crystal structures of Si136-xGex (x = 8, 40) in Figure 1. Here, the specified cubic unit cells are schematically given for the configurations that consist of 256 and 192 silicon atoms out of 272 atoms per cell, respectively. The blue solid balls in the figure denote the Ge atoms that replace the Si counterparts at all 8a Wyckoff sites in Si128Ge8 and at all 8a along with 32e Wyckoff sites in Si96Ge40. These clathrate alloys are expanded-volume phases with sp3 tetrahedrally bonded frameworks. Next, we performed the ab initio computation to determine various electronic properties of Type II SiGe alloy clathrates, which are structurally formulated in covalently bonded, sp3-hybridized configurations. Previously, in synthesis work on Si136-xGex (0 ≤ x ≤ 136) by Baranowski et al., the phases formed were classified into two categories according to the Ge composition x [37]. Their study determined that the stoichiometric amount (x) of Ge for amorphous formation ranges from approximately 20.4 to 68. The amorphous region is likely caused by a miscibility gap.
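Before turning to the electronic-structure results, here is a minimal sketch of the equation-of-state step described in the Computational Approach above: a set of (E, V) pairs is fitted to the 3rd-order Birch-Murnaghan form to extract the equilibrium volume, energy and lattice parameter. The (E, V) data points below are placeholders, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan_3(V, E0, V0, B0, B0p):
    """3rd-order Birch-Murnaghan energy-volume equation of state."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * (
        (eta - 1.0) ** 3 * B0p + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta)
    )

# Placeholder (E, V) pairs, e.g. total energies from a series of fixed-volume relaxations
V = np.array([2950.0, 3000.0, 3050.0, 3100.0, 3150.0])       # cell volume, A^3
E = np.array([-1089.2, -1090.1, -1090.4, -1090.2, -1089.6])  # total energy, eV

p0 = [E.min(), V[np.argmin(E)], 0.5, 4.0]   # initial guess: E0, V0, B0 (eV/A^3), B0'
(E0, V0, B0, B0p), _ = curve_fit(birch_murnaghan_3, V, E, p0=p0)

a0 = V0 ** (1.0 / 3.0)   # cubic cell: equilibrium lattice parameter
print(f"E0 = {E0:.3f} eV, V0 = {V0:.1f} A^3, a0 = {a0:.3f} A, B0 = {B0 * 160.218:.1f} GPa")
```

The fitted V0 and E0 define the equilibrium geometry from which the band structure and phonon calculations then start.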
Analogous to those experimental results of Baranowski et al. [37], the following figures present the results of our first-principles work on the composition dependence of the lattice parameter and the bandgap for semiconducting [Si1-x'Gex']136 (0 < x' < 1). Here, it is noted that x' in the redefined chemical notation [Si1-x'Gex']136 is equivalent to the ratio of the Ge composition (x) to 136. In Figure 2, the lattice parameter increases with the Ge content; a similar trend is observed between the XRD data and our LDA work in the absence of an amorphous region (0.15 ≤ x' ≤ 0.5). At various compositions of added Ge atoms (e.g., x' = 0.15 and x' = 0.5), the SiGe alloy clathrate exhibits a mostly crystalline phase with a small amount of amorphous background [37]. This demonstrates that the alloyed clathrate structures expand because of the substitutional host atoms (Ge), in comparison with the pure Si136 framework. In addition, for x' ≈ 0.77, our equilibrium lattice constant is 15.05 Å, which is approximately 0.3% smaller than the XRD value [37]. In analogy to this, the previously calculated lattice constant of Si136 (14.56 Å) [1] is approximately 0.7% smaller than its experimental counterpart (14.63 Å) [41]. The dashed line drawn for the LDA data in Figure 2 was obtained via a linear fitting procedure and acts as a guide for the eye. A lower DFT-determined bandgap compared to the experimental result [37] in Figure 3 is expected regardless of how x' in [Si1-x'Gex']136 changes, because the LDA formalism always causes the fundamental bandgap energy to be underestimated [42,43]. All optical band gap energies are measured from the top of the valence band at L, the zero of which remains fixed and independent of the Ge concentration. Additionally, we found theoretically that the lowest conduction band edges at the L and Γ points are nearly degenerate in Si128Ge8 (see Figure 4), since the eigenenergy of the conduction band edge at L is only about 30 meV higher than that at the Γ point. Thus, we call this sort of bandgap a "nearly-direct" bandgap.
Furthermore, the band structure depicted in Figure 4 shows that Si136-xGex (x = 8) exhibits this "nearly-direct" band gap behavior. The calculated band gap is approximately 1.27 eV for Si128Ge8, which agrees well with the previous DFT result (~1.23 eV) obtained via the Cambridge sequential simulation total energy package (CASTEP) code [31]. In order to identify the detailed picture of the "nearly-direct" band gap from the viewpoint of the band structure (BS) given in Figure 4, we restrict the vertical energy scale to the range from −1.5 eV to 3.5 eV for the purpose of zooming into the BS. Figure 5 thus shows the apparent "nearly-direct" behavior of the bandgap energy, because the eigenenergy of the conduction band edge at L is only about 30 meV larger than that of the conduction band edge at the Γ point, compared to the considerably larger band gap value (about 1.27 eV).
Vibrational Properties The low-lying acoustic and optic mode regions are of greater importance than other portions of the predicted phonon-dispersion curves in Figure 6. Six phonon branches are primarily discussed here for each studied Si136-xGex material (x = 8, 40, 104): the longitudinal acoustic (LA), the transverse acoustic (TA(1) & TA(2)) with double degeneracy along the specified direction, the longitudinal optical (LO) and the transverse optical (TO(1) & TO(2)) branches, which might coincide at various q-points. To see the difference in the low-frequency portions (0-75 cm−1) of the dispersion relations for Si128Ge8 and Si96Ge40, we list the frequencies at the L, X, W and K high-symmetry points in Table 1. From the above Table, the vibrational frequency at a fixed point decreases with ascending Ge concentration x. Accordingly, the acoustic phonon speeds decrease with increasing x. Furthermore, the dispersion spectrum for Si32Ge104, which is displayed in Figure 6, shows a compressed optical band region (71 cm−1 to 390 cm−1), for which the maximum frequency is reduced by approximately 21% compared to Si128Ge8 and Si96Ge40. Near the top of the optical bands, an extremely flat and dense phonon mode region is observed for the Ge-dominant alloy Si32Ge104. This apparent reduction of the highest optical band in Si32Ge104 might be attributable to the rising number of loose Ge-Ge bonds, whose force constant was previously reported to be around 10 eV/Å2 according to Dong's work [44], compared to the "rigid" Si-Si bond, for which the effective force constant is approximately 24 eV/Å2 in Si136 [45]. Consequently, the existence of comparably weak coupling in the Ge-Ge bond might help suppress the sound speed of lattice phonons in Si136-xGex when x abruptly jumps from 8 to 104.
In addition, a much smaller frequency range is used in Figure 7 to illustrate how the low-lying acoustic phonon branches differ from each other among the alloyed clathrate system Si136-xGex (x = 8, 40, 104). Each vibrational mode at a specified point such as L, X, W and K possesses the frequency value listed in Table 1. Simultaneously, the acoustic phonon speed is reduced accordingly as Si128Ge8 is replaced by Si96Ge40 and then by Si32Ge104. We postulate that the collective motion of the framework atoms at each optimized geometry of Si136-xGex is affected by the number of Ge-Ge bonds, from both vibrational and transport points of view. The models that were considered here for the composition of the Si136-xGex system were suggested by Moriguchi et al., who stated that host atoms reside at three inequivalent sites (8a, 32e, and 96g) [31]. On the basis of this ideal Fd3m symmetry, they noted that the number of Ge-Ge bonds in each framework unit cell ranges from 0 in Si128Ge8 (and Si104Ge32) to 8 in Si96Ge40 and 36 in Si40Ge96 (and Si32Ge104); hence, they follow an ascending order. As more and more Ge-Ge bonds replace Si-Si bonds in the Si136-xGex framework with an abruptly increasing stoichiometric amount of Ge, the relatively weakened bond strength (lowered force constant) of Ge-Ge is anticipated to relate to the lowered absolute value of the negative mode Grüneisen parameter found for transverse acoustic phonons. This leads the weighted average of γi to switch its sign from negative to positive in the low-temperature regime (e.g., 24-100 K), corresponding to the weakened NTE effect, when x is tuned from 8 (or 40) to 104. A detailed discussion of the derived mode Grüneisen parameters along with the macroscopic Grüneisen parameter is given in the following.
Additionally, one can note the abrupt change in the dispersion bands from the Si96Ge40 to the Si32Ge104 clathrate in Figure 6. In order to zoom into the smaller phonon energy band widths and identify the "forbidden gap" as well as the thin band level located around 350 cm−1, a narrower frequency window is used. According to the DFT-determined diagram (Figure 9), we see how the number of Ge-Ge bonds that are formed relates to the Si-fraction-dependent mode Grüneisen parameter of the TA(1) and LA phonons at various high-symmetry points in [Six"Ge1-x"]136 (0 < x" < 1). Here, γi is computed theoretically via γi = (−V/ωi)(Δωi/ΔV) using the finite difference method. Despite the almost constant calculated value of γi for an LA phonon located near the Γ point, the mode Grüneisen parameters of the same phonon confined to the BZ boundary (L and X points) are positive in sign and exhibit approximately decreasing trends as the number of Ge-Ge bonds decreases from 36 to 8. In addition, the negative value of γi for an acoustic phonon at the zone center or boundary also approximately decreases with increasing Si fraction. The determined ratio of γTA(1)(L), representing γi of a TA(1) phonon at the L point, for Si32Ge104 to γTA(1)(L) for Si104Ge32 is approximately 0.72; hence, the lattice framework exhibits a weak vibrational response upon geometry dilation when the Ge fraction dominates. The results of the first-principles calculations in Figure 10 demonstrate the low-energy (0-125 cm−1) band structures of the phonon dispersions along the L-Γ-X line for Si128Ge8 and Si8Ge128, respectively. To illustrate the effect of geometry dilation on the lattice anharmonicity, the plotted phonon spectrum (dashed line) corresponds to an expanded unit cell that is +6% larger than the material's optimized structure (see "opt. system" in Figure 10a) for Si128Ge8; similarly, for Si8Ge128, the expanded unit cell is 6% larger than the material's "opt. system" in Figure 10b. We allow the expanded volume for each material to be +6% larger than the optimized geometry because, for fractional changes in volume of less than +4%, the extremely low resolution of the variation in the wave-vector-dependent phonon modes in the low-frequency ωi(q) regime (such as 0-100 cm−1) makes the changes in the dispersion relations difficult to identify.
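The finite-difference evaluation of γi described above can be summarized in a few lines. The sketch below assumes that phonon frequencies have already been computed at the equilibrium volume and at a slightly compressed and a slightly expanded volume (the ±6% figure is illustrative, echoing the dilation used for Figure 10); the frequency arrays are placeholders rather than values from the paper.

```python
import numpy as np

def mode_grueneisen(omega_minus, omega_0, omega_plus, V_minus, V_0, V_plus):
    """Mode Grüneisen parameters via a central finite difference:
        gamma_i = -(V_0 / omega_i(V_0)) * d(omega_i)/dV
    Frequencies are arrays over phonon modes (same ordering at every volume).
    """
    domega_dV = (omega_plus - omega_minus) / (V_plus - V_minus)
    return -(V_0 / omega_0) * domega_dV

# Placeholder example: three modes at V0 and at volumes compressed/expanded by 6%
V0 = 3000.0                                   # A^3
V_minus, V_plus = 0.94 * V0, 1.06 * V0
omega_0     = np.array([30.0, 45.0, 350.0])   # cm^-1 at V0
omega_minus = np.array([31.5, 44.0, 362.0])   # cm^-1 at the compressed volume
omega_plus  = np.array([28.7, 46.2, 339.0])   # cm^-1 at the expanded volume

gamma = mode_grueneisen(omega_minus, omega_0, omega_plus, V_minus, V0, V_plus)
print(gamma)   # a negative entry signals an NTE-promoting (e.g. TA-like) mode
```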
The red shift of the peak of the vibrational density of states (VDOS) at approximately 68 cm−1 in the "opt. system" of Si8Ge128 is attributable to suppression of its lowest optic phonon modes (TO branches). A similar red shift of the VDOS in optimized Si128Ge8 is observed for optic phonons in the range between 100 cm−1 and 110 cm−1. Figure 10. Low-frequency dispersion relation curves of (a) Si128Ge8 (Ge @ 8a) and (b) Si8Ge128 (Si @ 8a) along the L-Γ-X line, which correspond to the original geometry (solid line) and the dilated configuration (dotted line). LDA-calculated results on the vibrational density of states are also shown. The circled areas correspond to the longitudinal acoustic phonon branch and the transverse acoustic phonon branches, along with transverse optical phonon branches with double degeneracy. Thus, the apparent reduction of the mode frequency values for the degenerate TO band in the "+6% system" (Figure 10b), in which the wave-vector spans the Brillouin zone, results in positive mode Grüneisen parameters.
On the other hand, the phonon frequency of the TA branches is elevated in both materials for the enlarged geometry, relative to its counterpart (the "opt. system" in Si128Ge8 and the "opt. system" in Si8Ge128). Hence, the value of γi(q) is negative, which is anticipated to contribute efficiently and dominantly to inducing the low-temperature negative thermal expansion (NTE) phenomenon. The exact mode Grüneisen parameters of the specified phonons obtained via LDA are listed in Table 2. The measured or theoretically estimated values are obtained at the high-symmetry points Γ and L of the BZ in the [111] direction. It is noted that Wei et al. reported some predictions [46] on γi of d-Si before. All transverse acoustic phonons considered here have γi values below zero. The calculated values of γi at the L point for Si128Ge8 are similar to the experimentally determined values of γi for diamond-phase silicon (see Ref. [35]). The mode Grüneisen parameter of the LA phonon at the Γ point lies between 0.90 and 1.03 for the series of Si136-xGex, which compares fairly with the value of 1.18 determined for Na1Si136 via Raman-scattering experiments. These calculated results also correlate with the γi value of 1.1 obtained experimentally for diamond-phase silicon. In addition to the anharmonicity exploration of the low-lying acoustic phonon modes, our computations demonstrate that the γi values for most of the optical phonon modes are positive. Guided by the quasi-harmonic approximation method, our theoretically derived macroscopic Grüneisen parameter γ(T) is the weighted average of the mode Grüneisen parameters γi, expressed as γ(T) = Σi γi CV,i / Σi CV,i [47,48], where CV,i is the partial vibrational mode contribution to the heat capacity. In other words, γ(T) is related to the anharmonicity of the lattice vibrations and describes, through the γi, how the vibrational frequencies (phonons) change as the volume is varied. In addition, γ(T) also serves as an indirect tool for surveying anomalous thermal expansion because γ(T) = αv(T)KT/(CVρ) [49,50], where αv(T) denotes the volumetric thermal expansion coefficient. The sign of αv(T) depends directly on whether γ(T) is negative or positive, since the bulk modulus at the specified temperature KT and the heat capacity CV, along with the material's density ρ, always remain positive. The results of our first-principles calculation of the overall Grüneisen parameter for Si136-xGex (x = 32, 40, 96, and 104) are shown in Figure 11, where the abscissa gives the temperature, limited to the range from 3 K to 99 K. The values of the Grüneisen parameter γ(T) for Si104Ge32 and Si96Ge40 have similar temperature profiles and are always negative from 3 K to approximately 80 K under the scenario of no Ge-Ge bond formation. These results predicting the NTE effect can be compared to the reported work of Tang et al., who experimentally and theoretically investigated the thermal properties of Si136 and pointed out that an NTE region exists in the 10-140 K temperature range [24]. However, the increased numbers of Ge-Ge bonds in Si40Ge96 and Si32Ge104 may weaken the NTE effect substantially: the predicted Grüneisen parameters for Si40Ge96 and Si32Ge104 remain negative from 3 K only up to a reduced upper temperature limit of approximately 20 K. Further exploration of how the bonding geometry of the Ge-Ge covalent bond (including the bond angle and bond length) impacts the NTE behavior in Si136-xGex is beyond the scope of this study. We decouple the effect of the lowest-lying phonon branches, which contribute to the production of negative mode-dependent Grüneisen parameters, from the contribution of all other phonon modes along all possible high-symmetry directions (see Figure 12a,b). The two lowest phonon bands (the transverse acoustic branches), rather than the remaining 100 branches confined to a unit cell of the clathrate system, are anticipated to play a substantial role in producing the NTE phenomenon. Hence, the macroscopic γ(T) can be attributed primarily to the TA mode contribution via γTA(T) = γ(T) − γ<ω'>(T), where γ<ω'>(T), which is relatively small, describes the weighted average of the Grüneisen parameter over all optical branches plus the LA phonon mode contribution. In Figure 12, γTA(T) dominates γ(T). It is noted that the sign of the difference between γTA(T) and γ(T) indicates in which temperature regime the transverse acoustic phonons play a much greater role in contributing to the induction of negative thermal expansion than the other phonons. As shown in Figure 12, when the temperature increases towards about 80 K in Si104Ge32 (or 20 K in Si32Ge104), the temperature-dependent macroscopic Grüneisen parameter approaches zero, and the NTE behavior vanishes. Thus, the existence of a positive difference (γ(T) − γTA(T) > 0) in Figure 12 indicates that the vibration of TA phonons in the temperature range of 0-30 K in Si104Ge32 (or 0-20 K in Si32Ge104) contributes more effectively to the induction of NTE than the rest of the phonons.
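To make the weighted average γ(T) = Σi γi CV,i / Σi CV,i concrete, the sketch below combines placeholder mode frequencies and mode Grüneisen parameters using the harmonic-oscillator mode heat capacity as the weight. The numbers are illustrative only and are not taken from the paper; they are merely chosen so that negative-γ TA-like modes dominate at low temperature and positive-γ optical modes take over at higher temperature.

```python
import numpy as np

K_B = 8.617333e-5        # Boltzmann constant, eV/K
CM1_TO_EV = 1.239842e-4  # 1 cm^-1 in eV

def mode_heat_capacity(omega_cm1, T):
    """Harmonic-oscillator heat capacity C_V,i (in units of k_B) per mode."""
    x = omega_cm1 * CM1_TO_EV / (K_B * T)
    ex = np.exp(x)
    return x**2 * ex / (ex - 1.0)**2

def macroscopic_grueneisen(omega_cm1, gamma_i, T):
    """gamma(T) = sum_i gamma_i * C_V,i / sum_i C_V,i."""
    cv = mode_heat_capacity(omega_cm1, T)
    return np.sum(gamma_i * cv) / np.sum(cv)

# Placeholder mode set: two low-lying TA-like modes with negative gamma_i,
# plus a few higher optical modes with positive gamma_i.
omega = np.array([25.0, 40.0, 150.0, 300.0, 380.0])   # cm^-1
gamma = np.array([-1.4, -0.9,  0.6,   0.9,   1.1])

for T in (5.0, 20.0, 80.0, 300.0):
    print(f"T = {T:5.1f} K  gamma(T) = {macroscopic_grueneisen(omega, gamma, T):+.3f}")
```

Because only the lowest modes are thermally populated at a few kelvin, γ(T) starts out negative and crosses zero as the positive-γ modes begin to contribute, which is the qualitative behaviour discussed for Figures 11 and 12.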
Conclusions We have employed the ab initio DFT method to conduct systematic investigations of the electronic, vibrational and thermodynamic properties of the Si136-xGex clathrates. Most of the DFT results relate to vibrational features. We found that low-frequency transverse acoustic phonons, which have an unusual anharmonic vibrational response (negative γi) to slight structural expansion, are primarily responsible for the occurrence of the NTE phenomenon. In addition, the reduction of the maximum of the optic band spectrum and the suppression of the acoustic phonon band width accompany an increase in the number of Ge-Ge bonds that are formed, from 0 (or 8) to 36. Moreover, the number of Ge-Ge bonds is expected to affect the upper limit of the temperature range beyond which NTE vanishes, thereby making a strongly weakened NTE effect possible when x changes from 32 (or 40) to 96 (or 104) in Si136-xGex. Our structural investigation of Si136-xGex (0 ≤ x ≤ 128) serves as the fundamental step for initiating our entire first-principles work, since all vibrational and thermodynamic properties are extracted in addition to the optimized geometry of each alloy. Our LDA-determined lattice parameter agrees well with the XRD data: both show almost monotonically increasing behavior as the Ge composition x increases. Regarding the electronic properties, the previous DFT results obtained using the CASTEP code reveal an optical band gap of Si128Ge8 of 1.23 eV, which agrees very well with the result of our calculation via VASP (~1.27 eV). The tunable band gap modulated by the Ge content in Si136-xGex has attracted attention for photovoltaic (PV) applications, because alloyed SiGe semiconductors demonstrating a "nearly-direct" or direct wide band gap may be a very suitable and practical choice for optoelectronic applications [27,37] due to their reduced weight and cost.
8,589
2019-05-01T00:00:00.000
[ "Materials Science" ]
Low bacterial community diversity in two introduced aphid pests revealed with 16S rRNA amplicon sequencing Bacterial endosymbionts that produce important phenotypic effects on their hosts are common among plant sap-sucking insects. Aphids have become a model system of insect-symbiont interactions. However, endosymbiont research has focused on a few aphid species, making it necessary to extend these efforts to other aphid species across different regions in order to better understand the role of endosymbionts in aphids as a group. Aphid endosymbionts have frequently been studied by PCR-based techniques using species-specific primers; however, this approach may omit other non-target bacteria cohabiting a particular host species. Advances in high-throughput sequencing technologies are complementing our knowledge of microbial communities by allowing us to study the whole microbiome of different organisms. We used a 16S rRNA amplicon sequencing approach to study the microbiome of aphids in order to describe the bacterial community diversity in introduced populations of the cereal aphids Sitobion avenae and Rhopalosiphum padi in Chile (South America). No secondary endosymbionts were found in the aphid R. padi, whereas two common aphid secondary endosymbionts were found in S. avenae. Of those endosymbionts, Regiella insecticola was the dominant secondary endosymbiont among the aphid samples. In addition, the presence of a previously unidentified bacterial species closely related to a phytopathogenic Pseudomonad species was detected. We discuss these results in relation to the bacterial endosymbiont diversity found in other regions of the native and introduced ranges of S. avenae and R. padi. A similar endosymbiont diversity has been reported for both aphid species in their native range. However, variation in secondary endosymbiont infection could be observed between the introduced and native populations of the aphid S. avenae, indicating that aphid-endosymbiont associations can vary across the geographic range of an aphid species. In addition, we discuss the potential role of aphids as vectors and/or alternative hosts of phytopathogenic bacteria. INTRODUCTION Associations between bacterial endosymbionts and insects are widespread in nature (Gibson & Hunter, 2010). The microbial community inhabiting insects can be as diverse as the symbiotic associations that they maintain with their host insects. Mutualistic, pathogenic, and commensal relationships can take place concurrently and can significantly influence the ecology of the insect host (Toft & Andersson, 2010). For instance, ancient mutualistic relationships with primary or obligate bacterial endosymbionts that provide missing essential amino acids to phloem-based diets are common among plant sap-sucking insects (e.g., psyllids, whiteflies, mealybugs and aphids) (Baumann, 2005). Primary endosymbionts are usually found among the Betaproteobacteria and Gammaproteobacteria subgroups (Toft & Andersson, 2010). Contrary to primary endosymbionts, secondary or facultative endosymbiotic bacteria are not essential for host survival and reproduction, and they are mainly found among the Alphaproteobacteria, Gammaproteobacteria (especially Enterobacteriaceae) and Bacteroidetes (Baumann, 2005; Moran, McCutcheon & Nakabachi, 2008). However, secondary endosymbionts may produce ecologically important phenotypic effects on their insect hosts.
Specifically, they can establish facultative mutualistic associations with insects, thus conferring beneficial traits such as protection against natural enemies (reviewed by Oliver et al., 2010; Jaenike et al., 2010; Jiggins & Hurst, 2011), or they can establish parasitic associations that have deleterious effects on host fitness (Werren, Baldo & Clark, 2008). Aphids (Hemiptera: Aphididae) are phloem-feeding insects that reproduce by cyclical parthenogenesis (clonal reproduction) (Simon, Rispe & Sunnucks, 2002). They represent serious pests by reducing crop yields and quality, and can act as vectors of phytopathogenic viruses and bacteria (Dedryver, Le Ralec & Fabre, 2010; Ng & Perry, 2004; Nadarasah & Stavrinides, 2011). At least 15 aphid species are considered global crop pests of major agricultural importance (including the grain aphid Sitobion avenae, the bird cherry-oat aphid Rhopalosiphum padi and the pea aphid Acyrthosiphon pisum), of which the majority are of Palaearctic origin (Eurasia) (Van Emden & Harrington, 2017). Symbiotic bacteria have been well studied in this insect group, making it a model system for insect-symbiont interactions (Oliver, Smith & Russell, 2014). Aphids have a well-known obligate nutritional relationship with the primary endosymbiont Buchnera aphidicola, which confers essential nutrients to the aphid host (Douglas, 1998). At least nine common secondary endosymbionts have been reported among aphid species, including six Gammaproteobacteria (Hamiltonella defensa, Serratia symbiotica, Regiella insecticola, PAXS (pea aphid X-type symbiont), Rickettsiella viridis and Arsenophonus sp.), two Alphaproteobacteria of the genera Wolbachia and Rickettsia, as well as Spiroplasma from the Mollicutes (reviewed in Zytynska & Weisser, 2016). These secondary endosymbionts have diverse effects on the aphid phenotype, such as conferring protection against natural enemies (parasitoids and fungal pathogens) (Oliver et al., 2003; Oliver, Moran & Hunter, 2005; Vorburger, Gehrer & Rodriguez, 2009; Scarborough, Ferrari & Godfray, 2005; Parker et al., 2013), providing resistance to heat stress (Montllor, Maxmen & Purcell, 2002), influencing insect-plant interactions (Tsuchida, Koga & Fukatsu, 2004; Tsuchida et al., 2011; Ferrari, Scarborough & Godfray, 2007), as well as manipulating aphid reproduction (Simon et al., 2011). These heritable bacterial endosymbionts are maintained in aphid populations mainly through vertical transmission (i.e., maternal) and to a lesser extent by horizontal transmission (e.g., sexual) (Vorburger, 2014; Peccoud et al., 2014). Although aphid-endosymbiont interactions have received considerable attention, much of this research has focused on the model pea aphid, A. pisum. Accordingly, there is a lack of data for some aphid species across different regions, particularly at the continental scale (e.g., South America) (Zytynska & Weisser, 2016). Therefore, it is necessary to extend these efforts to other aphid species in order to better understand the role of endosymbionts in aphids as a group. In addition, aphid endosymbionts have frequently been studied by PCR-based approaches using species-specific primers. Although this increases the ease of testing for specific symbionts and is useful for detecting target endosymbiont groups, this approach may omit other non-target bacteria cohabiting a particular host species.
Regarding this, advances in high-throughput sequencing technologies are now complementing our previous knowledge of microbial endosymbiont communities (Riesenfeld, Schloss & Handelsman, 2004). A greater understanding of the microbiome of aphid species through next-generation sequencing could allow the identification of novel bacterial associations and their potential effects on the ecology and phenotype of aphid species. Such knowledge could be instrumental for understanding the role of bacterial interactions in the invasive potential of economically important aphid species. We used a 16S rRNA amplicon sequencing approach to study the microbiome of aphids, in order to describe the bacterial community diversity in introduced populations of the cereal aphids Sitobion avenae and Rhopalosiphum padi in Chile (South America). We then discuss whether the bacterial community diversity found in these introduced populations of cereal aphids is similar to that previously estimated in native populations of these aphid species (Europe). Sample collection and DNA extraction A total of 80 individuals of the aphid S. avenae and 52 individuals of the aphid R. padi were collected from oat (Avena sativa) and wheat (Triticum aestivum) crops in two different agroclimatic regions (the Maule and Los Ríos regions) in Chile (Table 1). In addition, the field experiments performed in this study were approved by the ethical scientific committee of the Universidad de Talca in Chile (FONDECYT project 3140299). DNA extraction was performed individually for each aphid specimen using the "salting out" method described by Sunnucks & Hales (1996). The quantification and quality of the extracted DNA were examined by absorbance using an Infinite 200 PRO NanoQuant (TECAN) and by electrophoresis in 0.8% agarose gels. Each individual DNA extraction was normalized to a concentration of 5 ng/µl and kept at −20 °C until subsequent 16S library preparation. 16S rRNA amplicon sequencing library preparation In order to produce DNA pools representing the genetic diversity of aphids from different species, locations and host plants, four DNA pools of 20 S. avenae aphids and six DNA pools of 9 R. padi aphids were used (Table 1). Table 1 summarizes the collection details and 16S rRNA gene sequencing results for the aphid samples: host plant, locality, date, total number of reads, and Shannon diversity index for each sample of S. avenae (SA-1, SA-2, SA-3 and SA-4) and R. padi (RP-1, RP-2, RP-3, RP-4, RP-5 and RP-6). Pools of the genomic DNA were generated in two steps using the Illumina MiSeq protocol for 16S amplicon sequencing (Table 1). The DNA pools were then subjected to a second PCR in which dual indices and Illumina sequencing adapters were attached using a Nextera XT Index Kit (Illumina, San Diego, CA, USA). This second PCR was conducted in a total volume of 50 µl, which contained 5 µl of each pooled DNA, 5 µl of each Nextera XT index primer, 25 µl of 2× KAPA HiFi HotStart ReadyMix, and 10 µl of PCR-grade water. The PCR program consisted of an initial denaturation at 95 °C for 3 min, followed by eight cycles of 95 °C for 30 s, 55 °C for 30 s and 72 °C for 30 s, and a final extension at 72 °C for 5 min. The PCR product was verified using a Fragment Analyzer and the DNF 479 kit. Finally, each DNA pool was normalized to a concentration of 4 nM and then pooled. The mixed DNA pool was prepared for sequencing following the Denature and Dilute Libraries Guide.
Paired-end sequencing was performed using the MiSeq Reagent Kit v3 (2 × 300 cycles) on the Illumina MiSeq sequencing platform at the AUSTRAL-omics Core Facility (Facultad de Ciencias, Universidad Austral de Chile). Data analysis Removal of adapters and quality filtering of the data were conducted using the Trimmomatic and PRINSEQ software (Bolger, Lohse & Usadel, 2014; Schmieder & Edwards, 2011). PANDAseq was used to assemble the overlapping Illumina paired-end reads (Masella et al., 2012). In order to determine operational taxonomic units (OTUs), sequences sharing 97% identity were clustered, as suggested by Kunin et al. (2010); this was done using the software QIIME (Caporaso et al., 2010). The OTUs were aligned against the GreenGenes database (http://greengenes.lbl.gov). Bacterial diversity was studied using the Shannon diversity index calculated for each DNA pool. The relative abundance of each OTU was estimated by examining the number of reads for each sequence and each sample, as recommended by Jousselin et al. (2016). Taking into account that bacterial DNA contaminants can commonly be found in DNA extraction kits and other laboratory reagents or could enter samples during analysis (Salter et al., 2014), reads from taxa accounting for <1% of all the reads of a given sample were excluded from the data analysis ("unrepresented reads"). Regarding this, Jousselin et al. (2016) found that the removal of low-frequency sequences (<1%) excluded most DNA contaminants, allowing for increased repeatability and reliability of results. They showed that, by using this method, DNA contaminants have little impact on the analysis of aphid endosymbionts when using 16S rRNA Illumina sequencing. Reads for which no significant BLAST hit with a known taxon could be found are reported as "unassigned reads". Identifying Pseudomonas species by 16S Sanger sequencing From the 16S rRNA amplicon sequencing, a species of Pseudomonas was encountered (see 'Results'). In order to characterize the Pseudomonas species from the 16S rRNA sequences identified, a portion of the 16S and 23S ribosomal genes (∼1,500 bp) was amplified and sequenced in 20 aphids collected from the field and used to prepare sample SA-1 (Table 1); this was done using the universal bacterial primers 10F and 35R (Sandström et al., 2001; Russell & Moran, 2005). These primers were selected because they target the intergenic spacer between the 16S and 23S genes, which can be used to avoid amplifying the aphid primary endosymbiont, B. aphidicola, as the two genes are not contiguous in this endosymbiont (Russell & Moran, 2005). The PCR reactions were performed in a total volume of 25 µl, including 2.5 µl of 10× buffer, 0.2 mM dNTPs, 2 mM MgCl2, 0.3 µl of Taq (5 U/µl), each primer at 0.5 µM, and 3 µl of DNA (10 ng/µl). The PCR conditions consisted of an initial denaturation at 94 °C for 5 min, followed by 35 cycles of 94 °C for 40 s, 57 °C for 40 s and 72 °C for 45 min, and a final extension at 72 °C for 7 min. The resulting amplicons were sequenced on an ABI PRISM 310 Genetic Analyzer (Applied Biosystems, Foster City, CA, USA). The alignment of the sequences with known 16S rRNA sequences from species of the genus Pseudomonas was conducted in Geneious v.8.1 (Drummond et al., 2011). All sequences of the genus Pseudomonas were obtained from GenBank, including sequences from the seven Pseudomonas clusters reported by Anzai et al.
(2000): the "Pseudomonas syringae group", "Pseudomonas chlororaphis group", "Pseudomonas fluorescens group", "Pseudomonas putida group", "Pseudomonas stutzeri group", "Pseudomonas aeruginosa group", and "Pseudomonas pertucinogena group" (Data S1). A phylogenetic tree of the Pseudomonas sequences was constructed using the HKY genetic distance model and the neighbor-joining method implemented in Geneious v.8.1 (Drummond et al., 2011). Branch support was calculated using bootstrap values of 1,000 replications. 16S rRNA amplicon sequencing A total of 1,327,786 reads were obtained after filtering the four DNA pools of the aphid S. avenae (SA-1, SA-2, SA-3 and SA-4) (Table 1). The mean Shannon diversity index was 1.51 (SD = 0.61) and ranged from 1.13 to 2.39 for the aphid S. avenae (Table 1). Of the total reads for S. avenae, 98% were classified as Gammaproteobacteria and included mostly bacteria from the Enterobacteriaceae (94.7% of the total reads) (Buchnera aphidicola, Regiella insecticola and Hamiltonella defensa) and, to a lesser extent, from the Pseudomonadaceae (Pseudomonas) family (3.3% of the total reads). The Gammaproteobacterium and aphid primary endosymbiont Buchnera aphidicola was the most common endosymbiont in the four samples (84.4% of the total reads). The second most common taxon was the aphid secondary endosymbiont R. insecticola (9.3% of the total reads), which was also found in all studied samples (Fig. 1). Another well-known aphid endosymbiont, H. defensa, represented an average of 0.9% of the total reads in two of the four samples studied (SA-1 and SA-3) (Fig. 1 and Data S2). Pseudomonas sp. sequences were well represented in two aphid samples (SA-1 and SA-4), making up an average of 3.3% of the total reads (Fig. 1 and Data S2). Unrepresented reads (i.e., reads of taxa accounting for <1% of all the reads; see Methods) made up an average of 1.5% of the total reads. Also, the four DNA pools of S. avenae had a low proportion of unassigned reads (i.e., reads for which no significant BLAST hit with a known taxon was found); unassigned reads ranged from 0.5% to 0.6% (average of 0.5% of the total reads). For the aphid R. padi, a total of 2,095,602 reads were obtained from the six DNA pools analyzed (RP-1, RP-2, RP-3, RP-4, RP-5 and RP-6) (Table 1). A lower bacterial diversity than in S. avenae was observed, with an estimated mean Shannon diversity index of 0.07 (SD = 0.04) (Table 1). B. aphidicola was found at a percentage >98.5% in all DNA pools, and no additional bacteria were found in R. padi (Fig. 1). Finally, a low proportion of unassigned reads was detected among the six DNA pools (<0.01%), and the proportion of unrepresented reads averaged 0.24% of the total reads, with the highest proportion of unrepresented reads detected in the DNA pool RP-1 (1.3% of the total reads) (Fig. 1). Sequencing data generated on Illumina were submitted to GenBank. 16S rRNA sequencing and phylogenetic analysis of Pseudomonas species Of the sequences generated for the 20 aphid samples of S. avenae, only one DNA sample corresponded to a Pseudomonas species (GenBank accession number MF536106). Sequences of the other DNA samples belonged to the other aphid secondary endosymbionts (R. insecticola and H. defensa), as identified by the 16S rRNA sequencing. The phylogenetic tree constructed shows the seven clusters previously described for the genus Pseudomonas (Fig. 2). The Pseudomonas sequence generated from aphid DNA fell within the "P. fluorescens group", being closely related to Pseudomonas palleroniana with an identity percentage >95% (Fig. 2). Secondary endosymbionts in the introduced aphid populations A low bacterial diversity in the introduced populations of the cereal aphids S. avenae and R. padi was revealed by 16S rRNA amplicon sequencing in Chile. Gammaproteobacteria was the most common class identified and, as expected, the aphid primary endosymbiont, B. aphidicola, was the most common bacterial species detected in S. avenae and R. padi. In all DNA pools of both aphid species, Buchnera made up a large percentage of all of the reads (ranging between 84.4% and 99%, respectively). In contrast to our systems, a greater diversity of secondary endosymbionts can be found in other aphid species (Zytynska & Weisser, 2016). For instance, the well-studied pea aphid, A. pisum, hosts at least eight secondary endosymbionts (Serratia symbiotica, R. insecticola, H. defensa, Rickettsiella, PAXS, Spiroplasma, Rickettsia and Wolbachia) that are highly abundant according to two 16S rRNA amplicon sequencing studies (Russell et al., 2013; Gauthier et al., 2015). Secondary endosymbionts have also been surveyed in native-range populations of S. avenae (Łukasik et al., 2013; Henry et al., 2015; Alkhedir et al., 2015). In particular, a positive association between H. defensa and S. avenae was found, with H. defensa being the most common endosymbiont, followed by R. insecticola, whilst S. symbiotica was reported at a lower frequency (≤6%) in the aphid populations (Łukasik et al., 2013; Henry et al., 2015). Differently, higher infection rates of R. insecticola and S. symbiotica were found in Chinese populations of S. avenae (Luo et al., 2016), and a high prevalence of R. insecticola (75% of infected aphids) was found in introduced populations of S. avenae in Morocco (Fakhour et al., 2018). In this study, we found that R. insecticola was the dominant secondary endosymbiont in S. avenae, while H. defensa was observed at a lower prevalence among the DNA samples studied. However, read abundance should be interpreted carefully when used as an estimate of the infection frequency of endosymbionts, because PCR amplification bias can be introduced by primer specificity (Klindworth et al., 2012). Despite this, our results from deep sequencing of the 16S rRNA gene are consistent with previous PCR-based studies on Chilean populations of S. avenae, in which ∼50% of the aphids harbored R. insecticola and a lower proportion of aphids harbored H. defensa (between 4% and 15%) (Sepúlveda et al., 2016; Zepeda-Paulo, Villegas & Lavandero, 2017), suggesting that aphid-endosymbiont associations can vary across the geographic range of an aphid species.
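As a concrete illustration of the read-abundance filtering and Shannon diversity calculation described in the Methods, the minimal sketch below removes taxa that account for less than 1% of a pool's reads and computes the Shannon index from the remaining relative abundances. The read counts are invented placeholders, not the study's data.

```python
import math

def filter_low_abundance(read_counts, threshold=0.01):
    """Drop taxa whose reads make up less than `threshold` of the pool total."""
    total = sum(read_counts.values())
    return {taxon: n for taxon, n in read_counts.items() if n / total >= threshold}

def shannon_index(read_counts):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over taxa relative abundances."""
    total = sum(read_counts.values())
    return -sum((n / total) * math.log(n / total) for n in read_counts.values() if n > 0)

# Hypothetical pool: Buchnera-dominated, one secondary symbiont, trace taxa
pool = {"Buchnera": 90_000, "Regiella": 9_000, "Hamiltonella": 700, "other": 300}

kept = filter_low_abundance(pool)        # drops taxa below 1% of the pool's reads
print(kept)                              # {'Buchnera': 90000, 'Regiella': 9000}
print(round(shannon_index(kept), 3))     # low H', as expected for a Buchnera-dominated pool
```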
The Pseudomonas sequence generated from aphid DNA was placed in the "P. fluorescens group", being closely related to Pseudomonas palleroniana with an identity percentage >95% (Fig. 2). Secondary endosymbionts in the introduced aphid populations A low bacterial diversity in the introduced Chilean populations of the cereal aphids S. avenae and R. padi was revealed by 16S rRNA amplicon sequencing. Gammaproteobacteria was the most common class identified and, as expected, the aphid primary endosymbiont, B. aphidicola, was the most common bacterial species detected in S. avenae and R. padi. In all DNA pools of both aphid species, Buchnera made up a large percentage of all of the reads (ranging between 84.4% and 99%, respectively). In contrast to our systems, a greater diversity of secondary endosymbionts can be found in other aphid species (Zytynska & Weisser, 2016). For instance, the well-studied pea aphid, A. pisum, hosts at least eight secondary endosymbionts (Serratia symbiotica, R. insecticola, H. defensa, Rickettsiella, PAXS, Spiroplasma, Rickettsia and Wolbachia) that are highly abundant according to two 16S rRNA amplicon sequencing studies (Russell et al., 2013;Gauthier et al., 2015). The secondary endosymbiont communities of S. avenae have also been surveyed in its native range (Łukasik et al., 2013;Henry et al., 2015;Alkhedir et al., 2015). In particular, a positive association between H. defensa and S. avenae was found, with H. defensa being the most common endosymbiont, followed by R. insecticola, whilst S. symbiotica was reported at a lower frequency (≤6%) in the aphid populations (Łukasik et al., 2013;Henry et al., 2015). In contrast, higher infection rates of R. insecticola and S. symbiotica were found in Chinese populations of S. avenae (Luo et al., 2016), and a high prevalence of R. insecticola (75% of infected aphids) was found in introduced populations of S. avenae in Morocco (Fakhour et al., 2018). In this study, we found that R. insecticola was the dominant secondary endosymbiont in S. avenae, while H. defensa was observed at lower prevalence among the DNA samples studied. However, read abundance should be interpreted carefully when it is used as an estimate of the infection frequency of endosymbionts, because PCR amplification bias can be introduced by primer specificity (Klindworth et al., 2012). Despite this, our results from the deep sequencing of the 16S rRNA gene are consistent with previous PCR-based studies on Chilean populations of S. avenae, in which ∼50% of the aphids harbored R. insecticola and a lower proportion of aphids harbored H. defensa (between 4% and 15%) (Sepúlveda et al., 2016;Zepeda-Paulo, Villegas & Lavandero, 2017), suggesting that aphid-endosymbiont associations can vary across the geographic range of an aphid species. Secondary endosymbionts make up an important component of the bacterial community of aphids, and several studies have indicated that they have important effects on the host phenotype. Specifically, aphid secondary endosymbionts can protect the host from natural enemies, can provide tolerance to heat shock and can facilitate the colonization of new host plants (Oliver et al., 2010). Although recent studies have not found evidence that the endosymbionts R. insecticola or H. defensa can confer defense against parasitoid wasps in S. avenae (Łukasik et al., 2013;Zepeda-Paulo, Villegas & Lavandero, 2017), at least one strain of R. insecticola has been shown to provide protection to S. avenae against the pathogenic fungus Pandora neoaphidis (Łukasik et al., 2015). This symbiont-mediated advantage could explain the higher prevalence of R.
insecticola in the populations of S. avenae studied here; however, this is not consistent with the lower prevalence of this endosymbiont reported in native regions of S. avenae. An explanation for this observation could be the founder effect and drift experienced by aphid populations introduced into a new region (Desneux et al., 2018). During the invasion process, only a subset of symbiont-harboring aphid clones may have been introduced from the native regions, resulting in particular aphid-endosymbiont associations in the newly established populations. Indeed, variation in the associations between aphid clones and endosymbionts can be found in field populations, suggesting that they could be relevant for understanding aphid-symbiont population dynamics (Zepeda-Paulo, Villegas & Lavandero, 2017). In addition, we cannot rule out an effect of the sampling method (e.g., the number and distribution of sampling dates in a season) on the infection rates observed in aphid populations, since the frequency of endosymbionts can increase and/or fluctuate during the course of a season (Henry et al., 2015). In this regard, our aphid sampling can be considered representative of the endosymbiont diversity, as it was performed during the period of highest abundance of both the aphids (Raymond, Ortiz-Martínez & Lavandero, 2015;Ortiz-Martínez & Lavandero, 2018) and the endosymbionts of the populations in the field (F Zepeda-Paulo & B Lavandero, 2018, unpublished data). Unlike for S. avenae, there is little knowledge of the diversity of bacterial endosymbionts in the aphid R. padi. Despite this, the existing data are consistent with our results in showing an absence of secondary endosymbionts in R. padi, both in aphid samples from its native range (Europe) analyzed using species-specific primers developed for three aphid endosymbionts (H. defensa, R. insecticola and S. symbiotica) (Henry et al., 2015;Desneux et al., 2018) and in its introduced range (Morocco) analyzed using 16S rRNA gene sequencing (Fakhour et al., 2018). Bacterial diversity could be non-randomly distributed across host species. In this sense, it has been proposed that the prevalence of secondary endosymbionts in a particular insect host may depend on the balance between the costs and benefits of harboring symbionts (Oliver, Smith & Russell, 2014). Indeed, the lack of an important protective phenotype providing direct benefits, fitness costs on symbiont-harboring hosts, and the transmission rates of endosymbionts are some of the factors that could explain the low occurrence of endosymbionts in a particular host species (Oliver, Smith & Russell, 2014;Dykstra et al., 2014). Another factor that may influence the bacterial diversity of aphids is symbiont-symbiont interactions, such as competition between primary and secondary endosymbionts. Regarding this, several studies have shown that the density of the aphid primary endosymbiont, B. aphidicola, can be affected by coexistence with secondary endosymbionts in the same host (Koga, Tsuchida & Fukatsu, 2003;Sakurai et al., 2005;Leclair et al., 2017). A negative effect on Buchnera abundance may be detrimental to the fitness of aphids and could significantly affect some aphid species. Aphid species vary in their ability to increase the amino acid concentration in the phloem in response to the chlorotic damage they induce (Sandström, Telang & Moran, 2000). This increase may reduce the nutritional dependence of aphids on Buchnera for the synthesis of essential amino acids, which could affect aphid-symbiont associations.
For instance, R. padi could show a high dependence on Buchnera for the synthesis of essential amino acids, since this species does not alter the phloem composition of the host plant, in contrast to the higher amino acid concentrations induced by other aphid species (Sandström & Moran, 1999;Sandström, Telang & Moran, 2000). A greater dependence on Buchnera could limit infection by secondary endosymbionts if they affect the abundance of the host's primary endosymbiont, and could thus explain the absence of secondary endosymbionts in some aphid species. However, the association between Buchnera-dependent aphids and the prevalence of secondary endosymbionts still has to be studied for a better understanding of the role of symbiont-symbiont interactions in the bacterial diversity of aphid species. Presence of Pseudomonas sp. in cereal aphids In addition to the most common aphid endosymbionts, the results from 16S rRNA sequencing showed the occurrence of Pseudomonas sp. in two of the analyzed DNA pools of the aphid S. avenae. However, of the sequences generated for 20 aphid samples of S. avenae, only one DNA sample corresponded to a Pseudomonas species. The phylogenetic analysis incorporating known Pseudomonas sequences showed clustering with the "P. fluorescens group"; the Pseudomonas sp. sequence generated here was closely related to the bacteria P. palleroniana and P. tolaasii. These bacterial species are known phytopathogenic pseudomonads, which have been found in rice (Oryza sativa) and garlic (Allium sativum), respectively (Gardan et al., 2002;Höfte & De Vos, 2007). Other studies based on 16S rRNA amplicon sequencing have identified phytopathogenic Pseudomonas sp. in the pea aphid (Pseudomonas syringae) and in R. padi (P. viridiflava and P. veronii) (Gauthier et al., 2015). Moreover, the pea aphid has previously proven capable of acting as both a vector and a non-plant host for P. syringae (Stavrinides, McCloskey & Ochman, 2009). Some strains of P. syringae can be pathogenic to aphids, causing death by bacterial sepsis (Stavrinides, McCloskey & Ochman, 2009;Hendry, Clark & Baltrus, 2016). The finding of Pseudomonas sp. in different aphid species suggests that these types of phytopathogen-vector associations may be more common than previously thought among aphid species. Secondary endosymbionts can also influence the interactions between phytopathogens and insects. Hendry, Hunter & Baltrus (2014) reported that secondary endosymbionts can influence interactions between whiteflies and the phytopathogen P. syringae; whiteflies harboring Rickettsia showed decreased mortality from P. syringae (Hendry, Hunter & Baltrus, 2014). This latter finding might suggest that similar interactions between endosymbiotic and phytopathogenic bacteria may also occur in other host insects (Gonzalez et al., 2016). However, there are currently no studies on the extent of phytopathogen-vector/host associations or on the effect of secondary endosymbionts on the interactions between aphids and phytopathogenic bacteria. CONCLUSIONS The present study, employing 16S rRNA gene sequencing, indicates that the bacterial diversity of the introduced populations of the aphid pests S. avenae and R. padi is low. A similar endosymbiont diversity has been reported for both aphid species in their native range. However, variation in secondary endosymbiont infection could be observed among the introduced and native populations of the aphid S.
avenae, indicating that aphid-endosymbiont associations can vary across the geographic range of an aphid species. Our results showed that R. insecticola was the dominant secondary endosymbiont of the introduced populations, while this endosymbiont appears to be less important in the native range of S. avenae, where H. defensa is the most commonly reported endosymbiont. Interestingly, the presence of a Pseudomonas sp. closely related to phytopathogenic Pseudomonas species was detected in the aphid samples. As has been observed for other aphids, the detection of Pseudomonas sp. suggests that aphids could act as potential vectors of phytopathogenic bacteria. However, further studies are necessary to determine the role of aphid species as vectors and/or alternative hosts of important phytopathogenic bacteria.
6,018.6
2018-05-07T00:00:00.000
[ "Biology", "Environmental Science" ]
Quantum Beam Science — Applications to Probe or Influence Matter and Materials The concept of quantum beams unifies a multitude of different kinds of radiation that can be considered as both waves and particles, according to the quantum mechanical model. Examples include light, in the form of X-rays and synchrotron radiation, as well as neutrons, electrons, positrons, muons, protons, ions, and photons. While the past century saw the discovery of these types of radiation and particles along with the investigations of their physical properties and their fundamental interaction with matter, the current century focuses extensively on their applications to characterize and understand materials in their broadest context, under all imaginable conditions. X-rays diffract to deliver crystal structures, while muons probe for the local magnetism in such crystals. Similarly, neutrons diffract and probe for magnetism, while both γ-rays and positrons allow the electronic density of states to be measured; again, X-ray, neutron or electron diffraction probes for crystal defects, in addition to ion beam channeling. Because of their penetration, X-rays, neutrons and muons can be used for imaging, such as radiography and tomography. At the same time, the different types of quantum beams differ in the information that can be obtained when investigating a particular material. Take the difference in cross-sections between neutrons and X-rays, respectively emphasizing the light or the heavy elements in a compound or alloy. Neutrons diffract from nuclei and, as elementary magnets via their spins, allow determination of crystal and magnetic structures via crystallographic methods. Muons, on the other hand, can be embedded as interstitials into crystals, locally probing the site and its surrounding electromagnetic potential landscape. There is much interest in the dynamics of matter, such as how electricity and heat are transported through a crystal, which relates to the inelastic scattering of quantum beams. Again, neutrons win overall for the investigation of phonons, while visible light scattering in the form of Raman spectroscopy is much easier to conduct and delivers complementary information.
Facilities and Sources Quantum-beam sources vary by many orders of magnitude in both beam parameters and physical size, ranging from the table-top to multi-billion-dollar large user facilities. The best example is X-rays. Any laboratory can afford an X-ray tube source and, a little more sophisticated, a rotating anode or even a modern liquid-metal-jet microfocus anode. The next step is particle-accelerator-based synchrotron radiation, produced in storage rings ranging from only about 10 m in circumference for very soft X-rays, through roughly 100 m for the very common 3 GeV machines, up to flagship synchrotrons several thousand meters in circumference that produce high-energy X-rays. Such large facilities can host up to 50 beamlines in parallel and generate dedicated radiation from the infrared through the ultraviolet to X-rays and gamma rays [1]. To top the brightness obtainable at synchrotrons, X-ray free-electron lasers recently became operational at the Linac Coherent Light Source (LCLS) in the USA [2] and the SPring-8 Angstrom Compact Free Electron Laser (SACLA) in Japan [3]. A European X-ray Free-Electron Laser (X-FEL) is being commissioned in Germany [4]. In those facilities, synchrotron-like insertion devices of typically a few meters in length are expanded to lengths of several hundred meters and the X-ray light is emitted coherently [5]. All these sources and facilities cover differences in brilliance of over 25 orders of magnitude, one of the largest ranges of magnitude found in modern technology! Similarly large, in terms of technological infrastructure and user base, are the neutron facilities. In the early laboratory-based days, neutron-emitting radioactive sources were used, until nuclear-reactor sources came along in the 1940s. Dedicated neutron facilities such as the Institut Laue Langevin (ILL), hosting over 40 instruments, became the state-of-the-art multi-user facilities, where high-flux, compact cores are used and beams are guided away from the reactor to dedicated instruments [6]. More recently, spallation sources have been developed, of which the flagships are the Spallation Neutron Source (SNS) in the USA [7] and J-PARC in Japan [8], to be soon followed by the European Spallation Source (ESS) project in Sweden [9]. In such sources, highly accelerated proton beams are sent onto a target, evaporating neutrons from its nuclei, a process called spallation. Beams can be pulsed in time to allow energy-dispersive neutron detection by time-of-flight analysis, maximizing use of the created neutrons. Because low-energy muons can be produced by a similar proton beam, muon facilities are often co-located with spallation neutron sources. Furthermore, neutrons and muons serve an overlapping community in solid-state matter research, for whom the logistics of such a symbiosis is highly beneficial. Examples are the ISIS facility in the UK [10], J-PARC in Japan [11] and the Paul Scherrer Institute (PSI) in Switzerland [12], while the muon source at TRIUMF in Canada is based on a proton cyclotron [13].
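Time-of-flight analysis at a pulsed source, as mentioned above, simply converts a neutron's arrival time over a known flight path into its speed, wavelength and energy. The short sketch below illustrates the conversion; the flight path and arrival times are made-up values for illustration only.

```python
import numpy as np

h   = 6.62607015e-34      # Planck constant, J s
m_n = 1.67492749804e-27   # neutron mass, kg
meV = 1.602176634e-22     # 1 meV in joules

L = 30.0                              # hypothetical moderator-to-detector flight path, m
t = np.array([5e-3, 10e-3, 20e-3])    # arrival times after the source pulse, s

v = L / t                             # neutron speed, m/s
wavelength_A = h / (m_n * v) * 1e10   # de Broglie wavelength, angstrom
energy_meV = 0.5 * m_n * v**2 / meV   # kinetic energy, meV

for ti, lam, E in zip(t, wavelength_A, energy_meV):
    print(f"t = {ti*1e3:5.1f} ms -> lambda = {lam:4.2f} A, E = {E:7.1f} meV")
```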
Positrons are either produced by radioactive β+ decay or by pair production from a high-energy γ photon above 1.022 MeV. The latter is achieved at the reactor FRM-II in Munich, Germany, by exploiting the neutron capture reaction 113 Cd(n,γ) 114 Cd, yielding primary γ photons with an energy of 9.041 MeV [14]. Inverse Compton scattering of laser photons at accelerated electrons can reach these energies, meaning future positron sources will be based at neutron and accelerator facilities. The community is relatively small; however, the ambition to erect facilities with stronger beams is high. The use of proton and ion-beam facilities is the most diverse, as ions span in character from a single proton to heavy nuclei. Many ion beams are applied in conventional laboratories, such as focused ion beam milling for making nanoscopic specimens. Of high interest are the interactions of ions with matter, such as radiation damage, channeling and the treatment of tumors, as well as the use of ions as a characterization method (such as mass spectrometry). Stronger and higher-energy ion beams are often used to create exotic particles, as in nuclear physics at the borderline of stability and for particles containing higher quark flavors. One of the largest projects is the international Facility for Antiproton and Ion Research (FAIR) being constructed in Germany [15], aiming at fundamental science studied alongside applied research on bio- and hard materials. Last but not least, lasers play a very important role in daily life, and it would be beyond the scope of this editorial to discuss all their applications here. Of interest in the context of Quantum Beam Science is the interaction of laser photons with other quantum beams, or with materials under extreme and exotic conditions, and in pump-probe arrangements. An example is the above-mentioned inverse Compton scattering, nowadays developed and exploited at the Lawrence Livermore National Laboratory in the USA [16]. The Extreme Light Infrastructure (ELI) is a flagship project in Europe to produce laser-driven quantum beams. Terawatt to petawatt lasers create plasmas and particle acceleration in the range of 10 MeV up to 100 MeV, enough particle energy to create all kinds of quantum beams, including directly produced electrons, protons and ions, and secondary beams of X-rays, γ-rays, neutrons and positrons [17]. Such sources will pioneer not only capabilities in nuclear physics, but also the characterization of solid-state materials and their properties. For example, high-flux, high-energy γ photons not only allow for high-energy X-ray diffraction, but also make it possible to perform nuclear spectroscopy in solid-state materials, related to novel imaging techniques.
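The 1.022 MeV threshold quoted above for positron production by pair creation is simply twice the electron rest energy; a one-line check using CODATA constants makes the arithmetic explicit.

```python
from scipy.constants import physical_constants

# Electron rest energy from CODATA; the pair-production threshold is twice this value
m_e_c2_MeV = physical_constants["electron mass energy equivalent in MeV"][0]
print(f"2 * m_e c^2 = {2 * m_e_c2_MeV:.3f} MeV")   # ~1.022 MeV, as quoted in the text
```

The 9.041 MeV capture γ-rays mentioned above are therefore comfortably above this threshold.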
Applications to Condensed Matter Physics and Materials Much broader than the range of quantum-beam sources is the range of their applications to condensed matter and materials, including functional materials, structural materials, soft matter and medical treatment, i.e., the human body. It is beyond the scope of this editorial to appraise the full capabilities of quantum beam applications. As a single example, I work with penetrating radiation of both neutrons and high-energy synchrotron X-rays [18]. Neutrons typically show attenuation lengths in the many-centimeter range, while high-energy X-rays >100 keV penetrate up to a centimeter, say in steel and medium-heavy metals and compounds. This gives the opportunity to study bulk properties of crystalline and non-crystalline materials. When studying metals [19], interests range from engineering over characterization to fundamental science. Strain scanning for the characterization of residual stress is undertaken by both neutron and X-ray diffraction [20] and allows not only the investigation of stresses but also the determination of load partitioning between phases and crystal orientations [21,22]. Generally, neutron beams are larger and penetrate further, while high-energy synchrotron X-rays are of high brilliance and can be focused. They enable, respectively, fine statistical averaging, as needed for texture analysis and quantitative phase analysis in even coarse-grained material [23], versus single- and multiple-grain studies scanned locally in a poly-crystalline matrix [24]. The complementarity in scattering contrast is exemplified by titanium aluminides, where the Ti scattering length for neutrons is negative while that of Al is positive, emphasizing large structure factors for the superstructure reflections that describe atomic order. This is in contrast to X-rays, for which both scattering lengths are positive, rendering them sensitive to the overall structural packing [25]. Concepts of various quantum beams may be similar, such as the dynamical theory of diffraction [26,27], which can be used to trace crystal defects with neutrons [28], high-energy X-rays [29] and electrons [30], but on very different scales. The advantage of using complementary and different quantum beams is well demonstrated by the study of magnetism. Here, neutron scattering is the conventional probe, as the neutron itself is a spin-1/2 elementary magnet which interacts with the spins of atoms in a crystal structure and thus reveals their arrangements. The interaction of X-rays with magnetism is extremely weak [31,32]; however, tuning the X-ray energy to the absorption edge of the magnetic atom enhances its scattering by orders of magnitude [33], making the method advantageous in certain situations, such as contrast variation attained by tuning through the resonance and working with small specimens. Here a third quantum beam comes into play: muons, elementary charged, spin-1/2 magnetic particles which easily implant into crystal interstitials to probe the local magnetic fields, using techniques known as muon spin rotation and relaxation [34].
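The neutron versus X-ray contrast argument for titanium aluminides can be made concrete with a toy two-sublattice structure-factor estimate. This is a simplified illustration rather than the full γ-TiAl structure factor: the neutron scattering lengths are approximate tabulated values, and the X-ray amplitudes are crudely approximated by the atomic numbers.

```python
# Simplified two-sublattice illustration of diffraction contrast in ordered Ti-Al
# (not the full gamma-TiAl structure factor). Neutron coherent scattering lengths
# are approximate tabulated values; X-ray forward-scattering amplitudes ~ Z.
amplitudes = {
    "neutrons": {"Ti": -3.44, "Al": 3.45},   # fm
    "X-rays":   {"Ti": 22.0,  "Al": 13.0},   # electrons
}

for probe, f in amplitudes.items():
    fundamental    = f["Ti"] + f["Al"]   # sublattices scatter in phase
    superstructure = f["Ti"] - f["Al"]   # sublattices scatter out of phase
    print(f"{probe:8s}: F_fundamental = {fundamental:6.2f}, "
          f"F_superstructure = {superstructure:6.2f}")
```

With neutrons, the opposite signs of the Ti and Al scattering lengths make the superstructure amplitude large while the fundamental one nearly vanishes, which is exactly the contrast exploited for studying atomic order.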
Again, although just two examples have been given, the importance of quantum beam science in thousands of disciplines cannot be emphasized enough. Quantum beams include synchrotron radiation, neutron beams, electrons, lasers, muons, positrons and ions, while materials can be crystalline, amorphous, magnetic, metallic, ceramic, biologic, hard and soft matter, warm dense matter, functional, structural and so on. Quantum Beam Science covers a broad range of disciplines including, but not limited to, solid-state physics, chemistry, crystallography, materials science, biology, geology, earth and planetary materials, and engineering. Examples of investigations are phase transformations in alloy development, modulated structures in spintronic systems, crystalline order and disorder, stresses in engineering specimens, changes in amorphous structure, excitations in functional materials, the interior of stars, electrochemistry in ion battery systems, imaging in life sciences, and the propagation of dislocations in crystals. Welcome to Quantum Beam Science With this editorial I would like to welcome authors, institutions and readers to the new journal Quantum Beam Science. It is envisaged to cover sources, techniques, optics, properties and instrumentation from a scientific point of view and to expose their innumerable applications to an interdisciplinary audience. Quantum Beam Science is supported by a renowned founding Editorial Board, seeking growth, and by Guest Editors for Special Issues. We have started with the call for the special issue Facilities, aiming to lay a basis and awaken interest in the journal. A dedicated special issue Laser-Driven Quantum Beams is now in progress, demonstrating a new generation of powerful sources, while other special issues concentrating on quantum beam applications will follow shortly. The goal of the Editorial Board and the MDPI editorial staff is to make Quantum Beam Science a high-level scientific journal with a short turnaround time, welcoming authors, institutions, and readers to a bright future!
3,050.6
2017-02-28T00:00:00.000
[ "Physics" ]
Hypertrophy of Rat Skeletal Muscle Is Associated with Increased SIRT1/Akt/mTOR/S6 and Suppressed Sestrin2/SIRT3/FOXO1 Levels Despite the intensive investigation of the molecular mechanism of skeletal muscle hypertrophy, the underlying signaling processes are not completely understood. Therefore, we used an overload model, in which the main synergist muscles (gastrocnemius, soleus) of the plantaris muscle were surgically removed, to cause a significant overload in the remaining plantaris muscle of 8-month-old male Wistar rats. SIRT1-associated pro-anabolic and pro-catabolic molecular signaling pathways, as well as NAD and H2S levels, were studied in this overload-induced hypertrophy. Fourteen days of overload resulted in a significant 43% (p < 0.01) increase in the mass of the plantaris muscle compared to sham-operated animals. Cystathionine-β-synthase (CBS) activities and bioavailable H2S levels were not modified by overload. On the other hand, overload-induced hypertrophy of skeletal muscle was associated with increased SIRT1 (p < 0.01), Akt (p < 0.01), mTOR and S6 (p < 0.01) and suppressed sestrin 2 levels (p < 0.01), which are mostly responsible for anabolic signaling. Decreased FOXO1 and SIRT3 signaling (p < 0.01) suggests downregulation of protein breakdown and mitophagy. Decreased levels of NAD+, sestrin2 and OGG1 (p < 0.01) indicate that the redox milieu of skeletal muscle after 14 days of overloading is reduced. The present investigation revealed novel cellular interactions that regulate anabolic and catabolic processes in the hypertrophy of skeletal muscle. Introduction Atrophy of skeletal muscle can be a consequence of exposure to weightlessness, immobilization, cancer therapy, or aging [1][2][3], with serious functional and pathophysiological outcomes [4]. On the other hand, hypertrophy of skeletal muscle has benefits for health and sport performance as well [5]. Hence, as a result of intensive investigation, much is known about the molecular pathways involved in the increased protein synthesis and attenuated catabolic processes that occur during the hypertrophy of skeletal muscle [6,7]. One of the well-accepted models of muscle hypertrophy in rodents is overload-induced hypertrophy, in which the surgical removal of the gastrocnemius and soleus muscles results in a 30-40% increase in the mass of the plantaris muscle [8][9][10]. Recently, we discovered that the NAD-dependent histone deacetylase SIRT1 is upregulated during muscle hypertrophy and is associated with enhanced nicotinamide phosphoribosyltransferase (NAMPT), Akt, endothelial nitric oxide synthase (eNOS), and glucose transporter type 4 (GLUT4) levels, and suppressed forkhead box class O protein 1 (FOXO1) [8]. However, that study examined only SIRT1-associated cellular pathways, and important regulatory proteins, such as Akt and mTOR, were not studied in detail. The Akt-mediated cellular pathways promote cellular survival by supporting proliferation and inhibiting apoptosis [11]. The protein kinase called mechanistic target of rapamycin (mTOR) is a downstream regulator of Akt and stimulates protein synthesis via ribosomal protein S6 kinase (S6 kinase) [12]. The mTOR/Akt pathway is upregulated during muscle hypertrophy and downregulated during atrophy [13]. Adenosine monophosphate-activated protein kinase (AMPK) signaling, which is activated upon energy depletion, such as during exercise or caloric restriction, can curb the activation of mTOR signaling [14].
Sestrins are highly conserved but functionally not well-characterized, p53-modulated proteins with antioxidant activity [15], which can inhibit mTOR via AMPK [16]. Although sestrin and SIRT1 are distinct proteins, the fact that SIRT1 deacetylates p53 and that sestrins are regulated by p53 might link them functionally. However, this possible link needs to be investigated. One of the intrinsic activators of SIRT1 is H2S [17]; this gas has antioxidant effects, suppresses oxidative stress [17][18][19] and, hence, modifies the NAD/NADH ratio, which can lead to increased SIRT1 activity [20,21]. Therefore, based on these characteristics of SIRT1, sestrin and H2S, it cannot be excluded that they are involved in the regulation of muscle hypertrophy. We therefore tested whether the newly discovered role of SIRT1 in muscle hypertrophy involves modulation of mTOR, S6, sestrin, and H2S-producing proteins. Results Fourteen days after surgery, the weight of the plantaris muscle had increased by 43% (Figure 1A). The plantaris/body weight ratio also changed significantly (Figure 1B). Figure 1. The effects of overload on muscle mass. The removal of the gastrocnemius and soleus muscles resulted in a greater weight-carrying load on the plantaris muscle, which significantly increased the muscle mass of the overloaded plantaris (O) compared to the control (C) muscle (Panel (A)). Panel (B) shows muscle mass expressed relative to body mass. n = 9, ** p < 0.01. Results are expressed as mean ± SE. The levels of the anabolic factors increased in the overload group: overload significantly increased the levels of the Akt, mTOR, pmTOR, S6 and pS6 proteins (Figure 2). On the other hand, the level of the catabolic protein FOXO1 decreased in the overload group (Figure 3). The Sestrin 2 protein (Figure 3), which negatively regulates the TORC1 signaling pathway, showed a significant reduction in the operated group. Moreover, AMPK, which is a marker of the cell's energetic state, showed a significant decrease in the operated group (Figure 3), whereas the pAMPK level did not change. The levels of SIRT1 (Figure 4) and NAMPT were elevated significantly in the operated group. The level of NAD (Figure 4), as well as the content of the DNA repair enzyme OGG1, was significantly lower in the operated group than in the control. It seems that the increased muscle size was not associated with a similar increase in mitochondrial content: the levels of cytochrome c, COX4, SOD2 and SIRT3 decreased significantly in the overload group compared to the control, while the decrease in Nrf2 protein concentration did not reach significance (Figure 5). Finally, the mitophagy marker PINK1 did not change after the operation (Figure 6), and the level of monobromobimane-measured H2S and the activity of the cystathionine-β-synthase (CBS) enzyme (Figure 6), which is one of the enzymes responsible for hydrogen sulfide formation, did not differ between the groups. Discussion In addition to confirming the involvement of SIRT1 in overload-induced hypertrophy, the novel observations of this study revealed that the SIRT1-mediated pathways include the activation of the mTOR and S6 proteins. Moreover, we have discovered novel mosaics of the complex cellular regulation of muscle hypertrophy. One of the major roles of p53, which is a powerful tumor suppressor, is to inhibit cell proliferation, while cell growth is positively regulated by mTOR [16].
It has been shown that Sestrin2, which is a highly conserved protein and a target of p53, activates AMPK, which can lead to inhibition of mTOR [16]. Independently of its inhibitory role on mTOR, this protein is an antioxidant, since it acts as a cysteine reductase and modulates peroxide signaling [15]. In the present overload-induced hypertrophy model, we found increased SIRT1, Akt, mTOR, and S6 levels, which were associated with decreased protein levels of Sestrin2 and AMPK. The phosphorylation ratio of AMPK did not change significantly, but the decreased protein levels suggest that cellular adaptation to overload-induced hypertrophy either decreased the synthesis or increased the degradation of AMPK. Prolonged activation of mTOR can generate ROS and activate sestrins [14]. However, this may not be the case in the present study, since Sestrin2 and OGG1 levels were decreased in the overload-induced hypertrophy compared to control muscle. This suggestion is further supported by the fact that Sestrin2 is a positive regulator of Nrf2 pathways, most likely due to the antioxidant capacity of Sestrin2 [22]. In the present hypertrophy model, the Sestrin2 level decreased in parallel with Nrf2, although in the case of Nrf2 the decrease was just a tendency. Moreover, it has also been reported that, in cell culture, knockdown of sestrin2 reduced AMPK and SOD2 levels [23], and we observed a corresponding simultaneous effect during overload-induced hypertrophy. In addition, we measured a decreased level of SIRT3, the enzyme that deacetylates two critical lysine residues on SOD2, promotes its antioxidant activity, and decreases the level of ROS in the mitochondria [24]. Because NAD+ levels of overloaded muscle were lower than those of controls, it is suggested that in hypertrophied skeletal muscle at the time of sampling there was a reduced cellular redox milieu. In addition, it has been shown that sestrins play critical roles in exercise-induced adaptation, since sestrins are required to increase endurance, insulin sensitivity and mitochondrial biogenesis via PGC-1 alpha [25]. Interestingly, in our overload-induced hypertrophy, the decreased sestrin2 levels were associated with decreased levels of mitochondrial markers such as cytochrome c, COX4, Nrf2 and SIRT3. We suggest that the increase in the mass of muscle filaments due to overload was not accompanied by a similar increase in mitochondrial mass, which could explain this result. Indeed, a recent paper reports that 14 days of functional overload increased the levels of proteins that regulate mitochondrial fusion and decreased fission-controlling proteins, and this could explain the relative reduction in mitochondrial proteins [26]. In this study, we have confirmed that SIRT1 levels increased in overload-induced hypertrophy, but the possible relationship between sestrin2 and SIRT1 is not well known. It has been shown that resveratrol administration, which activates SIRT1, upregulated the expression of sestrin2 [27]. In another experimental model, amyloid beta-induced stress in human neuroblastoma cells resulted in increased sestrin2 and decreased SIRT1 expression [28]. When sestrin2 and SIRT1 levels were measured in serum samples of asthma patients, only sestrin2 levels increased compared to control groups [29]. Aging results in decreased sestrin concentration in human skeletal muscle [30].
In a recent study, the effects of daily protein supplementation on the downstream responsiveness of skeletal muscle mTOR were measured during immobilization in humans [31]. It turned out that immobilization reduced postabsorptive skeletal muscle phosphorylation of mTOR, S6, and sestrin2 [31], suggesting a complex regulation and role of sestrin2. SIRT1 is generally considered to be a protein that increases cell survival [32], and during caloric restriction (CR) the activity and protein levels of SIRT1 are increased [33]. It has also been reported that CR-induced SIRT1 activation is associated with enhanced generation of the small signaling molecule H2S [34]. In our previous study, we found that overload-induced hypertrophy of skeletal muscle increased the activity and protein levels of SIRT1 [8], which is confirmed in the current study. We therefore measured the activity of CBS, which is one of the major H2S-producing enzymes [35]. The results of a recent study suggest that exogenous H2S (NaHS) injection increased the diameter of fast-twitch muscles via activation of the mTOR/S6 pathway, leading to increased protein synthesis [36]. H2S causes the persulfidation of SIRT1, which increases SIRT1 binding to zinc ions, by which the SIRT1 deacetylase activity is increased [37]. However, in our overload-induced hypertrophy we could not detect increased levels of bioavailable H2S or CBS activity, suggesting that SIRT1 activation proceeds through different signaling pathways during CR and hypertrophy. Indeed, CR suppresses mTOR signaling [38], while overload-induced hypertrophy increases mTOR signaling, but in both situations the SIRT1 level is increased. Hypertrophy, the increased protein synthesis of skeletal muscle, is regulated by anabolic and catabolic cellular processes. FOXO1 regulates protein breakdown and mitochondrial turnover [39]. Akt can phosphorylate FOXO1, which then translocates from the nucleus into the cytosol or is degraded [39]. Moreover, SIRT1 can directly deacetylate FOXO1 and decrease the activity of this protein [21]. The decreased levels of FOXO1 in hypertrophied muscle could mean suppressed degradation of proteins [8]; however, we were also interested in mitochondrial quality control during hypertrophy. Therefore, we measured the content of PINK1, since the PINK1 signaling pathway regulates mitochondrial fission and ubiquitylation during mitophagy [40]. When PINK1 is activated by loss of the mitochondrial membrane potential or excessive production of ROS, this can readily lead to mitochondrial degradation via phosphorylation of Parkin. The maintained level of PINK1 suggests that overload-induced hypertrophy does not cause mitochondrial dysfunction. Indeed, we also found decreased levels of SIRT3 in overloaded muscle compared to control, and SIRT3 is implicated in mitophagy, since, in human glioma cells, silencing of SIRT3 blunted the degradation of mitochondria [41]. Therefore, the decreased levels of mitochondrial proteins in overload-induced hypertrophy are unlikely to be due to enhanced mitophagy, but could be due to a decreased mitochondrial biogenesis response to overload-induced hypertrophy. Animals Eighteen middle-aged (8-month-old) male Wistar rats were randomly divided into a control (C) and a hypertrophied (H) group. Animals were held in a thermoneutral room on a 12:12 h photoperiod and were provided with food and water ad libitum.
The entire experiment was carried out in the Research Center for Molecular Exercise Science, University of Physical Education, Hungary, and was approved by the National Ethical Committee (63/2/2017 and PE/EA/62-2/2021). Synergist Muscle Ablation The main synergist muscles (gastrocnemius, soleus) of the plantaris muscle were surgically removed. All the operations were carried out under deep anaesthesia with pentobarbital sodium (50 mg/kg). The surgical procedures were performed bilaterally, as described previously [8]. The neural and vascular supplies of the plantaris muscle remained intact. The control group underwent a sham operation in which the tendon of the plantaris and the tendons of its synergists were carefully separated, but the soleus and gastrocnemius muscles were not damaged or removed. After the operation and for the next two days, the animals were administered an analgesic. The overload period lasted for 14 days and the animals were monitored for the whole period. On day 14 the food was taken away, and the next morning the animals were euthanized (decapitation) after an overnight fast. The plantaris muscles were collected immediately after the removal of the fat and connective tissues. The muscles were weighed, frozen in liquid nitrogen and stored at −80 °C until further analysis. NAD Measurement A NAD/NADH Assay Kit (ab176723) was used to measure the NAD levels in the plantaris muscles according to the manufacturer's instructions. Plantaris muscles were homogenized in NADH/NAD Lysis Buffer. The samples were then centrifuged and separated into treated and untreated parts. The samples and 25 µL of diluted NADH standards were loaded into 96-well microplates in duplicate. Then, 25 µL of NAD/NADH Control Solution was added to the standards, and 25 µL of NADH Extraction Solution or NAD Extraction Solution was added, respectively, to the NADH and NAD samples. After this, the plates were heated at 37 °C for 15 min for NAD/NADH decomposition. Then, 25 µL of NAD/NADH Control Solution was added to the standards, and 25 µL of NAD Extraction Solution or NADH Extraction Solution was added to the NADH and NAD samples, respectively. Then, 75 µL of Reaction Mix was added to all wells. For 2 h, the signal was measured every five minutes at 485 nm excitation and 538 nm emission wavelengths. Western Blots The plantaris muscle homogenates were prepared with an Ultra-Turrax homogenizer (IKA, Staufen im Breisgau, Germany) in 10 volumes of lysis buffer. The samples were electrophoresed on 6-15% polyacrylamide (SDS-PAGE) gels. Sample volumes were between 3 and 6 µL. The proteins in the samples were transferred onto PVDF membranes. The membranes were then blocked with BSA (0.5-5%) or milk (5%) for 2 h at 4 °C. After blocking, the membranes were incubated with primary antibody at 4 °C overnight. Antibody list: . The next day, the membranes were washed three times with Tris-buffered saline-Tween-20 (TBST) at room temperature and incubated with HRP-conjugated secondary antibody for 2 h at 4 °C. After that, the membranes were washed again with TBST three times at room temperature. The membranes were then incubated with a chemiluminescent substrate, and protein bands were visualized on X-ray films. The bands were quantified using ImageJ software. The relative density was calculated relative to the housekeeping protein GAPDH. Measurement of H2S with the Monobromobimane Method The H2S assay was based on a previously published method, adapted here for tissue lysates [42].
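The kinetic plate readout described above is ultimately converted to concentrations through a standard curve, and the blot densitometry is normalized to GAPDH. The sketch below illustrates both calculations with made-up numbers; it is not the kit vendor's protocol, only the underlying arithmetic.

```python
import numpy as np

# Hypothetical NADH standard curve: known concentrations (uM) versus the
# signal read at 485 nm excitation / 538 nm emission
std_conc   = np.array([0.0, 0.25, 0.5, 1.0, 2.0])              # uM
std_signal = np.array([120.0, 480.0, 830.0, 1610.0, 3150.0])   # arbitrary units

# Fit signal -> concentration so unknown samples can be read off directly
slope, intercept = np.polyfit(std_signal, std_conc, 1)
sample_signal = np.array([950.0, 1400.0])
sample_conc = slope * sample_signal + intercept
print("estimated NADH (uM):", np.round(sample_conc, 2))

# Western blot densitometry: band intensity normalised to the GAPDH band
band_intensity, gapdh_intensity = 18500.0, 22000.0
print(f"relative density vs. GAPDH = {band_intensity / gapdh_intensity:.2f}")
```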
First, approximately 10-20 mg of the tissue samples was disrupted by a dismembrator. Alkylation/lysis was carried out by the addition of 500 µL of PBS set to pH 8.0 containing 1 mM monobromobimane (Sigma Aldrich, St. Louis, MO, USA) in a light-protected environment. After a short sonication on ice, the solutions were incubated for one hour at 37 °C in the dark. The reaction was stopped by the addition of 50 µL of 50% TCA, followed by centrifugation at 12,000× g and 4 °C for 10 min to remove precipitated proteins. Supernatants were removed and transferred into HPLC vials for measurement, and the remaining pellets were redissolved in 300 µL of 4% SDS/0.1 M NaOH for the BCA protein assay. Bimane-labeled species in the supernatants (3 µL injection volumes) were separated on a Phenomenex Luna C18(2) column (250 × 2.0 mm, 3 µm) on a Thermo Ultimate 3000 UHPLC system (Thermo Fisher, Waltham, MA, USA). A linear gradient elution using solvents 0.1% TFA/H2O (A) and 0.1% TFA/ACN (B) was carried out as described in Table 1. The fluorescence detector was set to excite at 390 nm and detect emission at 475 nm. Quantitation was conducted using a calibration curve established by derivatizing standardized H2S solutions. Measurement of CBS Activity Frozen tissue samples of ~10-20 mg were disrupted by a dismembrator (B. Braun 853162), followed by the addition of 400 µL of lysis buffer (150 mM KCl, 50 mM HEPES pH 7.4, 0.1% CHAPS, 2% protease inhibitor cocktail). After a brief sonication on ice, tubes were placed on a rotator for 30 min at 4 °C. After centrifugation at 12,000× g and 4 °C for ten minutes, the supernatant protein content was measured by BCA assay. All samples were diluted to a 1 mg/mL protein concentration using the lysis buffer. The prepared solutions were used to carry out the CBS activity assay exactly as described previously [43]. In brief, samples were mixed with cofactors (SAM, PLP) and the substrates homocysteine (prepared fresh from HCys-thiolactone) and stable-isotope-labeled serine (2,3,3-D-serine, Cambridge Isotope Laboratories, Inc., Tewksbury, MA, USA), followed by four hours of incubation at 37 °C. Reaction mixtures were quenched with "Reagent 1" of the EZ:faast kit (Phenomenex, Torrance, CA, USA) spiked with a known amount of stable-isotope-labeled cystathionine (3,3,4,4-D-cystathionine, Cambridge Isotope Laboratories, Inc.) as an internal standard. Sample preparation and measurement with the EZ:faast kit were carried out following the manufacturer's manual. For the HPLC-MS/MS measurements, a Thermo Vanquish (Thermo Scientific, USA) UHPLC coupled to a Thermo Q Exactive Focus MS was used, and the SRM transitions 4813 → 421 (product) and 4833 → 423 (internal standard) were monitored. Specific activities were calculated from the amounts of cystathionine produced and the protein contents of the samples. Statistical Analysis To assess significance, the two-sample t-test was used, and correlation matrices were employed to interpret the relationships between the values. The significance level was set at p < 0.05. Conclusions In conclusion, overload-induced hypertrophy of skeletal muscle is associated with increased SIRT1, Akt, mTOR and S6 levels and suppressed sestrin 2 levels, which are mostly responsible for anabolic signaling. On the other hand, the decreased FOXO1 and SIRT3 signaling suggests downregulation of protein breakdown and mitophagy. The decreased levels of NAD+, sestrin2 and OGG1 indicate that the redox milieu of skeletal muscle after 14 days of overloading is reduced.
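Converting the HPLC-MS/MS readout into a specific activity, and comparing groups with the two-sample t-test mentioned in the statistics section, amounts to a few lines of arithmetic. The numbers below are hypothetical, and the internal-standard calculation assumes equal response factors for analyte and standard.

```python
import numpy as np
from scipy import stats

# Hypothetical HPLC-MS/MS readout: cystathionine quantified against the spiked
# internal standard (assuming equal response factors), then a specific activity
area_ratio       = 0.42    # analyte peak area / internal-standard peak area
istd_amount_nmol = 10.0    # spiked stable-isotope cystathionine, nmol
protein_mg       = 0.10    # protein in the reaction (e.g., 100 uL of 1 mg/mL lysate)
incubation_h     = 4.0

cystathionine_nmol = area_ratio * istd_amount_nmol
specific_activity = cystathionine_nmol / (protein_mg * incubation_h)   # nmol/mg/h
print(f"CBS specific activity = {specific_activity:.1f} nmol mg^-1 h^-1")

# Two-sample t-test between control and overloaded muscles (illustrative values)
control  = np.array([10.2, 11.5, 9.8, 10.9, 11.1])
overload = np.array([10.4, 11.0, 10.1, 11.3, 10.7])
t_stat, p_val = stats.ttest_ind(control, overload)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")
```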
This paper confirms that SIRT1 is involved in the hypertrophy of skeletal muscle, and a causative relationship between SIRT1 and anabolic and catabolic signaling pathways was established. This study revealed new members of signaling pathways that play an active role in overload-induced hypertrophy of skeletal muscle. Some potential signaling agents, such as H2S, were excluded as contributing molecules in overload-induced hypertrophy.
4,526.2
2021-07-01T00:00:00.000
[ "Biology" ]
Government Debt Expansion and Stock Returns Using an international dataset, this paper documents a negative association between increases in the central government debt-to-GDP ratio and dollar-denominated stock index returns. Depending on the estimation method, raising the debt ratio by one percentage point diminishes the stock returns by between 39 and 95 basis points. We show that this result cannot be explained by changes in the investment risk. Instead, government debt issuance exerts upward pressure on private interest rates and appears to signal a greater tax burden in the future. These two factors coincide to produce a fall in stock market prices. I. Introduction For fear of losing popular support, democratically elected governments may be reluctant to embark on fiscal consolidation initiatives involving the raising of distortionary taxes or cutting expenditure. A society that is not averse to the idea of leaving negative bequests may opt for persistent deficits, leaving the burden of debt repayment to future generations (Cukierman and Meltzer, 1989). Popular anxieties, expressed recently in politicians' public statements and in the press, centre on countries' abilities to service their debts and the possibility of sovereign debt default. Indeed, such concerns appear to be well-founded, as the average central government debt to GDP ratio for OECD countries has risen from 38.7% in 1990 to 100.0% in 2015. Lane (2012) points out that economies laden with debt are characterized by multiple equilibria with the distinct possibility of a self-fulfilling speculative attack. A perception of heightened likelihood of default will increase yields, which in turn hinders efforts to service debt and makes default more probable. The recent European sovereign debt crisis illustrates this mechanism and exemplifies the grave ramifications that debt overhang can have for the economy, financial markets and broader society. A casual interpretation of governments' policy announcements might lead to the conclusion that their policies are based upon sound economic reasoning and strong empirical evidence. That, however, is far from the case. Prior to the financial crisis and the ensuing Great Recession, knowledge of fiscal policy was a highly contested area. Summarising the current state of scholarship on fiscal policy, Alesina (2012) concluded: "we as economists, do not know as much as we would like or perhaps we should. The issues are complicated...". One area about which very little is known is the relationship between government indebtedness and stock market performance. With a few exceptions, the literature has been silent on this issue. Notably, theoretical linkages between fiscal policy and stock prices were the subject of work carried out by Blanchard (1981) and Shah (1984). On the empirical side, Darrat (1988, 1990) used Canadian data to examine the relationship between stock market fluctuations and budget deficits. Our paper contributes to the literature by providing evidence based on a data set comprising 61 countries. The use of this resource permits us to draw conclusions that are generalizable internationally. Our results indicate that stock prices tend to decrease as governments become more indebted. Depending on the econometric specification, increasing public debt by 1% of GDP leads to a ceteris paribus drop in the stock market index ranging from 0.39% to 0.95%. We probe this issue further and attempt to provide a rationale for this unfavourable market reaction.
Perhaps our finding could be driven by changes in the risk premium component of discount rates, as the threat of government insolvency looms larger at higher levels of indebtedness. On the other hand, public spending stabilizes the economy in times of recession, providing a safety net for businesses. On balance, we find no strong evidence that changes in market risk are associated with issuance of additional bonds and bills by the government. Nevertheless, increasing the stock of public debt exerts an upward pressure on interest rates and results in a larger tax burden in the future. We believe that these two by-products of rising debt obligations are responsible for the observed stock price declines. The remainder of the paper is organized as follows. By reviewing theoretical and empirical studies, the next section reflects on the channels through which government debt expansions could affect stock prices. This review motivates our research questions and the testing that follows. Section III elaborates on data sources, sample composition and basic summary statistics. The main body of empirical evidence on the four hypotheses of interest is presented in Section IV. Alternative specifications of our stock returns model are considered in Section V. We end the paper with concluding remarks and recommendations. II. Public Sector Debt and Stock Prices It is difficult to make a priori predictions about the effect that issuance of additional government debt may have on equity prices. Within the framework of the extended IS-LM model, Blanchard (1981) argued that fiscal expansion under fixed prices can influence stock values. However, the direction of the effect is ambiguous within this setting. Similarly, in the theoretical model of Shah (1984), short-term jumps in stock prices can occur in response to an unanticipated increase in government expenditure, but whether these jumps are upwards or downwards depends on the parameters of the model. This theoretical indeterminacy is perhaps exacerbated by the lack of empirical research in this area. To the best of the authors' knowledge, no prior study has explicitly measured the response of stock markets to changes in the stock of government obligations. The most closely related research is that of Darrat (1988, 1990), who focused on the link between Canadian fiscal deficits and local stock returns. However, a joint reading of these two papers does not necessarily help to resolve the controversy regarding the direction of the impact. We begin our theoretical considerations with the conventional frameworks for pricing stocks, namely the dividend discount model (Gordon and Shapiro, 1956;Gordon, 1962) and the cash flow valuation model (Fisher, 1930;Williams, 1938). At their core, both models rely on a similar conceptual approach in that they sum the discounted future benefits accruing to shareholders, be they measured by dividends or free cash flow to equity. As we will proceed to argue, government borrowing can influence both the discount rate and the benefits realized by stock market participants. It is through these two channels that the impact of public sector debt on stock valuations could potentially manifest itself. In standard models, such as the Capital Asset Pricing Model (Sharpe, 1964;Lintner, 1965;Mossin, 1966) or the three-factor Fama-French model (Fama and French, 1993), the discount rate can be viewed as the risk-free rate augmented by the relevant risk premiums.
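To make the two channels concrete, consider the dividend discount model with a CAPM-style discount rate. The toy numbers below are purely illustrative and are not estimates from the paper; they simply show how a higher risk-free rate and slightly lower after-tax dividend growth combine to depress the model price.

```python
def gordon_price(dividend_next, discount_rate, growth):
    """Gordon growth model: P0 = D1 / (r - g)."""
    return dividend_next / (discount_rate - growth)

# Illustrative inputs only (not estimates from the paper)
d1, g = 2.00, 0.03                 # next-year dividend and its growth rate
rf, beta, erp = 0.02, 1.0, 0.05    # CAPM ingredients
r = rf + beta * erp                # discount rate = risk-free rate + risk premium

p_before = gordon_price(d1, r, g)
# Debt issuance: risk-free rate up by 50 bp, expected taxes shave dividend growth slightly
p_after = gordon_price(d1, rf + 0.005 + beta * erp, g - 0.002)
print(f"model price falls from {p_before:.1f} to {p_after:.1f} "
      f"({(p_after / p_before - 1) * 100:.1f}%)")
```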
Yields on short-term government debt are typically taken to approximate the risk-free rate. However, as pointed out by Blinder and Solow (1973), the flotation of new government debt issues will exert upward pressure on interest rates and, consequently, on discount rates. Whenever the increase in government bond yields becomes intolerable, the government may resort to "financial repression" by using regulatory and other indirect measures to force domestic financial intermediaries to invest more money in government bonds (Shaw, 1973;McKinnon, 1973). Even if such actions may restrain the yields on government bills and bonds, they will be unequivocally detrimental to corporations. The glut of public sector debt held by banks crowds out corporate lending (Becker and Ivashina, 2018) and will ultimately increase the costs of corporate borrowing. From a theoretical perspective, in the IS-LM model, a fiscal expansion increases aggregate demand and shifts the IS curve rightwards. This leads to a rise in interest rates and, relatedly, depresses investment and the capital stock (Faini, 2006). If, however, agents are rational and live either indefinitely or in dynasties, they will recognize that debt issued to finance current tax cuts will have to be repaid in the future. Consequently, the increase in current disposable income arising from the tax cut will be saved by agents in anticipation of a higher tax burden in the future. This saving behavior will offset the upward pressure on interest rates generated by public debt expansion. Barro (1974) shows that, in the presence of operative intergenerational transfers, increasing government borrowing leaves interest rates unaffected. To put it differently, given a certain level of public spending, agents are indifferent to whether the government chooses to finance itself by levying taxes or by issuing debt. This is because debt can be viewed as delayed tax liabilities (Plosser, 1987). This logic came to be known in the literature as 'Ricardian equivalence'. Of course, one may argue that the assumptions necessary to derive this invariance proposition do not hold in the real world. What is more important, however, is whether Ricardian equivalence holds empirically. While there is a long-standing discussion on this issue in the literature, no clear consensus has emerged. By examining US data, Plosser (1982, 1987) argued that the stock of public debt is unrelated to interest rates, a result that was later confirmed by Boothe and Reid (1989) for Canada. On the other hand, Engen and Hubbard (2005) as well as Laubach (2009) show a strong positive association between the projected increase in US federal debt and forward rates. Similarly, Bernoth et al. (2003) show that the interest rate spread between a Eurobond-issuing EU country and Germany depends on their relative debt changes. In light of these mixed results, we have decided to conduct our own independent analysis. In addition to the risk-free rate, discount rates also embody a risk premium element, which increases with the level of uncertainty. Issuance of additional public sector debt makes the possibility of default or repudiation more tangible. Corsetti et al. (2013) perceptively point out that sovereign default risk can spill over to the private sector. What is more, the poor financial condition of a government is likely to instigate volatile developments in the political arena. Baker et al.
(2016) construct an economic policy uncertainty index derived from the content of newspaper articles and conclude that some of the index's peaks coincide with "battles over fiscal policy". On the other hand, government budgets naturally stabilize output variations and can be used as lifelines for too-big-tofail privately-owned businesses (Brown, 1955;Fatás and Mihov, 2001;Wren-Lewis, 2010). The net effect of the forces involved is difficult to predict and needs to be assessed empirically. For this reason, we empirically evaluate to what extent the jumps in the level of prevailing risk, as measured by changes in stock market volatility, are related to increases in public debt. While the discount factor is critical for pricing of stocks, one also needs to consider the ability of corporations to generate income, as this will affect both the level of free cash flows to equity and dividends. Since issuance of government debt can lead to interest rate increases, consumers may become disinclined to finance their purchases with credit which, in turn, will lead to a drop in demand for products. Corporate profitability may be undermined further by the rising cost of servicing variable rate debt. What is more, investors pay attention to after-tax cash flows, which is important considering the view of Barro (1974) and Lucas and Stokey (1983) that government bonds are simply "congealed future taxes". Investors may equate increases in public sector debt to prospective hikes in corporate, dividend and capital gains taxes. This is corroborated by the empirical results of Park (1997), who shows that expected tax changes implied by yields on tax-exempt municipal bonds are linked positively to the level of federal debt. Our inquiry also attempts to establish whether raising government indebtedness is associated with future tax increases. III. Data The sample used in this study includes all countries for which stock market information and government debt data could be found in our datasets. The country-level stock market indices used here have been constructed by MSCI and downloaded from the Thomson Reuters Datastream. These indices are market capitalization weighted and denominated in US dollars. The common currency denomination is necessary, since we are adopting a global investor's perspective. At the time of the study, MSCI provided index information for 77 markets for which we computed continuously compounded returns. The annual series of government debt-to-GDP ratio, along with other macroeconomic variables, came from the World Development Indicators (WDI) database compiled by the World Bank. Unlike MSCI, the WDI dataset does not cover Taiwan and Palestine, so these two nations had to be excluded from our investigation. Furthermore, there was no debt data for another 14 countries, which led to their exclusion. Consequently, the final analysis is conducted on a set of 61 countries, which are listed in Appendix I at the end of the paper. Often the size of cross-section in our regressions is smaller due to availability of control variables and the need to difference or lag our indicators, which proves problematic for very short series. The WDI starts to provide debt data from 1990, a date marking the beginning of our investigation timeframe. The time series dimension ends in 2014. At this stage it must be mentioned that, for many nations, it is impossible to obtain data for the full period, which effectively means that we are basing our inferences on an unbalanced panel. 
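As a rough illustration of how the annual return and risk series used below can be assembled from daily index data, the following is a hedged sketch under assumed variable names (a DataFrame prices of daily, USD-denominated MSCI index levels with countries as columns); it is not the authors' code.

import numpy as np
import pandas as pd

# Continuously compounded daily returns from dollar-denominated index levels
daily_ret = np.log(prices).diff()

# Annual return: sum of daily log returns within each calendar year
annual_ret = daily_ret.groupby(daily_ret.index.year).sum()

# Annual risk: within-year standard deviation of daily returns, and its
# continuously compounded change (the log-change-in-volatility variable)
annual_vol = daily_ret.groupby(daily_ret.index.year).std()
dln_vol = np.log(annual_vol).diff()

# Stack to a (year, country) panel ready to be merged with the WDI debt series
panel = pd.concat({"Return": annual_ret.stack(), "dlnVol": dln_vol.stack()}, axis=1)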
Table I provides definitions of the variables used in our study, while Table II reports summary statistics. The dollar-denominated returns averaged about 3.72% per annum. Credit Suisse (2015) reports that real return averages computed over a longer period of 115 years exceeded our estimate for most countries. This is likely to be due to the fact that part of the sample considered here was affected by the recent global banking crisis and the economic slowdown that followed. The average debt-to-GDP ratio was 56% and tended to increase by 26 basis points per annum. We also gauge the changes in stock market risk using a log-change-in-volatility variable, Δln(Volatility), which measures the continuously compounded increase in volatility. Within a given year, volatility is calculated as the standard deviation of daily returns. The mean of this variable reported in Table II reveals a tendency towards diminishing riskiness associated with stock market investments over time. The interest rate here is that paid on bank deposits, which represent a convenient alternative to investing in equity. Depositors struggled to increase their wealth in real terms, as the mean of Inflation exceeded that of Interest_Rate. However, examination of the statistics for different percentiles reveals that depositors' real losses occurred primarily in nations struggling with hyperinflation. Our sample countries had, on average, a real growth rate of 3.22% per annum and unemployment of 8.31%. Finally, we do not observe any strong trends in the tax burden imposed by governments. Although the mean change in the government tax revenue to GDP ratio is slightly negative, the median has a small positive value. The last three columns of Table II report the results of three panel unit root tests. Since each of them relies on a different methodological approach, juxtaposition of the findings allows us to reach more reliable conclusions. The first test, by Levin, Lin and Chu (2002), assumes that the persistence parameter does not vary across cross-sectional units and relies on a t-statistic that, under the null of a unit root, is asymptotically normally distributed. The version of the test presented here allows for individual intercepts. The approach of Im, Pesaran and Shin (2003) is different in that ADF tests are run separately for each of the cross-sectional units. The W-statistic is based on the standardized average of the t-statistics obtained from these tests and, under the null, W has an asymptotic standard normal distribution. Instead of working with t-statistics, Maddala and Wu (1999) focus on the p-values from individual unit root tests. These can be combined, according to the principles outlined in Fisher (1932), to create a test statistic following a χ2 distribution. Table II reveals that, with the exception of Debt, the hypothesis of a unit root is strongly rejected for all variables. The stationarity of Debt is questionable, considering that the Im, Pesaran and Shin (2003) test fails to reject the null, while the Fisher-ADF test indicates a rejection only at the 10% significance level. Consequently, in the regressions that follow, we use the first difference of this variable (ΔDebt). Government Debt Changes and Stock Market Returns As was argued in Section II, issuance of additional government bonds can increase discount rates and lead to future tax increases, which would depress valuations of equities. On the other hand, the traditional Keynesian view holds that expansionary fiscal policy can provide a stimulus to the economy, which could benefit shareholders.
The relative validity of these two viewpoints can only be assessed empirically. To this end, we proceed to quantitatively measure the influence of central government debt increases on stock market valuations. Our primary objective is to focus on increases in debt, rather than the deficit. This is because debt needs to be sold in the markets and may consequently influence prices of assets, while the deficit is a purely accounting construct. Consequently, Table III reports models linking our Return variable with ΔDebt and additional controls. Models (1) and (2) employ a simple pooled OLS estimation with a common intercept, while models (3) and (4) include both country and year dummies. Since the null hypothesis of redundant fixed effects is strongly rejected, the latter two regressions are preferred on econometric grounds. The most important finding that becomes immediately apparent is that, irrespective of the estimation method and regression specification, issuance of new government debt depresses stock market valuations. An increase in debt equivalent to one percent of GDP lowers the dollar-denominated index returns by between 39 and 95 basis points. The hypothesis of debt neutrality is rejected in all models at the 5% significance level, or better. These results add credence to the claim that stock market investments can be crowded out by government bonds and bills. [ Table III about here] The estimated coefficients on the control variables warrant further reflection. Firstly, no significant contemporaneous association between GDP growth and returns has been detected. This finding mirrors the conclusions of Binswanger (2000, 2004), who argued that the nexus between growth rates in real activity and stock price movements broke down in the 1980s, both in the US and in the G7 countries. High unemployment appears to be a good signal for markets, which at first glance may seem counterintuitive, as it measures underutilization of resources. Although Boyd et al. (2005) note that rising unemployment is indeed followed by slower growth, they also report that, during expansion periods, this effect is dominated by an expectation of declining future interest rates. As a result, the stock market usually rises following bad news from the labor market. Lastly, interest rates are inversely related to market valuations, which is particularly apparent in model (4). This is not surprising, since a higher rate of interest leads to heavier discounting of future cash flows generated by companies and translates into higher costs of servicing corporate debt. The measures of fit are much better for the two-way fixed effect panels. This is primarily due to the fact that the year dummies are able to capture the common global trend in stock market movements and effectively isolate the domestic component of returns. The hypothesis that the regressors do not have explanatory power is rejected in all specifications. We also note that Inflation is not included in the set of independent variables, as it is highly correlated with Interest_Rate (ρ = 0.87). Its inclusion could lead to multicollinearity problems and inflated standard errors. As specified, our models do not suffer from multicollinearity and the highest variance inflation factor (VIF) in the models is 1.18. According to Chatterjee and Price (1991), estimation problems can arise when VIFs exceed the value of 10.
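A minimal sketch of how the two-way fixed-effects return regression and the VIF check just described could be implemented is given below. Column names such as dDebt, and the (country, year) MultiIndex on df, are assumptions for illustration; this is not the authors' estimation code.

import pandas as pd
import statsmodels.api as sm
from linearmodels.panel import PanelOLS
from statsmodels.stats.outliers_influence import variance_inflation_factor

# df is assumed to carry a (country, year) MultiIndex and the columns below;
# dDebt is the first difference of the debt-to-GDP ratio.
controls = ["GDP_Growth", "Unemployment", "Interest_Rate"]
exog = sm.add_constant(df[["dDebt"] + controls])

model = PanelOLS(df["Return"], exog,
                 entity_effects=True,   # country dummies
                 time_effects=True)     # year dummies
res = model.fit(cov_type="clustered", cluster_entity=True)
print(res.params["dDebt"], res.pvalues["dDebt"])

# Multicollinearity check: variance inflation factors for the regressors
X = df[["dDebt"] + controls].dropna()
vifs = {col: variance_inflation_factor(X.values, i) for i, col in enumerate(X.columns)}

The same structure can be reused for the volatility and interest rate regressions discussed later simply by swapping the dependent variable.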
In summary, the findings in Table III support the claim that increases in the central government's indebtedness diminish the wealth of shareholders. At this stage, it is important to ask through which channels this relationship establishes itself in the data. We will consider three possible mechanisms and endeavor to verify related evidence. Firstly, issuance of new debt erodes the creditworthiness of the government and increases the probability of its default. Such political uncertainties could potentially translate into higher stock market risk. Secondly, the action of selling newly issued government bonds and bills may increase interest rates, consequently depressing stock prices. Lastly, the need to borrow may reflect structural problems in balancing the budget and signal future tax increases. In what follows, we investigate each of the possible explanations in greater detail. Government Debt and Stock Market Riskiness In the seminal model of corporate debt pricing proposed by Merton (1974), the probability that a company will go bankrupt increases nonlinearly in the present value of debt relative to the current value of the firm. A similar relationship holds if the situation is assessed from the point of view of governments. As new debt is issued, the probability of default increases, undermining creditworthiness and credit ratings. Aizenman et al. (2013) use a large sample of countries to show that spreads on sovereign credit default swaps, which represent the cost of default insurance, increase with the public debt-to-tax base ratio. Notably, exceeding the debt capacity can also destabilize a country politically. In recent years, this has been witnessed in Greece, which balanced precariously on the edge of solvency. During the 2007-2015 period, this country had no fewer than 7 different prime ministers. In general, policy uncertainty has been documented to adversely affect stock prices and to exacerbate investment risk (Bittlingmayer, 1998; Baker et al., 2016; Antonakakis et al., 2013; Pástor and Veronesi, 2013). The above arguments explain why debt-financed fiscal profligacy can create a hazardous investment environment. However, one needs to bear in mind that there could be strong offsetting effects. Fiscal policy may be deliberately counter-cyclical, with stimulus or bailout packages and automatic fiscal stabilizers having a dampening effect on economic fluctuations (Brown, 1955; Fatás and Mihov, 2001; Fernández-Villaverde, 2010; Wren-Lewis, 2010). That being the case, questions can be raised about the net effect of these opposing forces. We endeavor to measure it empirically by linking changes in stock market risk to increases in government indebtedness. Table IV reports the estimates of four models where Δln(Volatility) is taken as the dependent variable. [ Table IV about here] The sign of the coefficient on ΔDebt appears to change depending on specification, with statistical significance being reached only in one model and at merely the 10% level. The assertion that increases in government borrowing aggravate investment risk finds little support in the data. Consequently, the story that the stock price declines accompanying debt expansion are caused by jumps in the risk premium should be treated with skepticism. Furthermore, other macroeconomic variables lack consistency in terms of the strength of their predictive power. Most variation can be explained by the period dummies, indicating that stock markets are strongly integrated and tend to change their riskiness simultaneously.
Overall, the findings presented in this section suggest that we need to look for drivers other than risk to rationalize the negative debt-return nexus. Are Interest Rates Affected? Whenever a government takes large quantities of bonds and bills to the market, they compete with private debt and drive up interest rates (Blinder and Solow, 1973). This could potentially raise the level of private interest rates in the economy and negatively affect stock prices. However, as Friedman (1978) reminds us, the conclusions of Blinder and Solow (op. cit.) hinge upon the assumption that government bonds and private sector real capital are perfect substitutes and, should this assumption be violated, debt-financed deficits will not necessarily lead to the abovementioned portfolio crowding-out effect. The academic discussion is further complicated by the fact that prior empirical papers fail to reach a consensus regarding the impact of fiscal imbalances on interest rates (Plosser, 1982, 1987; Evans, 1985; Faini, 2006; Ardagna et al., 2007; Laubach, 2009). If our investigation shows that expansions of government debt lead to a higher level of interest rates, this will have ramifications for stock market prices. Since the cash flows generated by stocks will be discounted more heavily, equities will consequently depreciate in value. This, in turn, will diminish the wealth of households and could reduce their consumption of corporate products. As the option of buying consumer durables on hire purchase becomes more costly, consumption will drop even further (Engen and Hubbard, 2005). Moreover, interest rate rises imply higher costs of servicing variable-rate corporate debt and, therefore, diminished profits. Lastly, high borrowing costs can reduce investors' demand for stocks, as investing on margin becomes less affordable. All of these effects could potentially coincide to produce significant falls in stock prices. [ Table V about here] Table V reports parameter estimates for models that link the interest rate level to increases in government debt and control variables. Inflation appears as a regressor in all specifications and always has a t-statistic in excess of 40. This means that dropping it could result in severe omitted variable bias. The most important finding in Table V is the robust rejection of Ricardian equivalence. An increase in the debt-to-GDP ratio by one percentage point appears to raise the interest rate by about 6 to 10 basis points. These estimates are twice as large as those obtained for the US by Engen and Hubbard (2005) and Laubach (2009). Consequently, in our sample, expansions in central government indebtedness increase interest rates in a non-trivial way which, in turn, has dire ramifications for stock markets. The interest rate modeled here is that accruing to depositors. This is quite sensible because, as Cebula (1985) pointed out, for the crowding-out effect to affect the private sector, government borrowing needs to influence private interest rates. Interestingly, the World Development Indicators dataset also includes information on bank lending rates for short- and medium-term loans. We replicate our regressions with the lending rates acting as the dependent variable and report our findings in Appendix II. The debt-neutrality hypothesis is again rejected in most of the specifications, and the sensitivity of the lending rate with respect to ΔDebt seems to be even greater than that recorded for deposit rates.
As a side note, we would like to point out that the same is true of the sensitivity to inflation, which suggests that the loan-deposit interest rate spread increases in an inflationary environment. Another issue that has been pointed out by Faini (2006) and Ardagna et al. (2007) is that the interest rate effect could be asymmetric. Since debt increases are more worrying in countries that already have an above-average indebtedness level, the market reaction could potentially be stronger. However, we have discovered that once the fixed effects and relevant controls are incorporated into our model, no evidence of asymmetries could be found. Taken together, the results presented in this section attest to the fact that government decisions to increase borrowing are accompanied by jumps in interest rates, which can adversely affect share prices through several channels. However, this is unlikely to be the end of the story. After all, our results in Table III indicated that the strong negative relationship between Return and ΔDebt persists even after controlling for the level of interest rates. Clearly, other forces must also be at play here. To probe this issue further, we investigate whether government debt issuance may signal increases in the future tax burden. Tax Implications of Government Debt Expansion In the absence of sales of public assets, the government needs to satisfy a borrowing constraint equating current debt to the present value of expected future surpluses (Smith and Zin, 1991; Chung and Leeper, 2007). Fiscal surpluses may not be easily achievable, as cuts to public spending can prove politically perilous. An alternative would be to increase the tax burden. Needless to say, raising taxes on corporate profits, capital gains or dividends reduces cash flows to shareholders and can result in stock price declines. Taxation can also lead to a significant deadweight loss (Feldstein, 1999) and expansion of the underground economy (Tanzi, 1983). The side-effects of high corporate tax rates are particularly troublesome and include lower economic growth (Lee and Gordon, 2005) as well as declines in investment, FDI and entrepreneurial activity (Djankov et al., 2010). All of these unintended consequences of the tax burden can further aggravate stock market falls. On one hand, debt issuance may be viewed as an innocuous way to smooth government revenue. On the other hand, it may be an ominous sign that structural budget imbalances are present and that the future tax burden will have to rise. We verify empirically whether public debt expansion is followed by increases in the tax revenue-to-GDP ratio by estimating the following model:

Tax_Increasei,t = β0,i + β1,t + β2ΔDebti,t + β3ΔDebti,t-1 + … + βn1+2ΔDebti,t-n1 + βn1+3Inflationi,t + εi,t

All variables appearing in the equation above have been defined in Table I. β0,i and β1,t stand for the country and year fixed effects, while εi,t denotes a random residual. Models (1) and (3) restrict the βn1+3 coefficient to zero. We would also like to note that (β3 + β4 + … + βn1+2) represents the total increase in the tax revenue-to-GDP ratio in the n1 years following the year in which an increase in public debt equivalent to 1% of GDP took place. Beginning our analysis with model (1), we selected the lag length n1 by using the Akaike criterion (Akaike, 1973, 1974) and capping the maximum number of lags at 4. This selection criterion indicated that 4 lags should be included. It should be noted that selecting higher order models leads to a substantial loss of degrees of freedom, since we are dealing with a panel with a relatively large cross-sectional dimension.
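A hedged sketch of how this distributed-lag tax regression, the AIC-based lag selection and the test of the cumulative lag coefficients could be set up is shown below. Variable names, the (country, year) panel layout and the helper functions are assumptions, not the paper's code.

import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

def add_debt_lags(df: pd.DataFrame, n_lags: int) -> pd.DataFrame:
    out = df.copy()
    for k in range(1, n_lags + 1):
        out[f"dDebt_lag{k}"] = out.groupby(level="country")["dDebt"].shift(k)
    return out

def fit_tax_model(df: pd.DataFrame, n_lags: int, include_inflation: bool = True):
    d = add_debt_lags(df, n_lags).dropna()
    cols = ["dDebt"] + [f"dDebt_lag{k}" for k in range(1, n_lags + 1)]
    if include_inflation:
        cols.append("Inflation")
    res = PanelOLS(d["Tax_Increase"], d[cols],
                   entity_effects=True, time_effects=True).fit()
    aic = 2 * len(res.params) - 2 * res.loglik   # Akaike criterion
    return res, aic

# Lag selection by AIC, capped at 4 lags as in the text
fits = {p: fit_tax_model(panel, p) for p in range(1, 5)}
best_p = min(fits, key=lambda p: fits[p][1])
res, _ = fits[best_p]

# H0: beta_3 + ... + beta_{n1+2} = 0 (cumulative post-expansion tax response)
names = list(res.params.index)
R = np.zeros((1, len(names)))
for k in range(1, best_p + 1):
    R[0, names.index(f"dDebt_lag{k}")] = 1.0
print(res.wald_test(R))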
Turning our attention to the results, it can be seen that the value of β2 coefficient is negative, which suggests the presence of debt-finance tax cuts. However, the statistical significance of this finding is debatable and such policy actions are not sustainable in the long-run. This is clear, as the initial cut is followed by tax increases, which are of much greater magnitude (i.e. | β2|<(β3+ β4+ β5+ β6)). This is true regardless of the estimation method. The cumulative tax hikes in the four years following the year of debt expansion are statistically significant in all of the regressions (i.e. the null of H0: β3+ β4+ β5+ β6 = 0 is consistently rejected). Such results indicate that the growing indebtedness of government does not simply represent ephemeral government revenue smoothing. Instead, it signals a structural deficit problem that will need to be addressed by changing taxation policy in the years to follow. Some economists of the Keynesian persuasion would argue that, due to its short-term growth boosting effect, a debt-financed expansion may be desirable, even if it is not sustainable over a long period. However, since our Tax_Increase variable is defined as the first difference in tax revenue-to-GDP ratio, our estimates indicate that the tax burden in the four years following debt expansion increases faster than GDP. This is undoubtedly bad news for companies and investors alike. Therefore, if agents are rational and forward-looking, increases in government indebtedness will have to result in immediate decreases in stock prices. This is why, in addition to the interest channel, the tax effects can be propounded as a rationalization for the negative association between Return and Debt. Although somewhat tangential to our main analysis, it is interesting to note that the coefficient on Inflation is negative, and significantly so in the two-way fixed effect panel. This may suggest that some countries are trying to inflate their way out of financial difficulties without resorting to increasing the tax burden. Such a policy could be implemented by using the open market operations of the central bank. It goes without saying that governments operating in countries where central banks are strongly independent or those residing in the Euro zone will be restricted in pursuing such policy avenues. V. Further Considerations In what follows, we present alternative specifications of the stock returns model introduced in the previous section. Since the joint hypothesis of redundant country-and year-fixed effects is consistently rejected in the returns regressions, we constrain ourselves to presenting two-way fixed effect panel models with a full set of controls. The first concern that we want to contemplate relates to whether the reaction to public debt issuance is uniform across different countries, regardless of their level of indebtedness. Presumably investors could become more apprehensive and agitated in cases where the public debt burden is already sizable. Here we use a 60% debt-to-GDP threshold to distinguish between the nations that are heavily laden with debt and those that are not. Our threshold selection is motivated by the fact that across EU member states a limit of 60% had been imposed by the Stability and Growth Pact of 1997 and its importance was further underscored by the Fiscal Compact of 2012 (Lane, 2012). To operationalize our inquiry, we created a dummy variable indicating heavily indebted countries and label it accordingly as I (Debti,t>60%). 
Subsequently, we interact I(Debti,t>60%) as well as (1-I(Debti,t>60%)) with ΔDebt and enter the resultant constructs as explanatory variables into our return regression. Such an approach permits us to differentiate the strength of the stock market reaction to debt issuance conditional on the level of government liabilities. The estimation results (displayed in Column (1) of Table VII) reveal that increasing public debt by 1% of GDP in countries that are below our debt threshold leads to a 75 basis point decrease in returns. The analogous estimate for the highly indebted countries is a fall of 109 basis points. While this difference between the two estimates may appear sensible and nontrivial from an economic perspective, it is insignificant from a statistical point of view (p-value = 0.4309). [Insert Table VII about here] Another important issue that we ought to consider at this stage is that not all forms of debt are equal. Governments that have control over their own legal tender and central bank may resort to monetizing their domestic currency denominated debt in times of need. Needless to say, such liberties cannot be taken with respect to external debt, making the likelihood of default on it appreciably higher. To delve into this issue empirically, we collect new data from the World Development Indicators database and construct the External_Debt variable, which divides the external public debt stock by GDP. By regarding all non-external debt as domestic, we further create a Domestic_Debt variable, which is likewise scaled by GDP. Both of these indicators, in their first-differenced form, are entered into our return regression (see specification (2) in Table VII). Although increases in both types of government debt significantly depress stock valuations, their impact is not homogeneous. As anticipated, the detrimental impact of foreign debt is more severe, which is evidenced by the significantly higher regression coefficient (p-value = 0.0089). Some caution is advised when interpreting these results, as the data used for this estimation was available only for 17 countries. Nevertheless, governments that consider equity investors to be an integral part of their electorate are advised to carefully consider the forms of debt that they plan to issue. Our next point of inquiry relates to the selection of the functional form. In our prior estimations we have presupposed that the relationship between changes in debt and stock prices is linear. Specification (3) in Table VII disposes of this assumption and includes a squared ΔDebt term as an explanatory variable. This term proves to be statistically significant at the 5% level and bears a negative coefficient. Such a finding points towards the existence of concavity in the function of interest. While the returns increase with debt reductions, they do so at a diminishing rate. On the other end of the spectrum, huge increases in public debt decrease the returns more than linearly, which could potentially reflect the devastating impact of defaults and financial panics. Last but not least, we investigate whether there is any evidence of a delayed response of stock market prices to changes in public debt. Unless central government indebtedness is explicitly considered as a risk factor, an observation that ΔDebt forecasts future returns would run contrary to the Efficient Market Hypothesis (Fama, 1970). In an efficient market, future prices should follow a random walk and be completely unpredictable.
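Before turning to the lagged specification, a short sketch of how the alternative regressors used in Table VII (the threshold interactions, the squared term and the lag) could be constructed is given below; the column names and the 60% threshold handling are illustrative assumptions, not the authors' code.

import pandas as pd

def add_alternative_terms(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    high = (out["Debt"] > 60.0).astype(float)            # I(Debt_{i,t} > 60%)
    out["dDebt_high"] = high * out["dDebt"]              # reaction in heavily indebted countries
    out["dDebt_low"] = (1.0 - high) * out["dDebt"]       # reaction in the remaining countries
    out["dDebt_sq"] = out["dDebt"] ** 2                  # concavity check
    out["dDebt_lag1"] = out.groupby(level="country")["dDebt"].shift(1)  # delayed response
    return out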
To examine this issue in greater detail, we include lagged ΔDebt as an additional explanatory variable in our return model. According to the findings reported in column (4) of Table VII, the impact of this variable is negative, which is consistent with our a priori expectations. Although the redundancy of this variable cannot be rejected at the conventional significance levels, the associated p-value is relatively low and equal to 0.13. It can be further inferred from the estimates that an increase in public debt by 1% of GDP reduces stock returns by a total of 137 basis points over a two-year period. VI. Conclusions This paper contributes to the vigorous debate on the impact of fiscal policy by showing that stock price performance is weakened by the issuance of additional public debt. In order to rationalize this finding, we empirically tested auxiliary hypotheses. First, the stock declines do not seem to be accompanied by elevated levels of return volatility, which invalidates justifications based on the risk premium story. Second, we examine the pressure that increasing government debt exerts on interest rates. Despite over 40 years of theoretical and empirical research in this area, there is still little consensus about the strength of such pressure and the size of the arising effect. Using an international sample, we show that interest rates increase by between 6 and 10 basis points when government debt is increased by 1% of GDP. From the perspective of stock market investors, the situation is exacerbated even further by the fact that the costs of servicing bloated public sector borrowing are financed by future increases in the tax burden. Our findings have a range of practical implications. First, they highlight the importance of fiscal self-restraint to policy makers. Future governments may find the prioritization of the balanced budget imperative difficult, as the population of developing countries is aging (Alesina, 2012). However, many pension funds are heavily invested in equity markets, and issuance of more public debt could seriously undermine the quality of life of senior citizens. Second, the results provide clear-cut guidance to international investors. In selecting their portfolio composition, forward-looking stock market participants may want to underweight countries that are expected to run chronic budget deficits financed by debt. Our findings with regard to interest rate behavior could also be instructive to those who have committed their funds to fixed income instruments. Last but not least, the insights provided are food for thought for some voters who believe that rising government liabilities are of no immediate concern, as the burden of debt repayment can be left to future generations. This is a somewhat misguided notion, as the ramifications of such actions are felt immediately in capital markets. Note: The dependent variable in the models above is the continuously compounded return on the MSCI country stock market index denominated in US dollars. Definitions of the explanatory variables are provided in Table I. To conserve space, the fixed effects are not reported. F-stat (Regression) tests the hypothesis that the model has no explanatory power, while F-stat (Redundant Fixed Effects) is for the null that both cross-section and period fixed effects can be omitted. ***, **, * denote statistical significance at 1%, 5% and 10%, respectively.
Note: The regressions above model the continuously compounded increase in stock market risk, which is measured by the standard deviation of daily MSCI index returns within a given calendar year. For the exact definitions of the explanatory variables see Table I. F-stat (Regression) tests the hypothesis that the model has no explanatory power, while F-stat (Redundant Fixed Effects) is for the null that both cross-section and period fixed effects can be omitted. ***, **, * denote statistical significance at 1%, 5% and 10%, respectively.
Note: Definitions of the explanatory variables are provided in Table I. The first F-statistic is for the hypothesis that the model has no explanatory power, while the second one is for the null that both cross-section and period fixed effects are redundant. ***, **, * denote statistical significance at 1%, 5% and 10%, respectively.
Note: The regressions presented above link increases in the tax burden to current and past government debt changes and inflation. The general regression equation can be written as follows: Tax_Increasei,t = β0,i + β1,t + β2ΔDebti,t + β3ΔDebti,t-1 + β4ΔDebti,t-2 + β5ΔDebti,t-3 + β6ΔDebti,t-4 + β7Inflationi,t + εi,t. Models (1) and (2) assume that (β0,1 = β0,2 = … = β0,N) and (β1,1 = β1,2 = … = β1,T), while models (1) and (3) restrict β7 to 0. In addition to the null hypotheses that the regression has no explanatory power and that the fixed effects are redundant, a third null is tested: it verifies whether historical debt increases are tax-neutral. ***, **, * denote statistical significance at 1%, 5% and 10%, respectively.
Note: All regressions reported in this table are two-way fixed effect panels. The dependent variable in the models above is the continuously compounded return on the MSCI country stock market index denominated in US dollars. I(Debti,t>60%) is a dummy variable taking the value of one whenever the debt-to-GDP ratio exceeds 60%. ΔExternal_Debti,t is the first difference in the external public debt stock-to-GDP ratio, while ΔDomestic_Debti,t is the first difference in the domestic central government debt-to-GDP ratio. Definitions of the remaining explanatory variables are provided in Table I. To conserve space, the fixed effects are not reported. F-stat (Regression) tests the hypothesis that the model has no explanatory power, while F-stat (Redundant Fixed Effects) is for the null that both cross-section and period fixed effects can be omitted. ***, **, * denote statistical significance at 1%, 5% and 10%, respectively.
9,697.8
2018-08-23T00:00:00.000
[ "Economics" ]
Sympathetic Voltage-Independent Regulation of Voltage-Gated Calcium Channels in Pancreatic β-Cells Voltage-gated calcium (CaV) channels are regulated by G proteins via voltage-dependent and independent pathways. Voltage-independent regulation of calcium channels is important for intracellular calcium concentration and insulin secretion. Voltage dependence of each pathway can be elucidated by a prepulse facilitation protocol. Using this experimental approach, we compared CaV regulation by GTPγS and noradrenaline (NA) in rat pancreatic β-cells and rat superior cervical ganglion (SCG) neurons. The SCG neuron is a model in which the bases of CaV channel regulation by G proteins have been established. CaV channel regulation through activation of the sympathetic nervous system has been poorly studied in native insulin-secreting cells. We recorded CaV channel currents by means of the patch-clamp technique in the whole-cell configuration. We found that application of both GTPγS (a nonspecific activator of G proteins) by cell dialysis and noradrenaline (NA)-exposure reduced CaV current amplitude in pancreatic β-cells and in SCG neurons. However, the inhibition of CaV channel currents in GTPγS-dialyzed SCG neurons was relieved by a strong depolarizing pulse. By contrast, in pancreatic β-cells, the inhibition was maintained after a strong depolarizing pulse. In SCG neurons, the CaV channel inhibition by NA is predominantly voltage-dependent, whereas in pancreatic β-cells it is only 40%. Thus, it appears that CaV channels in rat pancreatic β-cells are regulated mainly through a voltage-independent pathway. The signaling pathway for CaV channel regulation by NA in pancreatic β-cells appears to differ from the classic signaling pathway described in SCG neurons. Therefore, voltage-independent regulation of Ca2+ entry through CaV channels is a critical step in understanding the pathophysiology of type 2 diabetes. Introduction The islets of Langerhans are innervated by the sympathetic nervous system [1,16].Secretion of insulin by cells in these islets is inhibited by the neurotransmitter noradrenaline (NA) through activation of α2-adrenergic receptors coupled to G proteins [1,19].NA has been shown to regulate voltage-gated calcium (Ca V ) channels in insulin-secreting cell lines, however, this has not been observed in native pancreatic β-cells [3,13,19]. G protein regulation of Ca V channels has been studied extensively in neurons of the superior cervical ganglion (SCG), and the biophysical properties of Ca V channels are differentially expressed via voltage-dependent and -independent pathways [11,20].Both pathways reduce the Ca V channel current amplitude and regulate Ca 2+ -dependent processes, such as neurotransmitter release.NA-induced inhibition of Ca V channels in neurons is a fast voltage-dependent mechanism [6].The voltage-dependent pathway is thought to be delimited to the plasma membrane by G protein βγ subunits; the voltageindependent pathway is less understood.There is evidence that the membrane lipid PIP 2 is the molecule responsible for the voltage-independent inhibition of Ca V channels triggered by M 1 R activation [9,10,15].Furthermore, some other voltageindependent pathways have been documented [20]. Voltage-independent regulation of calcium channels governs intracellular calcium concentration and insulin secretion. 
The voltage-dependent and -independent pathways can be elucidated by a prepulse protocol that consists of two identical voltage pulses separated by a strong depolarizing pulse [8]. The strong depolarizing pulse is commonly used as a tool to differentiate the two mechanisms. It can be mimicked by action potential burst firing, and it is therefore physiologically relevant for studying the voltage dependence of CaV channel regulation [4,17]. Pancreatic β-cells and SCG neurons express α2-adrenergic receptors; therefore, we examined NA regulation of CaV channels. We used rat SCG neurons, a model in which the signaling pathways have been elucidated, so that we could compare NA regulation of CaV channels in pancreatic β-cells against NA regulation of CaV channels in SCG neurons. In order to compare G protein regulation in both cell types, endogenous G proteins were activated with GTPγS. A prepulse protocol was used to isolate the voltage-dependent and -independent pathways. Cell Culture Wistar rats were provided by the Universidad Nacional Autónoma de México School of Medicine's animal breeding facility and were handled according to the Mexican Official Norm for Use, Care and Reproduction of Laboratory Animals (NOM-062-ZOO-1999). Neurons were dissociated from the SCG by mechanical and enzymatic methods, as previously described [14]. Pancreatic β-cells were also obtained as previously described [5]. SCG neurons and pancreatic β-cells were plated and incubated at 37°C in a humidified atmosphere of 95% air and 5% CO2 for 16-24 hours before electrophysiological recordings. Electrophysiological study Recordings of CaV currents in SCG neurons and pancreatic β-cells were obtained at room temperature (22-24°C) by the patch-clamp technique in the whole-cell configuration, using an EPC-9 amplifier (HEKA Electronic, Lambrecht, Germany). Voltage protocols were generated, and current responses were digitized and stored, by means of the Patchmaster software (HEKA Electronic). Pipettes were pulled from borosilicate glass capillaries with a horizontal patch electrode puller (Sutter Instrument, Novato, CA, USA). For SCG neurons, the pipettes were filled with an internal solution consisting of (in mM) 140 CsCl, 20 TEA-Cl, 10 HEPES, 0.1 BAPTA-tetracesium, 5 MgCl2, 5 Na2ATP, 0.3 Na2GTP, and 0.1 leupeptin; the solution was adjusted to pH 7.2 with CsOH. Resistance of the pipettes was 1.8-2.0 MΩ. For pancreatic β-cells, the pipettes were filled with an internal solution consisting of (in mM) 140 CsCl, 32 TEA-Cl, 10 HEPES, 0.1 BAPTA-4Cs, 1 MgCl2, 3 Na2ATP, 3 Na2GTP, and 0.1 leupeptin; the solution was adjusted to pH 7.4 with CsOH. Resistance of the pipettes was 2.5-3.5 MΩ. Data Analysis Inhibition of the current induced by GTPγS or NA was calculated as the amplitude of the steady-state current under GTPγS dialysis or during NA exposure minus the current amplitude under control conditions, divided by the current amplitude under control conditions (and thus reported as a percentage). The facilitation index was calculated as the peak current amplitude at P2 divided by the peak current amplitude at P1. Time constants (τ) were calculated by fitting an exponential equation. Data are shown as mean ± SEM. Statistical differences were analyzed by t-test; P < 0.05 was considered significant.
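A compact sketch of these data-analysis steps (percentage inhibition, facilitation index, and exponential fitting of the time course) is shown below; the array and variable names are assumptions for illustration, not the authors' analysis scripts.

import numpy as np
from scipy.optimize import curve_fit

def percent_inhibition(i_control: float, i_drug: float) -> float:
    """Steady-state current under GTPgS/NA minus control, relative to control
    (the text reports the magnitude of this quantity as a percentage)."""
    return 100.0 * (i_drug - i_control) / i_control

def facilitation_index(p1_peak: float, p2_peak: float) -> float:
    """Peak current amplitude at P2 divided by peak current amplitude at P1."""
    return p2_peak / p1_peak

def mono_exponential(t, a, tau, c):
    return a * np.exp(-t / tau) + c

def fit_tau(t: np.ndarray, amplitude: np.ndarray) -> float:
    """Fit a single-exponential time course and return the time constant tau (s)."""
    p0 = (amplitude[0] - amplitude[-1], np.median(t), amplitude[-1])
    popt, _ = curve_fit(mono_exponential, t, amplitude, p0=p0)
    return popt[1]

A biphasic time course, as reported for GTPγS-dialyzed SCG neurons, would instead be fitted with a sum of two such exponential terms to obtain τ1 and τ2.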
Inhibition of CaV Current by GTPγS We first examined the fractions of voltage-dependent and -independent regulation produced by activation of endogenous G proteins with GTPγS. Although both types of regulation were seen in SCG neurons, the voltage-dependent pathway was predominant under G protein activation [7]. Representative CaV currents evoked by the prepulse protocol in control and in GTPγS-dialyzed SCG neurons and pancreatic β-cells are shown in Figure 1A and B, respectively. The CaV current amplitude was reduced by GTPγS in both cell types; however, this inhibition was relieved after a strong depolarizing pulse only in SCG neurons (Figure 1A and B). In SCG neurons, the time course of the reduction in CaV current amplitude by GTPγS was biphasic (Figure 1C; τ1 = 17.2 ± 8.8 seconds and τ2 = 29.9 ± 6.09 seconds), whereas in pancreatic β-cells it was monophasic (Figure 1D; τ = 31.2 ± 2.6 seconds). CaV current inhibition was greater in SCG neurons (P1 = 88 ± 2.3% and P2 = 28 ± 4.3%) than in pancreatic β-cells (P1 = 53 ± 4.6% and P2 = 50 ± 5.6%) (Figure 1E). The facilitation index was used to evaluate the fractions of voltage-dependent and voltage-independent regulation. Facilitation indices in SCG cells were 1.08 ± 0.06 (control) and 2.82 ± 0.026 (GTPγS), whereas in pancreatic β-cells they were 1.04 ± 0.01 and 0.99 ± 0.042, respectively. In summary, these data support the conclusion that CaV currents are reduced by endogenous G protein activation; however, voltage-dependent regulation is predominant in rat SCG neurons, whereas voltage-independent regulation predominates in rat pancreatic β-cells. Voltage-Independent Inhibition of CaV Current by NA in Pancreatic β-Cells CaV current in sympathetic neurons has been shown to be reduced by NA mainly through a voltage-dependent pathway [6]. Thus, we hypothesized that CaV currents in rat pancreatic β-cells are regulated by NA in a similar manner. We showed recently that CaV channels are regulated in SCG neurons and pancreatic β-cells via similar pathways when M1R is activated. We used a prepulse protocol and characterized a specific time course as a hallmark of a voltage-dependent pathway. Representative P1 and P2 CaV current traces elicited in control and NA-treated sympathetic neurons and pancreatic β-cells are shown in Figure 2A and B, respectively. CaV current was reduced by NA in both cell types; however, the current inhibition was relieved by the prepulse only in sympathetic neurons. The time course of the IBa amplitude in NA-treated sympathetic neurons and pancreatic β-cells is shown in Figure 2C and D, respectively. Notably, there was a reduction in CaV current amplitude in treated SCG neurons (87 ± 8.8% in P1 and 25 ± 3.5% in P2) and in treated pancreatic β-cells (44.7 ± 4% in P1 and 34.5 ± 4.4% in P2) (Figure 2). However, the reduced current was scarcely restored by a depolarizing prepulse in pancreatic β-cells compared to SCG neurons (Figure 2F: SCG control = 1.1 ± 0.06, SCG NA = 2.6 ± 0.42; pancreatic β-cell control = 1.02 ± 0.02, pancreatic β-cell NA = 1.4 ± 0.03). Taken together, these data show that NA reduces the CaV current amplitude in rat pancreatic β-cells through a voltage-independent pathway and suggest that this CaV current regulation by NA differs significantly from that observed in SCG neurons.
Discussion We showed in rat pancreatic β-cells that Ca V channels are inhibited by G proteins mainly via voltage-independent pathways.Voltage-independent regulation of calcium channels determines intracellular calcium concentration and insulin secretion.The macroscopic Ca V channel current was reduced to 55% by GTPγS dialysis.Similarly, in mouse pancreatic β-cells, GTPγS dialysis has been shown to reduce Ca 2+ currents by 30% [2].Inhibition in both mouse and rat pancreatic β-cells is minimal compared to that in SCG cells.The Ca V channel inhibition in the mouse model is partially released by a strong depolarizing pulse, suggesting a predominant voltage-dependent pathway under endogenous G protein activation [2].On the contrary, Ca V channel inhibition was not relieved despite the application of a strong depolarizing pulse in GTPγS-dialysed rat pancreatic β-cells.According to Elmslie and colleagues [8], the voltage-dependent regulation by G proteins in SCG neurons is released by a strong depolarizing pulse, and this relief is the hallmark of G protein regulation in neurons.Our data suggest that regulation of Ca V channels by G protein-coupled receptors in insulin-secreting pancreatic cells could be mediated by pathways that have no relationship to the classic regulatory pathways described in SCG neurons [8]. In SCG neurons, the NA signal is transmitted mainly through α 2 -adrenergic receptors coupled to Gi proteins.This signaling pathway is voltage-dependent and requires the action of Gβγ subunits [9,10].Similarly, α 2 -adrenergic receptor activation by NA reduces insulin release in rat pancreatic β-cells [18,19]. Ca V channels are thought to be similarly inhibited by activation of α 2 -adrenergic receptors in human β-, RINm5F, and HIT cells [13,19].The regulation in RINm5F is predominantly voltagedependent, whereas in human β-cells the inhibition is voltageindependent [13].Our data suggest that NA inhibits Ca V currents via a voltage-independent pathway, in a similar way to human β-cells.Interestingly, α 2 -adrenoreceptor stimulation does not inhibit L-type calcium channels in mouse pancreatic β-cells [3].Our study in primary cultured cells is the very first providing evidence that NA inhibits Ca V channels in pancreatic β-cells.The classic understanding is that voltage-independent regulation is mediated by G q/11 proteins involving PIP 2 hydrolysis, however, there is no evidence that activation of α 2 -adrenergic receptors hydrolyzes PIP 2 .The signaling cascade and specific Sympathetic Voltage-Independent Regulation of Voltage-Gated Calcium Channels in Pancreatic β-Cells activities of G protein subunits and their link to α 2 -adrenergic receptors in the modulation of β-cell Ca V channels remain to be elucidated. 
Inhibition of the insulin-secreting process by activation of the sympathetic nervous system has been reported [18,19]. Our findings are consistent with this notion. We showed that the CaV current diminishes in rat pancreatic β-cells treated with NA, resulting in decreased Ca2+ entry. Although calcium influx through CaV channels is a known part of this process, little is known about the CaV channel regulation that occurs via G-protein activation of the voltage-independent pathway. In conclusion, regulation of CaV channels by G proteins in pancreatic β-cells via a voltage-independent pathway appears to be an additional mechanism by which the Ca2+ concentration is controlled and a key step in such an important physiological process. Whatever the mechanism turns out to be, a voltage-independent mechanism is involved in pancreatic β-cells. However, an important question for future studies is whether or not a specific voltage-independent mechanism may account for the CaV current inhibition in pancreatic β-cells. Figure 1: Inhibition of CaV current amplitude by GTPγS via a voltage-independent pathway in rat pancreatic β-cells. (A) and (B) Representative current traces elicited by a prepulse protocol, which consisted of a pair of 10-ms depolarizing pulses to -5 mV (P1, P2) from a holding potential of -80 mV, with P1 and P2 separated by a prepulse (PP) to +125 mV lasting 25 ms. Left panel shows overlapping P1 and P2 current traces from rat superior cervical ganglion (SCG) neurons under control and GTPγS conditions. Right panel shows overlapping P1 and P2 current traces from rat pancreatic β-cells under control and GTPγS conditions. Time course of relative IBa (P1) in a GTPγS-dialyzed neuron (C) and a pancreatic β-cell (D). Summary of the percentage inhibition of P1 and P2 (E) and facilitation index (F) in rat SCG neurons and rat pancreatic β-cells. Figure 2: Inhibition of CaV current amplitude by NA treatment via a voltage-independent pathway in rat pancreatic β-cells. (A) and (B) Representative current traces elicited by the prepulse protocol, which consisted of a pair of 10-ms depolarizing pulses to -5 mV (P1, P2) from a holding potential of -80 mV, with P1 and P2 separated by a prepulse (PP) to +125 mV lasting 25 ms. (A) and (B), overlapping P1 and P2 current traces from (A) rat superior cervical ganglion (SCG) neurons and (B) rat pancreatic β-cells under control and NA treatment. Time course of relative IBa (P1) in NA-treated neurons (C) and in pancreatic β-cells (D). Summary of percentage inhibition of P1 and P2 (E) and facilitation index (F) in rat SCG neurons and rat pancreatic β-cells.
3,259.2
2018-02-02T00:00:00.000
[ "Biology" ]
Fuzzy neuro-genetic approach for feature selection and image classification in augmented reality systems ABSTRACT INTRODUCTION In the recent years, the researchers in the area of image processing, computer vision and robotics have shifted from 2 Dimensional technologies to 3 Dimensional technologies for capturing and analyzing images due to the recent advance in computing with respect to memory and processing power. An augmented reality system needs to perform many operations in real time including image capturing using input sensors, processing the images and providing suitable response to the user. Research in robotics and augmented reality systems use the information technology for providing efficient solution in medical diagnosis, military applications, online education, robotics and computer aided manufacturing. Augmented reality helps to enhance the view of real world objects present in the physical environment in terms of virtual elements so that the view is nothing but the blend of real and virtual elements [1]. It is interactive in nature and hence it lives between the reality and the virtual world. Using virtual reality the simulation of real world is possible and it also provides additional features for motion, sizing, coloring and effective recognition of objects. In computer vision based robotics, two dimensional images were used for forecasting the next move of the robot by analyzing the past and the current moves of the robot along with the rules governing change of the environment by applying constraint satisfaction techniques. On the other hand in augmented reality, the vision recognition system of a robot uses a three dimensional rendering engine for blending the virtual objects with the real objects in the world [2]. Learning is an important phenomenon in the area of robotics and augmented realities due to the nature of the images to be processed in the systems for making the robot movements more efficiently. The machine learning researchers in the past proposed feed forward and back propagation neural network for providing effective training for making the systems intelligent. However, the training time is more and the accuracy achieved by such systems is not sufficient to make efficient decisions in many real life applications including computer vision. In such a scenario, many extensions were proposed by various researchers to the existing machine learning algorithms. These algorithms increase the accuracy and reduce the convergence time. Among these, deep learning is currently the area of interest for researchers in the areas of artificial intelligence and machine learning. Moreover, the deep learning algorithms make use of a large-scale and hierarchical neural network which provides more efficiency through strong connections. One of the main advantages of deep learning algorithm is that, such algorithms provide great predictability with high accuracy and hence surpasses the other traditional machine learning algorithms which are used in various applications including image processing and speech recognition, natural language processing and intelligent question answering systems, cross lingual systems and machine translation [3,4]. 
In the past, the AlexNet system was proposed and it used a convolutional neural network that comprised of 60 million parameters as attributes and hence was able to come first in the ImageNet Large Scale Visual Recognition Challenge by providing higher accuracy of classification and reduced error rate when it was compared with other machine learning algorithms. After that, many extended forms of neural networks including radial basis function based neural networks were introduced and hence they were able to handle complex situations leading to the use of deep learning algorithms in many areas of image processing. One among the image processing application namely face recognition systems used a dataset of faces and classified them using neural networks and provided an accuracy rate of 97.45% in the effective recognition of faces which are available in a database of Labelled Faces in the Wild and it worked well by the optimization of transfer learning algorithms [5]. In image processing systems, feature extraction provides large number of features. Even in the benchmark datasets, the number of features is so high in such a way that they increase the time for classification. Moreover, many features obtained through feature extraction either do not contribute in the classification process or they provide only a negligible amount of contribution for decision making but increase the time exponentially. In such a scenario, the most contributing and important features must be identified. Most of the feature selection algorithms perform feature reduction based on the entropy values. Such techniques filter the features from the whole set of features. On the other hand, the incremental feature selection techniques select a subset of features and analyze them first. They add additional features after performing classification with initial set of features and by analyzing and adding other important features in order to perform feature optimization. Many optimization techniques are available in the literature for feature reduction including genetic algorithms, rule optimization techniques and linear programming models. Among them, swarm intelligence techniques and nature inspired techniques provide more smart decisions through intelligent decision making [6]. Genetic Algorithm (GA) is one such optimization method that has been researched by many researchers in the areas of artificial intelligence, machine learning, intrusion detection systems and image processing areas. The GAs follow the survival of the fittest model by selecting two parents at a time and producing children. An activation function is used to find the fittest children which will be used as the parents for the next generation. It provides operators like crossover and mutation to generate more number of off springs. The GA process uses a heuristic approach to solve the optimization problem by reducing the search space. It can either work individually in the decision making process or it can work with fuzzy rules and machine learning algorithms to solve an optimization problem more effectively. It uses a number of generation techniques for generating individuals from the given population. Therefore, it is more suitable for performing feature selection as well as classification [7]. Fuzzy set theory was proposed by Zadeh (1965) [8] and it has been used more frequently in the development of intelligent decision making systems because of its simplicity through rules and its similarity to human reasoning based on qualitative reasoning. 
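As a toy sketch of the GA-based feature selection discussed in the preceding paragraph, individuals can be encoded as binary masks over the feature set, with a fitness that rewards classification accuracy and penalizes large feature subsets. The population size, mutation rate and the simple k-NN fitness below are illustrative assumptions, not the authors' implementation; fuzzy set theory, introduced next, supplies the rule-based reasoning that complements such an optimizer.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def fitness(mask, X, y, penalty=0.01):
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask == 1], y, cv=3).mean()
    return acc - penalty * mask.sum() / mask.size    # prefer small feature subsets

def crossover(a, b):
    cut = rng.integers(1, a.size)                    # single-point crossover
    return np.concatenate([a[:cut], b[cut:]])

def mutate(mask, rate=0.05):
    flip = rng.random(mask.size) < rate
    return np.where(flip, 1 - mask, mask)

def ga_select(X, y, pop_size=20, generations=30):
    pop = rng.integers(0, 2, size=(pop_size, X.shape[1]))
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]          # survival of the fittest
        children = [mutate(crossover(*rng.choice(parents, 2, replace=False)))
                    for _ in range(pop_size - len(parents))]
        pop = np.vstack([parents] + children)
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()]                                      # best feature mask

In the system proposed in this paper the fitness evaluation would instead be driven by the fuzzy neuro-genetic classifier, but the structure of the selection loop is the same.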
Many fuzzy machine learning algorithms have been used in the literature for generating and applying rules from a dataset. Such systems used standard membership functions for making efficient decisions. Higher-order logics like fuzzy logic and temporal logic have the capability of performing reasoning under uncertainty through the gradation of truth values. Temporal logic is useful for analyzing past data and for predicting the future using current and past data. The combination of fuzzy logic and temporal logic helps to derive inference rules which can perform explanation-based reasoning and analogy-based reasoning. In image processing applications, fuzzy temporal rules can be combined with genetic algorithms to optimize the feature selection process, thus reducing the classification time and increasing the classification accuracy. In this work, a neuro-fuzzy classifier developed using convolutional neural networks, with a bias function for error reduction and fuzzy rules for weight adjustments, is also proposed. Finally, the rules derived from training the fuzzy neural networks are used to classify similar objects for effective recognition. The proposed fuzzy neural network uses a sigmoidal function as its activation function. For feature selection, a triangular membership function is used to form the fuzzy rules; during classification, a trapezoidal membership function is used in order to improve the classification accuracy. The simulation is carried out using the MATLAB simulation tool, and the results obtained from this work have been compared with other existing systems in this area. From the experiments conducted using a robotic application scenario, in which the robots are programmed for pick-and-place tasks captured by a sensor and an image capturing system with a camera, it is found that the proposed model provides more realistic views of real-world 3 Dimensional objects. The image-based recognition system proposed in this work consists of sub-components, namely recognition of vision information based on image processing techniques [9]. For this purpose, an input video is divided into a number of frames, and the frames most relevant to the robot object recognition system proposed in this work are retained. It considers the key frames and analyses them based on color, texture, movement and depth details derived from the dimensions of the image. It performs feature identification using the fuzzy neural network and extracts the relevant features by applying fuzzy rules. From the extracted features, the most relevant features are selected by applying fuzzy rules, and the selected features are used as the input feature vector to the classifier. The main advantage of the proposed system is the increase in classification accuracy. The rest of this paper is organized as follows: Section 2 provides a survey of related works in the areas of augmented reality, neural networks, fuzzy systems and classification algorithms. Section 3 explains the algorithms proposed in this paper along with the proposed model. Section 4 presents the results obtained from this work and provides suitable discussion of the results. Section 5 gives conclusions and also provides some suggestions for future enhancements. RELATED WORKS There are many works that have been developed by various researchers in the direction of feature selection, classification and augmented reality in the past [7,8,[10][11][12][13][14]].
The rest of this paper is organized as follows: Section 2 surveys related work in the areas of augmented reality, neural networks, fuzzy systems, and classification algorithms; Section 3 explains the algorithms proposed in this paper along with the proposed model; Section 4 presents the results obtained in this work together with a discussion; Section 5 concludes the work and suggests future enhancements. RELATED WORKS Many works on feature selection, classification, and augmented reality have been developed by various researchers in the past [7], [8], [10]-[14]. Among them, Mohammad et al (2014) developed a novel evolutionary-oriented incremental algorithm for selecting useful features within a new framework. They tested the framework on an ordinary genetic algorithm and also developed new methods, employing two new operators for addition and deletion that randomly change the length of the methods. The major advantage of their algorithm is that it increases classification accuracy while reducing classification time. Sannasi et al (2016) [15] introduced an intelligent Conditional Random Field based feature selection algorithm to build an efficient intrusion detection system. They used intelligent algorithms to generate rules that support effective decisions over the KDD Cup dataset, which they used to evaluate their intrusion detection system, and they showed through a number of experiments that their feature selection algorithm requires less time and fewer features while enhancing classification accuracy. Ganapathy et al. (2013) [3] discussed in detail intelligent techniques for feature selection and for classification using the selected, contributing features, focusing on the selection of features that make the classification process effective for intrusion detection systems. Ulrich Neumann and Suya (1999) [16] demonstrated augmented reality applications involving scene annotation, pose stabilization, and an extendible tracking range for tracking natural features in unprepared environments. Yuan et al (2006) [5] developed a new registration algorithm for robust estimation, feature tracking, and reconstruction, also used for tracking natural features; it superimposes virtual objects over an arbitrary region preferred by the user. Nitin et al (2018) [17] developed a novel method for finding object edges in an image; they modified the fuzzy membership function to concentrate multi-focus fusion on fuzzy edge detection, validated the method with experiments on satellite and color images, and compared it with other existing systems. Akash Bapat et al (2016) [18] developed a novel scheme that uses rolling shutter cameras for tracking the 6-DOF head pose. The authors of [19] developed a simple linear iterative clustering model that constructs superpixels with a novel scheme, generating the superpixels by clustering based on proximity and color similarity in the image plane. Ioannis et al (2012) [20] proposed a new transmission system together with the design, control, and performance analysis of tele-operation force feedback, achieving better performance than other existing systems. Kanimozhi et al (2018) [21] proposed a new fuzzy-logic-based prediction model, called the Intelligent Risk Prediction System, for predicting breast cancer using fuzzy temporal rules.
They applied intelligent fuzzy rules to the feature selection and classification processes on the standard benchmark Breast Cancer dataset available in the UCI Machine Learning repository. Selvi et al (2016) [22] proposed a fuzzy logic and time-constraint based method for effective routing in wireless sensor networks, achieving better performance in terms of network lifetime and communication efficiency thanks to the use of fuzzy logic and intelligent rules. Sethukkarasi et al (2014) [23] introduced an intelligent neural-network-based fuzzy logic that incorporates time constraints into the knowledge representation model used for mining temporal patterns, and showed that their prediction system outperforms existing systems in identifying and detecting diabetes and heart disease. Jaisankar et al (2012) [24] developed an intelligent agent based system that applies fuzzy rough sets and outlier detection techniques to detect intruders, achieving better performance in terms of intrusion detection accuracy and false alarm rate. Sairamesh et al (2015) [25] proposed a new prediction system for predicting user interests and providing relevant information by using relevance feedback and a re-ranking process; the authors achieved better prediction accuracy than existing systems. Tahriri et al (2015) [6] proposed a heuristic mathematical model that optimizes robot arm movement time, minimizing the makespan and maximizing the total number of units produced each day in robot cells. In their model the user keys the station, unit section, and robot into the simulation software after creating 3D files and sending them to the virtual reality system; the data are then converted into a text file and transferred to the robot arm movement, where a time optimization algorithm determines the optimum task sequence. WeiJie Wang and Hua Gen Wan [26] described the characteristics and essential services of augmented reality in real-world applications. They gave a brief history of virtual and augmented reality, explained the augmented reality workflow, and described the stages of various augmented reality applications, namely the image acquisition, feature extraction, feature matching, geometric verification, and associated information retrieval processes, highlighting the essential needs and essence of augmented reality. However, most of the works in the literature cannot provide sufficient classification accuracy because they lack better feature selection. Hence, a new feature selection algorithm is proposed in this paper to improve the classification accuracy. SYSTEM ARCHITECTURE The overall architecture of the proposed classification system is shown in Figure 1. It consists of the following major components: the Augmented Reality Image Dataset, the Camera Capture Image Dataset, the User Interface Module, the Object Recognition System, the Decision Manager, the Temporal Information Manager, the Fuzzy Rule Base, and the Knowledge Base.
Augmented Reality Image Dataset: A standard benchmark dataset, the AR image dataset, which contains millions of images, is used to evaluate the proposed system. Camera Capture Image Dataset: This dataset consists of 1000 images collected with a camera. User Interface Module: It collects the necessary data from these two datasets, decides which data are to be used for evaluating the proposed system, transfers the collected data to the object recognition system for further processing, and receives the recommended result from the decision manager. Object Recognition System: It consists of four sub-systems, namely feature extraction, feature selection, and object recognition, which together form the image pre-processing subsystem, and the classification subsystem, which contains an existing classification algorithm, the Convolutional Neural Network (CNN). The pre-processing subsystem performs its task with a newly proposed pre-processing algorithm called the Intelligent Agent based Incremental Feature Selection Algorithm (IAIFSA) for effective pre-processing of the image datasets. In addition, the existing Convolutional Neural Network algorithm is used in this paper for effective classification of the image dataset; within this classification algorithm, a genetic algorithm process applies temporal constraints and temporal fuzzy rules. Figure 1. System architecture Temporal Information Manager: It supports classification in the object recognition process of the proposed system and provides the necessary details about time constraints. Fuzzy Rule Base: It contains the fuzzy rules needed for decision making over the dataset, using a parallel approach for applying genetic operations and forming fuzzy temporal rules. Decision Manager: The decision manager takes the final decision on the image data transferred from the object recognition system, with the help of the Temporal Information Manager and the fuzzy rule base. PROPOSED WORK This section explains in detail the proposed intelligent incremental feature selection algorithm for selecting the most contributing features used to classify the image datasets. The proposed feature selection algorithm extracts the relevant features from the input images, recognizes the images more precisely, and finalizes the features according to the feature extraction and recognition results. Next, the existing Convolutional Neural Network classification algorithm is used to classify the images with the help of the fuzzy rules generated by the proposed system. Feature selection In this work, an Intelligent Agent based Optimal Feature Selection Algorithm (IAOFSA) is proposed in order to improve decision accuracy in the classification of images obtained from the virtual reality environment. The algorithm combines optimization rules based on temporal constraints with intelligent agents in order to select features that reduce the classification time through temporal analysis of the dataset. It applies time-constrained analysis to image features that can vary over time to form a video sequence.
Therefore, at each time instant the information gain ratio values are computed from the frequencies of occurrence of the features in the image and the rate at which they change with respect to time. Here, temporal analysis of the image features is carried out at the second and millisecond level in order to provide the optimal prediction for robotic arm movement. The main advantage of the proposed feature selection algorithm is that it reduces the classification time and increases the accuracy of next-movement prediction through the results of the classifiers. The algorithm finds the information gain for the segments s1, s2, s3, …, sn obtained from the image I at times t1, t2, ..., tn. The equations used to find the information gain value of the image I are given in (1)-(3). Here, I is the dataset obtained from the collection of frames V1, V2, …, Vn; the Si are the segments obtained from the image I; FR is the set of features obtained from the image segments; Info denotes the information gain values; and IGR denotes the information gain ratio obtained for the ith frame of the video V. In this model, a set of frames collected from a video is treated as the images, and each frame is segmented using a distance-based approach that divides the image into eight equal segments. The detailed steps of the proposed IAOFSA are as follows: Intelligent Agent based Optimal Feature Selection Algorithm. Input: A video sequence V. Output: A set of 'm' features FS collected from a set of 'n' images formed by the frames of V. Step 1: Read the contents of video V. Step 2: Initialize the set of selected features as empty by setting FS = { }. Step 3: Initialize i = 1. Step 4: Call the frame formation agent to divide the video into 'n' frames and to make equally sized segments S1, S2, …, Sn on each frame. Step 5: Call the computation agent to compute the value of Info(I, t) from the images using the formulae for the time instances t1 and t2, respectively. This algorithm considers moves in eight directions, namely front, back, left, right, and the four diagonal directions, for effective movement of the robotic arm. The intelligent classification algorithm applies fuzzy rules and genetic-based optimization techniques in the neural network in order to find the most optimal movement among the promising directions. RESULTS AND DISCUSSION The proposed model has been implemented for predicting augmented reality images accurately and has been provided with sufficient data. Here, we modeled a new robot and collected 50 images by conducting various experiments; the collected images form our own dataset and are used as the input dataset in this work. Figure 2 shows the experimental setup of the newly modeled robot. Table 1 shows the number of features extracted using the existing algorithm and the proposed algorithm. Table 2 shows the five important features selected by the proposed Intelligent Agent based Optimal Feature Selection Algorithm (IAOFSA): angle, time, shape, start position (x, y), and end position (x, y). All of these features are used to improve the classification accuracy when testing on the robot images.
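Because equations (1)-(3) are not reproduced legibly in this text, the sketch below falls back on the standard Shannon-entropy definitions of information gain and gain ratio as an assumption; it scores discretised per-frame feature values against the target labels and keeps the top-scoring features, in the spirit of the IAOFSA scoring step described above.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain_ratio(labels, feature_values):
    """Gain ratio of one (discretised) feature: (H(labels) - H(labels | feature)) / split entropy."""
    n = len(labels)
    base, cond, split = entropy(labels), 0.0, 0.0
    for v in set(feature_values):
        subset = [l for l, f in zip(labels, feature_values) if f == v]
        p = len(subset) / n
        cond += p * entropy(subset)
        split -= p * math.log2(p)
    return (base - cond) / split if split > 0 else 0.0

def select_features(frame_features, labels, top_m=5):
    """frame_features: {feature_name: list of discretised values, one per frame/segment}."""
    scores = {name: info_gain_ratio(labels, vals) for name, vals in frame_features.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_m]
```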
Figure 3 shows the classification accuracy of the proposed algorithm and the existing algorithms when all the features of the robot images are used. Here, we considered 12 features for decision making over the classifiers SVM, DT, Naïve Bayes, Random Forest, and the proposed NGFRC. From Figure 3, it can be seen that the proposed algorithm performs well when all 12 features of the robot images are used, compared with the existing classification algorithms SVM, DT, Naïve Bayes, and Random Forest. Figure 3. Classification accuracy with full features Figure 4 shows the classification accuracy of the proposed algorithm and the existing algorithms when only the features chosen by the proposed feature selection algorithm are used. Here, we considered only the 6 selected features for decision making over the classifiers SVM, DT, Naïve Bayes, Random Forest, and the proposed NGFRC. From Figure 4, it can be seen that the proposed algorithm performs well with the selected features of the robot images when compared with the existing classification algorithms. Figure 5 shows a comparative analysis of the classification accuracy of the proposed algorithm over various experiments E1, E2, E3, E4, and E5. Here, we considered the full [15] features and the selected [5] features for the classification process. From Figure 5, it can be observed that in all the experiments the proposed algorithm performs better when it uses the selected features than when it uses the full features of the robot images. This is due to the use of effective and more contributing features when making decisions in the classification process. Moreover, the proposed classifier uses the neuro-genetic algorithm and fuzzy rules for decision making. CONCLUSION AND FUTURE WORKS In this paper, a new approach for implementing an augmented reality system by applying fuzzy genetic neural networks has been proposed and implemented for effective image classification and for predicting the arm movement. It consists of two components, namely the feature selection and classification modules. The proposed model uses fuzzy-logic-based incremental feature selection to extract the relevant features used to recognize the important characteristics of 3D images. Moreover, this paper explains the implementation and results of the proposed algorithms for an augmented reality system using image recognition, feature extraction, feature selection, and classification, considering both the global and local features of the images. For this purpose, we propose a three-layer fuzzy neural network implemented with weight adjustments based on fuzzy rules in the convolutional neural networks, together with a genetic algorithm for effective optimization of the rules. The classification algorithm is likewise based on a fuzzy neuro-genetic approach consisting of two phases, a training phase and a testing phase. During the training phase, rules are formed from the objects, and these rules are applied during the testing phase to recognize objects, which can be used in robotics for effective object recognition. The experiments conducted in this work show that the proposed model is more accurate in 3D object recognition.
5,870.2
2019-09-01T00:00:00.000
[ "Computer Science" ]
ROI based Indonesian Paper Currency Recognition Using Canny Edge Detection Paper currency recognition is important for automatic payment system. The paper performs a nominal paper detection process using image processing with canny method implemented in python programming language. The canny method is used to find edge features in the nominal currency. By using template matching of image reference, region of interest (ROI) of nominal value is extracted so that it can be used in any orientation of paper currency image. The ROI of nominal image is processed by canny edge method and spatial transformation to strengthen the image features and being processed by template matching to decide nominal currency. The study has successfully tested nominal value of 1000, 2000, 5000, 10000, 20000, 50000, and 100000 Indonesia banknotes which then the currency value will appear in the value variable in python. Keywords— paper currency recognition, image processing, canny method, python, OpenCV. I. INTRODUCTION Cash transaction is part of our daily life. When conducting cash transactions, there is often a mistake because the conditions of money and texture are almost the same. This certainly will be detrimental when making payments for such items. This condition can occur anywhere with anyone when a transaction is carried out in cash and this will result in a loss for either party. This loss led to ideas to be able to create a system which can detect nominal paper currency quickly and accurately in the hope that the computer or the system can recognize the paper currency. Banknotes is money made from paper with certain images and stamps and is a legal payment instrument. Paper money has value because of its nominal value. Therefore, paper money only has two kinds of values, namely nominal value and exchange rate. Current Indonesia banknotes can be seen in Figure 1. Paper currency recognition (PCR) system is an important area of pattern recognition. A system for the recognition of paper currency is one kind of intelligent system which is a very important need of the current automation systems in the modern world of today. It has various potential applications including electronic banking, currency monitoring systems, money exchange machines, etc . This system is built with several methods of image processing, from segmentation, then detection of nominal currency margins, and improving image detection results. Then the detection results will be extracted into variables that will be recognized by the computer for further processing. Python was chosen as a programming language assisted by several libraries such as OpenCV and pytesseract for the extraction of images obtained into digit variable numbers. Figure 1. List of Indonesia Banknotes The system presented is designed to recognize paper currency. Input to the system is an image acquired by a scanner or a digital camera, containing the paper currency and its output is the features of the paper currency. The system consists of the modules: Image acquisition, pre-processing including noise removal, feature extraction, classification and recognition. Research of paper currency recognition has been conducted using image processing [1][2] with improved computational intelligence improvement using neural network [3][4] [5] and fuzzy algorithm [6]. Indonesia banknotes paper currency recognition system has been developed [7] [8]. The contribution of this paper is algorithm improvement to preprocess image so that it can handle image with variety of orientation. 
Another contribution is a validation that the Canny detection method can improve detection accuracy. A. Digital Images A digital image is an image f(x, y) that has been digitized in both its spatial coordinates and its brightness level. The value of f at the coordinates (x, y) gives the brightness or gray level of the image at that point. A digital image is represented by a matrix consisting of M columns and N rows, where each intersection of a column and a row is called a pixel (picture element), the smallest element of an image [9]. Digital image processing is a process that aims to analyze images with the help of a computer. B. Template Matching Template matching is a technique in digital image processing for finding small parts of an image that match a reference template image; it is widely used in low-level vision processing to localize and identify image patterns. The principle of the method is to compare the object image to be recognized with an existing template image. Each object image has its own degree of resemblance to each template. Recognition is performed by examining the highest similarity value together with a recognition threshold; if the similarity value falls below the threshold, the object image is categorized as an unknown object. C. Canny Method One well-known edge detection operator is the Canny edge detector developed by John F. Canny [10]. The Canny algorithm satisfies several of the most important edge detection criteria: 1) Good detection (detection criterion): the ability to locate and mark all edges in accordance with the chosen convolution parameters, while offering great flexibility in setting the desired level of edge thickness. 2) Good localization (localization criterion): Canny makes it possible to minimize the distance between the detected edge and the original edge. 3) Clear response (response criterion): there is only one response per edge, so edges are easily detected and do not cause confusion in subsequent image processing. The choice of the Canny edge detection parameters greatly affects the resulting edges; these parameters include the Gaussian standard deviation and the threshold value. III. SYSTEM DESIGN The system is designed in several stages covering image acquisition, image processing, cropping, and extraction. The system flow diagram for this study is shown in Figure 2. In the proposed system, a high-resolution scanner or camera is employed to acquire the image, which is then converted to grayscale. Using a reference template image, the grayscale image is correlated to find the reference position. The region of interest (ROI) containing the nominal value is then selected relative to this reference position, which makes it possible to apply the algorithm to paper currency images in any orientation. Selecting a region of interest also simplifies the computational complexity. The edges of the ROI image are first filtered using the Prewitt method; the image edges are then detected using Canny's edge detection method and further processed with a binary threshold to produce a clear image.
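A minimal OpenCV sketch of the reference localisation and ROI processing steps just described is shown below. The ROI offset and size, the Gaussian blur, the Canny thresholds, the use of Otsu's method for the binary threshold, and the file names are placeholder assumptions, not values from the paper.

```python
import cv2

def locate_reference(gray, template):
    """Find the reference position in the grayscale banknote image by template matching."""
    result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    return top_left, score

def nominal_roi_edges(gray, ref_pos, roi_offset=(200, 40), roi_size=(260, 90),
                      canny_low=100, canny_high=200):
    """Cut the nominal-value ROI relative to the located reference, then apply
    Canny edge detection and a binary (Otsu) threshold; all numeric values are placeholders."""
    x, y = ref_pos[0] + roi_offset[0], ref_pos[1] + roi_offset[1]
    roi = gray[y:y + roi_size[1], x:x + roi_size[0]]
    blurred = cv2.GaussianBlur(roi, (5, 5), 0)          # suppress noise before edge detection
    edges = cv2.Canny(blurred, canny_low, canny_high)
    _, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return edges, binary

# Hypothetical usage with placeholder file names:
# gray = cv2.imread("banknote.jpg", cv2.IMREAD_GRAYSCALE)
# tpl  = cv2.imread("reference_template.jpg", cv2.IMREAD_GRAYSCALE)
# pos, _ = locate_reference(gray, tpl)
# edges, binary = nominal_roi_edges(gray, pos)
```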
The resulting image is then analyzed using template matching or optical character recognition to extract the nominal information. Figure 2. System Flow Diagram The template matching step determines the characteristic region of each currency, which is later used as a position parameter when locating the nominal value to be detected. The target ROI is then selected with a position and orientation consistent with the reference coordinate; the ROI image adapts to the orientation of the reference image in translation, rotation, and scale. Using the ROI instead of processing the whole image also significantly reduces the computational complexity. Figure 5. ROI target in accordance with the reference coordinate The result of this further processing is the region of interest, which is followed by image processing with Canny edge detection and a binary threshold. Based on the research conducted, the system successfully detected the sample currency images without error in a qualitative evaluation. Each currency nominal is detected and can be extracted into a variable in the form of numeric digits corresponding to the currency's nominal value. In practice, however, differences in size and pixel resolution between currencies can affect the template matching in the cropping process; if the cropping position is acquired incorrectly, the nominal currency reading will not match the desired target. The performance of the system has also not yet been calculated or simulated quantitatively. V. CONCLUSION This paper has presented paper currency recognition with ROI-based processing and Canny edge detection. It is shown that ROI-based image detection reduces computational complexity and adds flexibility regarding the acquired image. The Canny method followed by a binary threshold is very appropriate for this study because it produces results that are very close to the targeted nominal characters. Differences in the size and pixel resolution of the banknote used as a sample will affect the process of reading the target, because when the size and resolution differ from those assumed in the compiled code, a different target will be obtained. This system can be implemented and developed directly in the cash transaction process. VI. FUTURE WORKS A performance evaluation with quantitative measurements should be conducted for the method in this paper. Practical experiments in non-ideal environments remain a challenge for paper currency recognition systems. VII. ACKNOWLEDGEMENT
2,077.6
2020-04-30T00:00:00.000
[ "Computer Science" ]
The Cosmic Ray spectrum in the energy region between 10^12 and 10^16 eV measured by ARGO-YBJ The ARGO-YBJ experiment has been in full and stable data taking at the Yangbajing cosmic ray observatory (Tibet, P.R. China, 4300 m a.s.l.) for more than five years. The detector has been designed in order to explore the Cosmic Ray (CR) spectrum in an energy range from a few TeV up to several PeV. The high segmentation of the detector allows a detailed measurement of the lateral particle distribution, which can be exploited in order to identify showers produced by primaries of different mass. The results of the measurement of the all-particle and proton plus helium energy spectra in the energy region between 10^12 and 10^16 eV are discussed. The measurement of the Cosmic Ray (CR) energy spectrum and composition gives important information concerning the production, acceleration, and propagation of high energy particles in our Galaxy. The CR all-particle energy spectrum is roughly described by a power law with a knee at energies around 3 PeV. It is commonly believed that the origin of the knee is related to a change of the elemental composition of CRs, in particular to a decrease of the flux of light elements (H and He nuclei). The determination of the individual abundances of elements at energies above 100 TeV must be inferred from the measurements of extensive air showers (EAS). The development of EASs is subject to large fluctuations. Owing to the high altitude location (atmospheric depth 606 g/cm^2), the ARGO-YBJ experiment is able to sample the EAS induced by high energy CRs not far from the maximum of its longitudinal development, where the fluctuations are reduced. The detector The ARGO-YBJ experiment was a full-coverage EAS detector operated at the Yangbajing cosmic ray observatory (Tibet, P.R. China, 4300 m a.s.l.)
and it was in full and stable data taking from November 2007 up to February 2013.The detector was made of a single layer of 1836 Resistive Plate Chambers (RPCs) with ∼ 93% active area surrounded by a partially instrumented (∼ 23%) guard ring in order to improve the event reconstruction.The detector was equipped with two independent readout systems: each RPC is simultaneously read-out by 80 copper strips (6.75 × 61.80 cm 2 ) logically arranged in 10 independent pads (55.6 × 61.8 cm 2 ) and by two large electrodes called Big Pads (139 × 123 cm 2 ).Each Big Pad collects the total charge developed by the particles impinging on the detector surface (analog readout) [1].The analog readout system can be operated at different gain scales in order to measure showers induced by primaries in a wide energy region.Data coming from the most sensitive scales perfectly overlap with the digital readout, thus providing a powerful inter-calibration [1] .At the highest scale the analog readout samples the shower front up to a particle density of 2 • 10 4 m −2 , thus extending the dynamic range of the detector up to PeV energies.A dedicated calibration procedure has been implemented for each gain scale [1,2].The full-coverage technique enables a detailed imaging of the shower front which is a fundamental tool that allows a deep investigation of the shower properties even in the core region.The high segmentation of the Big Pad system allows the measurement of the shower size and of the lateral distribution of particles in the shower front that can be exploited in order to estimate the primary energy and mass. Data Analysis The analysis has been carried out on events collected during 2010 by using the analog readout system.For each event the core position, arrival direction, shower size (N 8 ), particle density on the carpet and lateral distribution are reconstructed.The shower size N 8 has been defined as the number of particles within a radius of 8 m from the shower core.It is well correlated with energy for a given mass and not affected by bias effects due to the finite detector size [3].The determination of the energy and of the primary mass from the measured quantities can be faced out by using the Bayesian unfolding.The Monte Carlo simulations are therefore used in order to evaluate a probabilistic response matrix which can be inverted by means of an iterative algorithm based on the Bayes's theorem.A detailed description of this procedure can be found in [4][5][6].Showers produced by H, He, CNO, NeMgSi, and Fe have been simulated in the energy range 1 − 31.6 × 10 4 TeV with an E −1 differential spectrum by using the CORSIKA (v.7.3) code [7] including the QGSJET-II.04and FLUKA interaction models.A smaller data set have been simulated using SYBILL 2.1 for systematic studies.Showers have been sampled at the Yangbajing altitude and randomly distributed over an area of 250 × 250 m 2 centered on the ARGO-YBJ detector.The detector response has been simulated by using a GEANT3 based code.The present analysis is based on the data collected with two analog scales (low gain and high gain) covering the energy range from about 20 TeV to a few PeV.A sample of quasi-vertical showers (ϑ rec 35 • ) has been selected within an area of 40 × 40 m 2 around the detector center ensuring that a large fraction of the shower is fully contained in the full-coverage area.Additional selection criteria based on the shower size improve the correlation between shower size and primary energy and avoid any contribution due to the 
electronic noise.In figure 1a the shower size N 8 of data and MC events is reported, showing a good agreement between data and simulations.In figure 1b the selection efficiency is shown for proton, helium nuclei, CNO and NeMgSi mass groups and iron nuclei.The plot shows that in the energy region 300 TeV − 10 PeV the selection efficiency is almost the same for all the species, demonstrating the selection criteria do not affect the spectrum measurement.In a shower produced by heavy nuclei a substantial amount of secondary particles is spread further away from the core region.On the contrary, in a shower produced by light elements, the largest amount of particles is concentrated in a small region around the shower core.The ratio between the particle density measured at several distances from the core and the one measured very close to the core can be exploited in order to identify showers produced by light elements.Several studies performed on simulated events have shown that the quantities β 5 = ρ 5 /ρ 0 , and β 10 = ρ 10 /ρ 0 , where ρ 0 , ρ 5 and ρ 10 are respectively the particle density measured in the core region, at 5 m from the core and at 10 m from the core, are sensitive to primary mass.In a probabilistic approach the probability P(N 8 , β 5 , β 10 |E, A) of measuring a shower size N 8 and a certain value of β 5 and β 10 giving a primary energy E and mass A, relates the characteristics of the primary particle to the experimental observables.The bayesian unfolding algorithm has been therefore tuned in order to take into account also the information coming from the two quantities β 5 and β 10 .The fraction of selected showers induced by light elements (p and He) and the corresponding contamination by heavier nuclei has been evaluated in order to check the discrimination power.In figure 1c the values obtained are reported as a function of the energy.The fraction of selected light elements increases with energy and is around 60% at energies above 50 TeV, while contamination is well below 10% over the whole energy range. 
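A compact sketch of the two ingredients described above is given below: the mass-sensitive ratios β5 and β10 taken from a sampled lateral density profile, and a single iteration of Bayesian unfolding in the style of the cited procedure. The binning of the density profile and the matrix layout of the response are assumptions made for illustration, not the analysis code of the experiment.

```python
import numpy as np

def beta_parameters(r, density):
    """Mass-sensitive ratios beta5 = rho(5 m)/rho(core) and beta10 = rho(10 m)/rho(core)
    taken from a sampled lateral particle-density profile (r in metres, density in m^-2)."""
    r, density = np.asarray(r), np.asarray(density)
    rho0 = density[np.argmin(np.abs(r - 0.0))]
    rho5 = density[np.argmin(np.abs(r - 5.0))]
    rho10 = density[np.argmin(np.abs(r - 10.0))]
    return rho5 / rho0, rho10 / rho0

def bayes_unfold_step(prior, response, observed):
    """One iteration of Bayesian unfolding. prior: P(cause) over the (E, A) bins;
    response[cause, effect] = P(observed bin | cause); observed: event counts per observed bin."""
    joint = prior[:, None] * response                      # P(cause) * P(effect | cause)
    posterior = joint / joint.sum(axis=0, keepdims=True)   # Bayes' theorem: P(cause | effect)
    efficiency = response.sum(axis=1)                      # P(observing anything | cause)
    return posterior @ observed / efficiency               # unfolded counts per cause bin
```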
All-particle and P+He energy spectra In the figure 2a the all-particle spectrum measured in this work is reported.The measurements are affected by a statistical uncertainty of the order of 1% at the lowest energies, gradually increasing up to ∼ 8% at energies higher than 1 PeV.The systematic uncertainty is of the order of 15% mainly due to the limited Monte Carlo statistics (10%) and to variations of the bin edges (10%) used in the determination of the probability response matrix.The systematic uncertainty related to the hadronic interaction model used in simulations has been derived by comparing the results obtained by QGSJET and SIBYLL.In particular, simulations with SIBYLL systematically yield to a flux ∼ 7% higher.Systematic effects introduced by variation of the fiducial cuts and by the unfolding procedure have also been studied and give a minor contribution ( 1%) to the total uncertainty.The proton plus helium spectrum, including both statistical and systematic errors is also reported in figure 2a, spanning the energy range between 20 TeV and 5 PeV.Statistical errors are of the order of 1% at the lowest energies and increase with energy up to 18% at PeV energies.The contributions to the total systematic uncertainty come from event selection, estimation of the conditional probabilities, hadronic interaction model, composition model, unfolding procedure.As for the all-particle spectrum the major contribution to the systematic uncertainty comes from the determination of the probability response matrix and is about 10% for energy below 300TeV, 8% in the region 300-500 TeV and it turns to about 21% at the PeV energies.A minor contribution comes from the selection criteria (2.5%) and the unfolding procedure ( 1%).Simulations with SIBYLL yield to a flux ∼ 4% and ∼ 10% higher in the energy region below and above 500 TeV respectively. Conclusions The ARGO-YBJ experiment allows a deep investigation of the properties of EASs providing a detailed measurement of the distribution of the charged particles in the shower front.The detector is able to investigate the CR energy spectrum in a wide energy range.The measurements of the Figure 2: CR all-particle and p+He energy spectra measured by ARGO-YBJ (a) compared with previous results of ARGO-YBJ [6,8], other experimental results [9] and theoretical models [10] (b).all-particle and p+He energy spectra are presented.As shown in figure 2b the all-particle spectrum is in good agreement with other experimental results [9].The measurement is also in agreement with an independent analysis of ARGO-YBJ data [11].The accurate reconstruction of the lateral distribution has been exploited in order to discriminate showers produced by primaries of different mass groups.The ARGO-YBJ experiment measured the proton plus helium flux over two energy decades, from 3 TeV to 5 PeV.There is a strong evidence of a deviation from a single power law at energies around 1 PeV, suggesting that the knee of the all-particle spectrum is due to heavier elements.Similar conclusion has been suggested also by the results of the hybrid experiment ARGO-WFCTA based on a Wide FoV Cherenkov telescope.These results open new scenarios about the evolution of the p+He energy spectrum towards the highest energies and the origin of the knee. 
Figure 1: Shower size distribution for data and simulations (a); fraction of selected MC showers produced by P, He, CNO, NeMgSi, and Fe (b); fraction of selected MC showers produced by light elements and the corresponding contamination (c).
2,772
2017-01-01T00:00:00.000
[ "Physics" ]
Complex Dynamics in a Nonlinear Cobweb Model for Real Estate Market We establish a nonlinear real estate model based on cobweb theory, where the demand function and supply function are quadratic. The stability conditions of the equilibrium are discussed. We demonstrate that as some parameters varied, the stability of Nash equilibrium is lost through period-doubling bifurcation. The chaotic features are justified numerically via computing maximal Lyapunov exponents and sensitive dependence on initial conditions. The delayed feedback control (DFC) method is applied to control the chaos of system. Introduction Cobweb models describe the price dynamics in a market of a nonstorable good that takes one time unit to produce [1].In economic modeling, many examples of cobweb chaos have been demonstrated.Some of the most famous examples include [2][3][4][5][6][7][8][9].Hommes [5] applies the concept of adaptive expectations in a cobweb model with a single producer to investigate the occurrence of strange and chaotic behavior.Finkenstädt [3] applied linear supply and nonlinear demand functions.Hommes [4] and Jensen and Urban [6] used linear demand functions with nonlinear supply equations.These findings indicate that the nonlinear cobweb model may explain various irregular fluctuations observed in real economic data.In this study, we go one step further to study the cobweb model with nonlinear demand and supply function.A possible source of such an evolutionary market dynamics is an interaction between government and real estate developer. Traditional cobweb models usually describe a dynamic price adjustment in agricultural markets with a supply response lag [2].Consider, for instance, the supply of housing.The time of housing construction guarantees a finite lag between the time the production 2 Discrete Dynamics in Nature and Society decision is made and the time the housing is ready for sale.The real estate developer's decision about how many houses should be built and sale is usually based on current and past experience.This principle is the same as that of agricultural product.So it is feasible to introduce cobweb model into real estate market. The present paper attempts to establish a nonlinear model for the real estate market, and introduce adjustment parameters of housing price and land price into the model, which can denote the game behavior of players.The system stability with the variation of parameters is analyzed.Numerical simulations verify the complexity of system evolvement.Finally, time-delayed feedback control method is used to keep the system from chaos and bifurcation. 
Nonlinear models for real estate market In this paper we assume that all real estate developers in the market belong to one benefit group and have a common benefit target. Usually the price p is characterized by the nonlinear inverse demand function p = a - bQ, where a and b are positive constants, a is the maximum price in the market, and Q is the total quantity in the market. This kind of form has been used in other oligopoly models and in experimental economics dealing with learning and expectations formation (see, e.g., [10][11][12]). Transforming this formula gives the demand functions (2.1), of the form D1(t) = b0 - b1 p1(t) + b2 p1(t)^2 and D2(t) = c0 - c1 p2(t) + c2 p2(t)^2, where b0, b1, b2, c0, c1, c2 are positive constants, p1(t) is the land price at time period t, p2(t) is the housing price at time period t, D1(t) is the land demand at time period t, and D2(t) is the housing demand at time period t. Because the law of demand requires the slope of the demand curve to be negative, the prices p1(t) and p2(t) must satisfy the inequalities 2b2 p1(t) - b1 < 0 and 2c2 p2(t) - c1 < 0, respectively; in addition, 4b2 b0 - b1^2 > 0 and 4c2 c0 - c1^2 > 0 must hold, so that the demand equations in formula (2.1) remain positive. In this case, the land market and the housing market are interrelated. Although the housing market does not directly affect the land market, the land price affects the housing supply, which decreases as the land price increases. This rule is the same as that of hog and corn as stated by Waugh [13]. Real estate companies adjust the housing supply according to the relevant policies and the situation of the housing price and the land price; the supply is assumed to take the form given in (2.2)-(2.3). Z(p) is the excess demand function, decreasing in price, which denotes the gap between demand and supply. When the price is low there is excess demand, and when the price is high there is excess supply; thus the p* that satisfies Z(p*) = 0 is called the equilibrium point. Substituting (2.1) and (2.2) into (2.4), we obtain the excess demand functions; since Z(p) follows the law of demand, the corresponding sign conditions must hold. α1 is the adjustment parameter of the land price, which denotes the degree of adjustment of the benchmark land price controlled by the government through the land supply plan, and α2 is the adjustment parameter of the housing price. The dynamic model of the land price and the housing price can then be established as system (3.1), where α1 and α2 are positive parameters. It is clear that the excess demand functions of land and housing together with the adjustment parameters define a two-dimensional nonlinear map, which can be regarded as a discrete dynamic system. The analysis implies that four fixed points exist in system (3.1) and that period-doubling bifurcation appears. As λ increases, the number of fixed points continues to grow until λ = 3.5699; when α1 = 2.5699/ (e1 + b1)^2 - 4(e2 - b2)(e0 - b0), the value of x(t) is unequal to any point that appeared before, and the system enters chaos through period-doubling bifurcation. The same argument holds for the second equation in formula (3.1); for the corresponding value of α2 the system likewise enters a chaotic state through period-doubling bifurcation. 3.2. Stability analysis. We now discuss the stability of the fixed points of the discrete dynamic system (3.1) by analyzing the eigenvalues of its linearization. The four fixed points of the difference system (3.1) are obtained, provided that the corresponding existence conditions hold.
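Because the explicit supply equations and the map (3.1) are not legible in this text, the sketch below only illustrates the general cobweb-with-adjustment idea, iterating p(t+1) = p(t) + α·Z(p(t)) for the two prices with placeholder quadratic excess-demand functions; the functional forms, coefficients, and adjustment speeds are assumptions, not the paper's.

```python
def iterate_prices(p1, p2, z1, z2, alpha1, alpha2, n_steps=200):
    """Iterate p(t+1) = p(t) + alpha * Z(p(t)) for the land price p1 and the housing price p2.
    z1 and z2 are the excess-demand functions of the two markets."""
    trajectory = [(p1, p2)]
    for _ in range(n_steps):
        p1, p2 = p1 + alpha1 * z1(p1, p2), p2 + alpha2 * z2(p1, p2)
        trajectory.append((p1, p2))
    return trajectory

# Placeholder quadratic excess-demand functions (illustrative coefficients only):
z_land    = lambda p1, p2: 1.0 - 1.2 * p1 - 0.5 * p1 ** 2
z_housing = lambda p1, p2: 1.5 - 0.8 * p2 - 0.6 * p2 ** 2 - 0.3 * p1

# With small adjustment speeds the prices converge to the equilibrium; raising alpha1, alpha2
# pushes the linearised multipliers past -1 and destabilises the fixed point (period doubling).
path = iterate_prices(0.4, 0.9, z_land, z_housing, alpha1=0.8, alpha2=0.8)
```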
Lemma 3.1. The equilibrium is an unstable equilibrium point. Proof. In order to prove this result, we find the eigenvalues of the Jacobian matrix J. In fact, at E1 the Jacobian matrix becomes a triangular matrix whose eigenvalues are given by the diagonal entries. It is clear that when condition (3.5) holds, E1 is an unstable equilibrium point of the system (3.1). This completes the proof of the proposition. The stability of the other fixed points can be judged by the same method. The stable region of the equilibrium point. In this subsection, we analyze the asymptotic stability of the equilibrium point of the two-dimensional map (3.1) and determine the region of stability in the plane of the parameters (α1, α2). The Jacobian matrix at E*(p1*(t), p2*(t)) takes the form (3.9), and its characteristic equation is expressed in terms of "Tr", the trace, and "Det", the determinant, of the Jacobian matrix (3.9). The stability region is bounded by the portions of the hyperbolas with positive values of α1 and α2 obtained from the vanishing of the left-hand sides of 1 + Tr + Det = 0 and Det - 1 = 0. For values of (α1, α2) inside the stability region (see Figure 3.1), the equilibrium point is a stable node and loses its stability through a period-doubling bifurcation. The bifurcation curve intersects the α1 and α2 axes, respectively, at coordinates determined by these equations. Numerical simulations In order to study the complex dynamics of system (3.1), it is convenient to take suitable parameter values. Figure 3.1 shows the region of stability of the Nash equilibrium; equation (3.15) defines this region in the plane of (α1, α2). Figure 4.1 shows the map f α1,α2: the x-coordinate is p1 and the y-coordinate is f α1,α2(p1). The dynamics of the land price in the cobweb model are given by the system p1(t) = f α1,α2(p1(t - 1)) with two model parameters. A graphical analysis in Figure 4.1 shows that the map f α1,α2 is nonmonotonic with one critical point, where the graph has a (local) minimum, and that the initial state p1(0) = 1 does not converge to a low periodic orbit. Since the graphical analysis in this case does not converge, it suggests that the dynamical behavior is chaotic. As the adjustment parameters increase, the equilibrium becomes unstable and one observes complex dynamic behavior such as cycles of higher order and chaos. The maximal Lyapunov exponent is also plotted in Figure 4.2. Chaos control The delayed feedback control (DFC) method was introduced by Pyragas [16]. The method allows a noninvasive stabilization of unstable periodic orbits (UPOs) of dynamical systems [17]. It feeds part of the system output signals back as an external input to the system after a time delay. Here, u(·) is the control signal obtained by self-feedback coupling between the output and input signals of the chaotic system. The DFC takes the form x(t) = f(x(t - 1)) + u(t), where u(t) = k(x(t) - x(t - τ)) for t > τ, τ is the time delay, and k is the control gain. Although delayed feedback control is applied to only one variable, it enables the other variables in the system to reach stability simultaneously; our goal is to control the system in this way. The system with the control term is given by (5.2). Substituting the equilibrium point (0.4, 0.9) into (5.2), we obtain the eigenvalues λ1 = -0.83 and λ2 = (k - 1.7)/(1 + k). Therefore, when k > 0.35 the absolute values of both eigenvalues are less than 1, which means that the system is stable.
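The delayed feedback rule u(t) = k(x(t) - x(t - τ)) can be illustrated on a one-dimensional stand-in map; the logistic map, the gain value, and τ = 1 below are assumptions chosen only to show the mechanism, not the paper's two-dimensional price system or its gain.

```python
def dfc_iterate(f, x0, k, tau=1, n_steps=300):
    """Delayed feedback control on a one-dimensional map:
    x(t+1) = f(x(t)) + k * (x(t) - x(t - tau)); k = 0 recovers the free map."""
    xs = [x0] * (tau + 1)
    for _ in range(n_steps):
        xs.append(f(xs[-1]) + k * (xs[-1] - xs[-1 - tau]))
    return xs

# Chaotic logistic map as a stand-in for the price map:
logistic = lambda x: 3.9 * x * (1.0 - x)
free       = dfc_iterate(logistic, 0.4, k=0.0)   # wanders chaotically
controlled = dfc_iterate(logistic, 0.4, k=0.6)   # feedback damps the orbit towards the unstable fixed point
```

The feedback term vanishes on the target orbit, which is why the control is noninvasive once the orbit is reached.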
As shown in Figure 4.10, the land price is controlled from a chaotic state to a stable state when k is greater than 0.35, so we select k = 0.4. The housing price and the land price are then both driven to the equilibrium point (0.4, 0.9), as shown in Figure 4.11. Figures 4.8 and 4.9 illustrate the sensitive dependence on initial conditions: Δp1(t) is the difference between the land price trajectory started from the initial value p1(0) + 0.0001 and the original trajectory, and Δp2(t) is the difference between the housing price trajectory started from p2(0) + 0.0001 and the original trajectory. In both figures the initial condition of one coordinate differs by 0.0001 while the other coordinate is kept equal. At the beginning the difference is indistinguishable, but after a number of iterations it builds up rapidly. Figures 4.8 and 4.9 therefore show that the time series of system (3.1) depend sensitively on initial conditions, that is, complex dynamic behaviors occur in this model. Conclusion A nonlinear model for the real estate market has been presented based on cobweb theory. It is a simple dynamic model with nonlinear demand and supply functions. From the numerical simulations, we deduce that the land supply system has a remarkable influence on the real estate market. Therefore, policy makers who intervene in one market should recognize that their actions may also influence other related markets. We showed that fast adjustment causes the market structure to behave chaotically; hence the dynamics of the market change when players apply different adjustment speeds. Attempts are also made to stabilize the chaotic system with the delayed feedback method; with this method, the land price and the housing price evolve from chaotic to stable behavior.
2,605.4
2007-08-09T00:00:00.000
[ "Economics", "Mathematics" ]
Anomalous quantized plateaus in two-dimensional electron gas with gate confinement Quantum information can be coded by the topologically protected edges of fractional quantum Hall (FQH) states. Investigation of FQH edges, in the hope of finding and utilizing non-Abelian statistics, has been a major challenge for years. Manipulating the edges, e.g. bringing edges close to each other or separating edges spatially, is a common and essential step for such studies. The FQH edge structures in a confined region are typically presupposed to be the same as those in the open region when analyzing experimental results, but whether they remain unchanged under extra confinement is obscure. In this work, we present a series of unexpected plateaus in a confined single-layer two-dimensional electron gas (2DEG), which are quantized at anomalous fractions such as 9/4, 17/11, 16/13, and the reported 3/2. We explain all the plateaus by assuming surprisingly larger filling factors in the confined region. Our findings enrich the understanding of edge states in the confined region and in applications of gate manipulation, which is crucial for experiments with quantum point contacts and interferometers. ν_II > ν_I We assume that regions I and II are both in an FQH or IQH state. Their filling factors can be written as ν_I = n + ν_I' and ν_II = n + ν_II', with n an integer and 0 < ν_I' < ν_II' ≤ 1. If the transmitted edge currents equilibrate with the reflected edge currents in region II, we can write down the equations for each "contact" in Supplementary Fig. 1a according to the Landauer-Buttiker formula [1-3]. Here, the contacts labeled "U" and "L" are "virtual contacts", which indicate that the edge modes flowing in and out at these points are in equilibrium and share the same chemical potential. R_D can then be derived; this is the scenario we discuss in this work. ν_II < ν_I If ν_II < ν_I, edge currents are partially reflected as they propagate from region I to region II, as shown in Supplementary Fig. 1b. In this case, writing down the equations for the contacts, R_D can again be derived. This is the common situation when measuring devices with lateral confinement, and it looks as if R_D measures the Hall resistance of region II; however, this is correct only under the precondition ν_II < ν_I. Figure 1| Sketch of edge-mode propagation and reflection when the filling factor in region II is larger (a) and smaller (b) than in region I. a, When ν_II > ν_I, edge currents propagate in the same direction as in Fig. 3a of the main text. Edge currents are reflected when propagating from region II towards region I. In region II, mixed edge currents reach equilibrium before they arrive at the interface between the regions. The imaginary contacts labeled U and L indicate that at these positions the edge modes are equilibrated, and the currents flowing in and out share the same chemical potential. b, When ν_II < ν_I, edge currents are reflected when propagating from region I to region II. The reflected currents propagate along the interface on the side of region I. Supplementary Note 2. Typical traces of the measured resistances Supplementary Figure 2| Typical traces of the measured resistances versus magnetic field. R_L is the longitudinal resistance across the confined region and can be measured between contacts 1 and 3 or between contacts 2 and 4 (Supplementary Fig. 1a). When R_D appears as anomalous plateaus, R_L appears as plateaus with finite values rather than being zero. Source data are provided as a Source Data file.
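The Landauer-Büttiker bookkeeping used above can be illustrated on a generic single-region Hall bar; the two-region derivation in this supplementary note follows the same pattern with the virtual contacts U and L added. The contact layout and measurement pair below are generic assumptions, not the device geometry of Supplementary Fig. 1.

```python
import numpy as np

def hall_bar_potentials(nu, n_contacts=6, current=1.0):
    """Solve the Landauer-Buttiker equations I_i = nu * (V_i - V_{i-1}) (in units of e^2/h)
    for a single-region Hall bar with ideal contacts and chiral edge transport;
    the current is driven from contact 0 to contact n_contacts // 2."""
    n = n_contacts
    injected = np.zeros(n)
    injected[0], injected[n // 2] = current, -current
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] += nu                 # edge channel emitted by contact i
        A[i, (i - 1) % n] -= nu       # edge channel absorbed from the upstream contact
    A[-1, :] = 1.0                    # the equations are linearly dependent; fix the gauge sum(V) = 0
    b = injected.copy()
    b[-1] = 0.0
    return np.linalg.solve(A, b)      # potentials in units of h/e^2 (for unit current)

V = hall_bar_potentials(nu=2)
R_hall = V[1] - V[4]                  # equals 1/nu: the quantized Hall resistance in units of h/e^2
```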
Supplementary Note 3. Coexistence of plateaus and their relationship with electron density variation In the main text, we attribute the appearance of anomalous plateaus to a gate-induced density increase in the confined region. In this section, the relationship between the anomalous plateaus and the electron density variation in region II is discussed. Supplementary Fig. 3a shows R_D traces at three different gate conditions in the range 1 < ν < 2. Plateaus can appear together at the same gate voltage, such as the coexistence of the R_K/(3/2), R_K/(10/7), R_K/(9/7), and R_K/(16/13) plateaus in the blue trace. This suggests that the emergence of these plateaus shares the same origin, which can be explained by an electron density modulation in region II. To make this clear, the relationship between the plateaus and ν_II/ν_I is illustrated in Supplementary Fig. 3b. The y axis, ν_II/ν_I, represents the relative density between regions I and II; ν_II/ν_I should be larger than 1 in our experiments. The x axis is the magnetic field, which corresponds to the filling factor of the IQH/FQH state in the open region. From the R_XY trace, the filling factor range of each IQH/FQH state can be obtained (defined by the plateau in R_XY). As a simple estimation, we assume that the filling factor range of each state does not change when the density varies. We then know the filling factor range over which region II enters each IQH/FQH state, and hence the ν_I and ν_II ranges over which R_D becomes a plateau. The difference between the filling factor ranges of regions I and II determines the value of ν_II/ν_I at different magnetic fields. As a consequence, the shapes in Supplementary Fig. 3b are drawn as a relationship between ν_II/ν_I and B. As the Hall resistance of region II cannot be measured directly in our devices, we measure R_D instead. Anomalous plateaus can coexist at specific ν_II/ν_I values, as shown by the horizontal dashed lines in Supplementary Fig. 3b, and the three dashed lines correspond qualitatively to the three R_D traces of identical colors in Supplementary Fig. 3a.
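The simple estimation described above can be written down directly: assuming each state occupies a fixed filling-factor window, the ν_II/ν_I band over which region II sits on one state while region I sits on another is just the ratio of the two windows. The window values below are placeholders, not values read off the R_XY trace.

```python
def ratio_band(region1_window, region2_window):
    """Given the filling-factor windows (nu_low, nu_high) of the states occupied by
    region I and region II, return the band of nu_II / nu_I over which both states hold
    simultaneously (assuming the windows do not change with density)."""
    lo1, hi1 = region1_window
    lo2, hi2 = region2_window
    return lo2 / hi1, hi2 / lo1

# Placeholder windows: region I on the nu = 1 plateau, region II on the nu = 2 plateau.
band = ratio_band((0.9, 1.1), (1.9, 2.1))   # roughly (1.7, 2.3)
```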
1,330
2022-07-14T00:00:00.000
[ "Materials Science", "Physics" ]
Control of Industrial Systems to Avoid Failures : Application to Electrical System We resolve the control problem for a class of dynamic hybrid systems (DHS) considering electrical systems as case study. The objective is to guarantee that the plan never reaches unsafe states. We consider a subclass class of DHS called Cumulative Preemptive Event-driven DHS (CPE-DHS). This class is distinguished by the dominance of its discrete aspect characterized by features as cumulative continuous variables combined with actions behavior that may be interrupted and restarted. We utilize a subclass of Rectangular Hybrid Automata (RHA), named Constant Slope RHA (CSRHA), as a solution framework to resolve the control problem. The main contribution is a control Algorithm for the class of systems described above. This algorithm ensures that the system meet the requirement specifications by forcing some events. The forcing action is given in the form of restrictions on the transition guards of the CSRHA. The termination/decidability as well as correctness of the algorithm is given by theorems and formal proofs. This contribution ensures that the system will always be safe states and avoid failure due to the reachability of unsafe states. Our approach can be applied to a large category of industrial systems, especially electrical systems that we consider as case study. Keywords—Dynamic hybrid systems; supervisory control; hybrid automata; electrical systems; safety I. INTRODUCTION Dynamic hybrid systems [1]- [4] (DHS) are systems characterized by the interaction of both discrete and continuous components.A large variety of real-time and embedded systems and many computer automated systems as well as industrial and electrical systems are described by both continuous and discrete aspects.In this paper, we concentrate in a particular class of dynamic hybrid systems where system behavior is captured essentially by preemptive activities which can be produced sequentially or in parallel.Besides, these systems are depicted by an interaction of dominant discrete component with a slight continuous one. DHS are modeled by a large variety of modeling frameworks.We distinguish essentially several timed and hybrid extensions of finite state automata [5] as well as Petri nets [6], [7].Petri nets extensions benefit a salient graphical modeling power.However, computations are mostly based on similar automata extension.On the other hand, there are many extensions of finite state machines, such as time transition systems [8], timed automata [5] and stop watch automata [9].In these frameworks, time is included in configurations and transitions in the form of constraints and/or speed rate.In order to deal with dynamic hybrid systems, we consider essentially hybrid automata, linear hybrid automata, and rectangular automata [10].All the previous frameworks capture various aspects of DHS depending on their modelling power which is generally inversely proportional with the decidability of the accessibility problem.In fact, models that cover more classes of systems become more difficult to manage by a computer due to the undecidability problems [11]. 
In our case, we use a subclass of RHA, the CSRHA, to model our systems. This subclass is better managed from the decidability standpoint. The control problem, one of the most studied problems in the literature [12], will be resolved using the CSRHA formalism. One of the important problems in DHS control theory is safety verification: the controller has to ensure that no trajectory of the system reaches an "unsafe" state. In order to guarantee this safety property, the controller may restrict the scope of some controllable events. By taking such decisions, the controller prevents system trajectories from reaching any undesired state induced by uncontrollable events. However, in this paper, we consider that the computational power of the controller is limited to narrowing the time intervals of transitions related to controllable events. Technically speaking, this action amounts to modifying the guards of transitions associated with controllable events in the CSRHA model. This paper is organized as follows. The next section provides background on hybrid automata and a description of the CSRHA. In Section 3, we present and solve the supervisory control problem. We note that throughout this paper we use the same case study of an electrical system to illustrate our supervisory control approach. II. BACKGROUND ON HYBRID AUTOMATA In the following, we define the retained subclass of RHA: the CSRHA. A. Constant Slope Rectangular Hybrid Automata We consider the following notations. X = {x_1, x_2, ..., x_n} is a finite set of real-valued clocks (variables). Ẋ = {ẋ, x ∈ X} denotes the set of first-derivative variables of X. A variable x is considered a piece-wise linear variable if ẋ ∈ R. ∼ denotes an element of the operator set {<, ≤, =, ≥, >, ≠}. A rectangular inequality over X is an inequality of the form x ∼ c, where c ∈ R and x ∈ X. A rectangular predicate over X is a conjunction of rectangular inequalities over X. Rect(X) denotes the set of rectangular predicates over X. A polyhedral inequality over X is an inequality of the form c_1·x_1 + ... + c_k·x_k ∼ c, where c, c_1, ..., c_k ∈ R and x_1, ..., x_k ∈ X. A polyhedral predicate over X is a Boolean combination of polyhedral inequalities over X. Ψ(X) is the set of polyhedral predicates over X. v = (v_1, ..., v_n) denotes an element of R^n that captures the clock valuations, v_i ∈ R, of every clock x_i ∈ X; v(x_i) = v_i corresponds to the value of x_i. We call a region a subset of R^n. For a region z and x_i ∈ X, z(x_i) = {v_i | v ∈ z}. ψ(v) denotes the Boolean function which equals true if the predicate ψ is satisfied by the input vector v, and false otherwise. We denote by [[ψ]] the region composed of the set of vectors v ∈ R^n for which the predicate ψ is true when each x_i is substituted by its corresponding value v_i. Definition 1 [13]-[15]: A constant-slope rectangular hybrid automaton (CSRHA) is a tuple A = (X, Q, T ∪ {e_0}, inv, dyn, guard, assign, l_0) where: • X is a finite set of variables. • Q is a finite set of locations. • T is a finite set of transitions. A transition e = (l, l′) ∈ T leads the system from the source location l ∈ Q to the target location l′ ∈ Q. The entry transition of the initial location l_0 is denoted by e_0. • inv: Q → Ψ(X) is the location invariant; it associates a predicate to each location. • dyn: Q × X → R is a function describing the evolution of the variables. This evolution is usually of the form ẋ = k, k ∈ R, in location l. Ẋ(l) denotes the evolution of all variables in location l.
• guard: T → Ψ(X) is the guard function. It associates a predicate C_e to each transition e. The guard C_e must evaluate to true to allow the execution of the transition e. • assign is the initialization function. It associates a relation assign_e to each transition e, defining the clocks to be reset. The semantics of a CSRHA is given by the following definition. Definition 2: The semantics of a CSRHA A = (X, Q, T ∪ {e_0}, inv, dyn, guard, assign, l_0) is defined by a timed transition system S_A = (Q, q_0, →), where a run w is a sequence of pairs (a_i, δ_i), with a_i ∈ T ∪ {e_0} a transition and δ_{i+1} ∈ R^+ the delay between the two successive events a_i and a_{i+1}, where δ_0 = 0 and, for all i ≥ 1, δ_i = t_i − t_{i−1}. Example 2.1: Consider the electrical system for mixing a chemical solution given in Fig. 1. The filling action is composed of two stages. First, a tray is replenished with a chemical solution at a rate of 2 dm³/s. We assume that initially the tray is filled with 10 dm³ of a neutral liquid. This phase is accomplished when the current content of the tray is between 30 and 50 dm³. The next phase should be fulfilled before a deadline of 18 s elapses, in order to avoid the risk of obtaining an improper solution. An authorization given at a random time prompts the second stage, which has a deadline of 16 s once started. When this stage is activated, a second chemical solution is replenished at a rate of 4 dm³/s. The filling process is accomplished when the total content of the tray is between 70 dm³ and 90 dm³. The CSRHA model of this electrical system is illustrated in Fig. 2. Fig. 2. The CSRHA of the electrical system. III. CONTROL OF CPE-DHS In the following, we describe our contribution to resolving the control problem. Our solution defines a derived space in which all trajectories satisfy the requested specifications, thereby avoiding system failure; all unsafe locations become inaccessible. The safety specification is given as a set of forbidden locations. The control action operates by reducing transition guard intervals. By nature, some events are not eligible for narrowing of their time occurrence scope; such events are considered uncontrollable from the controller's perspective. An event is controllable if the controller has the power to reduce its occurrence time slot. In general, events connected to forbidden locations are uncontrollable; otherwise, it would be trivial to define the control solution. Moreover, the restriction imposed on the time intervals should be minimal. A. Specification of the Control Problem The inputs are the set of unsafe locations and the partition of events into controllable and uncontrollable. The main steps that we propose to resolve the control problem are as follows: 1) Mark all unsafe locations according to the safety specification. 2) Mark all transitions as controllable or uncontrollable according to the input event partition. 3) Compute the desired space adopted by the controller in all locations, ensuring that the system never accesses a forbidden location. 4) Reassign the restricted guards of the transitions related to controllable events, and update any necessary location invariants, so that the system remains in safe states.
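Before turning to the algorithm, the following minimal sketch shows how a CSRHA (Definition 1) could be encoded as plain data structures, instantiated on the two-stage filling system of Example 2.1. The class layout, the interval encoding of guards, and the location names are our own illustrative assumptions, not part of the paper.

```python
# Illustrative encoding of a CSRHA as plain data (an assumption, not the paper's code).
from dataclasses import dataclass, field

@dataclass
class Transition:
    source: str
    target: str
    guard: dict                                  # variable -> (low, high): a rectangular predicate
    resets: dict = field(default_factory=dict)   # variable -> reset value (the assign relation)

@dataclass
class CSRHA:
    variables: list     # X
    locations: list     # Q
    transitions: list   # T
    invariant: dict     # location -> {variable: (low, high)}
    dynamics: dict      # (location, variable) -> constant slope k, i.e. dx/dt = k
    initial: str        # l0

# Example 2.1: total volume v (dm^3) and a stage clock t (s).
filling = CSRHA(
    variables=["v", "t"],
    locations=["stage1", "stage2", "done"],
    transitions=[
        Transition("stage1", "stage2", guard={"v": (30, 50), "t": (0, 18)}, resets={"t": 0}),
        Transition("stage2", "done",   guard={"v": (70, 90), "t": (0, 16)}),
    ],
    invariant={"stage1": {"t": (0, 18)}, "stage2": {"t": (0, 16)}},
    dynamics={("stage1", "v"): 2, ("stage1", "t"): 1,
              ("stage2", "v"): 4, ("stage2", "t"): 1},
    initial="stage1",
)
```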
B. Control Algorithm Let A = (L, l_0, X, Σ, E, inv, Dif) be the CSRHA model of the system to be controlled, and let A_d denote the output (controlled) CSRHA. We use the following notations: • L_F is the set of forbidden locations (given by the safety specification). • E_F is the set of CSRHA transitions whose target location is forbidden. • e_{l,l′} denotes a transition e = (l, δ, α, Aff, ρ, l′) with source location l and target location l′. • E^l is the set of transitions having l as source location: E^l = {e ∈ E | e = (l, δ, α, Aff, ρ, l′), l′ ∈ L}. • E_F^l = E^l ∩ E_F is the set of forbidden transitions having l as source location. • Ē_F^l = E^l − E_F is the set of non-forbidden transitions having l as source location. • L_R(l) is the closure of the set {l} under the relation {(p, q) : there is a transition e = (p, δ, α, Aff, ρ, q) ∈ E, q ∈ L_R(l)}, i.e., the set of locations from which l can be reached. Algorithm III.1: Control Algorithm. 1: function Control(A, M_F): A_d 2: initialize the output CSRHA with the input CSRHA: A_d := A 4: calculate the set E_F: 5: for all e_{l,l′} ∈ E with l′ ∈ L_F do 6: E_F := E_F ∪ {e_{l,l′}} 7: end for 8: calculate L_R(l): 9: initialize L_R(l) := {l} 10: while ∃e = (l′, δ, α, Aff, ρ, l″) ∈ E with l″ ∈ L_R(l) and l′ ∉ L_R(l) do 11: L_R(l) := L_R(l) ∪ {l′} 12: end while 13: for all locations l ∈ L \ L_F do 14: calculate E_F^l and Ē_F^l: 15: for all e_{l,l′} ∈ E_F, l′ ∈ L do ... end for ...: L_F := L_F ∪ {l} ...: calculate the new guard δ_i^n with respect to δ_i and the guards of the transitions in E_F^l (the goal being to reduce the state space so as to exclude the occurrence of prohibited events) 35: end for 36: do a forward analysis, starting at the initial location; we denote by S_l^forward the reachable space computed by forward analysis at location l (the reachable space at a given location is a polyhedron of dimension |X|, defined by the inequality system A·X ≺ b, with A ∈ M_{a,|X|}(R) a matrix with a rows and |X| columns and X ∈ R^n the vector of CSRHA variables) 37: for all locations l′ with E_F^{l′} ≠ ∅ do 38: do a backward analysis starting at location l′, taking δ_{k+1}^n ∨ δ_{k+2}^n ∨ ... ∨ δ_m^n as the initial entry space; we denote by S_{l,l′}^backward the space computed by backward analysis (from location l′) at location l ∈ L_R(l′) 39: end for 40: for all l ∈ L_R(l′) with E_F^{l′} ≠ ∅ do 41: calculate the final space of the backward analysis at location l ... 46: for all transitions e_{l,l′} ∈ Ē_F^l do 47: redefine the guards 48: end for 49: end for 50: end function The CSRHA modeling a CPE-DHS system is the input of Algorithm III.1, which produces as output an updated CSRHA in which forbidden states can never be reached. The control algorithm computes the new transition guards and the new location invariants. Theorem 1: Algorithm III.1 terminates if the input CSRHA has no loop. Proof: Algorithm III.1 terminates if the computation of the reachable space (both backward and forward) terminates. This analysis uses discrete and continuous predecessor and successor operators, which perform certain geometric calculations on regions [14]. Tools such as PHAVer [17] and SpaceEx [18], [19] implement such region operations, using polyhedral libraries, to accomplish the reachable-space computation. We note that these analyses terminate if the CSRHA is acyclic; for more general forms, however, the reachability problem is known to be undecidable [14], [20]. In the following, we present some particular and interesting cases where this problem is decidable.
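As a toy illustration of the guard-restriction idea (not of the full polyhedral forward/backward computation performed with PHAVer or SpaceEx), the following sketch handles only the simplest one-step case: controllable transitions entering a forbidden location have their guards narrowed to the empty interval, i.e., are dropped, and an uncontrollable transition into a forbidden location means no controller exists. The data layout and names are our own assumptions.

```python
# One-step sketch only: the paper's algorithm additionally propagates
# restrictions backwards through L_R(l) via polyhedral backward analysis.

def restrict_guards(transitions, forbidden, controllable):
    """transitions: list of dicts {src, dst, guard=(lo, hi)}.
    Returns a new transition list with controllable transitions into
    forbidden locations removed; raises if an uncontrollable transition
    would have to be cut (no controller exists in that case)."""
    controlled = []
    for t in transitions:
        if t["dst"] in forbidden:
            if (t["src"], t["dst"]) not in controllable:
                raise ValueError(f"uncontrollable transition into {t['dst']}: no controller exists")
            continue  # guard narrowed to the empty interval: transition disabled
        controlled.append(dict(t))
    return controlled

# Hypothetical usage on a three-location fragment with l4 forbidden.
transitions = [
    {"src": "l1", "dst": "l2", "guard": (10, 20)},
    {"src": "l2", "dst": "l4", "guard": (0, 18)},
    {"src": "l2", "dst": "l3", "guard": (0, 16)},
]
safe = restrict_guards(transitions, forbidden={"l4"}, controllable={("l2", "l4")})
```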
Theorem 2: Algorithm III.1 terminates if the input CSRHA satisfies the following properties: 1) all derivative variables in the locations are non-negative or null; 2) guards and invariants are defined by single non-negative constraints; 3) assignments are of the form x := x or x := c. Proof: This is ensured by the decidability of the reachability problem in that case [21]. Furthermore, we can ensure the decidability of the algorithm for the following interesting classes of CSRHA: 1) CSRHA where each loop contains at least one initialization of all clocks [22]; 2) CSRHA where each loop contains at most one transition guard in the form of a "dangerous" test [22]; 3) CSRHA where a change of the dynamics (the derivative value) of a variable between two locations is accompanied by a reset of that variable on the transition between the two locations [16]. Theorem 3: The automaton A_d obtained by applying Algorithm III.1 ensures that all reachable spaces respect the safety specification while being maximally permissive. Part 1: We demonstrate (by contradiction) that the reachable space meets the safety specification. Suppose that ∃l ∈ L_F such that there exists a run in A_d from the initial state reaching a state (l, v_a), and let e_a = (l_a, δ_a, α_a, Aff_a, ρ_a, l) be the last transition of this run. Since l ∈ L_F, we have e_a ∈ E_F. According to the TTS of A_d, we have inv(l)(v_a) = true and δ_a(v_a) = true. However, according to Algorithm III.1, the calculation of S_l^backward concludes otherwise: by the construction of the set E_F^{l_a} in the algorithm, we have e_a ∈ E_F^{l_a}, and thus ∃j ∈ [1, k] such that e_a = e_j. This implies that inv(l)(v_a) = false, which contradicts the starting assumption. Part 2: We demonstrate (by contradiction) that the reachable space of A_d is maximally permissive. Suppose that there is a state (l, v) ∈ Q_A such that (l, v) ∉ Q_{A_d} and l ∉ L_F, and suppose moreover that (l, v) does not lead to locations forbidden by the specification. The fact that (l, v) does not lead to unauthorized locations means that there is no run from (l, v) leading to a state (l′, v′) with l′ ∈ L_F. Let l_f ∈ L_F be a location such that l ∈ L_R(l_f). Since there is no run from (l, v) leading to a forbidden location, (l_f, v_f) is not reachable from (l, v), for any v_f ∈ R^|X|. Similarly, (l, v) is not reachable from (l_f, v_f) in the reverse automaton −A (that is, by backward analysis). Let S_{l,l_f}^backward be the space obtained at l by backward analysis from (l_f, v_f); thus S_{l,l_f}^backward(v) = false. According to Algorithm III.1, S_l^d starts from the initial space and is restricted only by such backward spaces. Moreover, according to the calculation formula of the location invariant, we have inv_d(l)(v) = true, hence (l, v) ∈ Q_d, which contradicts the assumption. Thus, any state leading exclusively to locations respecting the specification is in the reachable space of A_d. Consequently, A_d is maximally permissive. Example 3.1: We reconsider the CSRHA of the electrical system illustrated in Fig. 2.
According to the safety specification, we consider the following unsafe locations: L_F = {l_7, l_8, l_10, l_4, l_6}. The reachable-space computations by forward and backward analysis are performed with the PHAVer [17] and SpaceEx [18], [19] tools. The intersection between the backward and forward spaces is given in Table I. The results comply with the safety specification: the controller defines a derived CSRHA in which location invariants and transition guards are truncated by the newly obtained polyhedral equations in each location. This derived automaton is maximally permissive and describes all possible trajectories that obey the requirements. Table I shows the intersection space obtained by PHAVer and SpaceEx, which captures the maximal polyhedron that meets the requirements. For example, the updated location invariant of l_4 is given by the corresponding intersection polyhedron in Table I. Similarly, all guards and invariants are updated according to the results given by the intersection space. Furthermore, we omit any transition outgoing from a forbidden location (since such a location becomes unreachable). IV. CONCLUSION In this paper, our main contribution is to solve the supervisory control problem for a particular class of dynamic hybrid systems (DHS), called Cumulative Preemptive Event-driven DHS (CPE-DHS), by narrowing the guards and invariants of transitions related to controllable events so that forbidden states remain inaccessible. Our proposed solution can be applied in a systematic way to any system that fits our requirements; we applied this approach to an electrical system as a case study. Generally speaking, the control problem is known to be undecidable for this class of complex systems. Nevertheless, in quest of decidability, we proposed some restrictions that make the problem decidable. In future work, we will focus on supervisor generation while considering uncontrollable variables.
4,629.8
2018-01-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Diffusive Propagation of Exciton-Polaritons through Thin Crystal Slabs If a light beam propagates through matter containing point impurity centers, the amount of energy absorbed by the medium is expected to be either independent of the impurity concentration N or proportional to N, corresponding to intrinsic absorption or impurity absorption, respectively. Comparative studies of the resonant transmission of light in the vicinity of exciton resonances, measured for 15 few-micron GaAs crystal slabs with different values of N, reveal a surprising tendency: while N spans almost five decimal orders of magnitude, the normalized spectrally-integrated absorption of light scales with the impurity concentration as N^{1/6}. We show analytically that this dependence is a signature of the diffusive mechanism of propagation of exciton-polaritons in a semiconductor. Exciton-polaritons are light-matter quasiparticles which may be excited by light in the spectral vicinity of exciton resonances in semiconductors 1. They result from the quantum-mechanical superposition of a massless photon state and a massive Wannier-Mott exciton state, which may be viewed as a chain of multiple virtual re-absorptions and re-emissions of photons. Exciton-polaritons (EPs) combine properties of excitons and photons, which makes them promising for applications in opto-electronics 2,3 and leads to a great variety of non-linear effects including Bose-Einstein condensation 4, polariton lasing 5, polarization switching 6, dissipationless motion 7, enhanced terahertz absorption 8, etc. In spite of the great number of publications on the real-space dynamics of exciton-polaritons, their propagation, scattering and non-radiative decay in bulk crystal slabs still remain poorly understood (see, e.g., the discussion in Ref. 9). It is well known that the group velocity of an EP may vary from the speed of light in the semiconductor crystal down to the speed of a mechanical exciton, which is several orders of magnitude lower, depending on the relative weights of the excitonic and photonic fractions 10. Due to their excitonic component, EPs efficiently interact with acoustic phonons, eventually decaying non-radiatively. Due to their photonic component, EPs can decay radiatively into vacuum photonic modes outside the sample 11. For these reasons, the propagation of EPs in crystal slabs is governed by scattering events and an interplay of radiative and non-radiative decay processes. While inside the crystal, a polariton can only decay non-radiatively; once it crosses the crystal slab and reaches one of the surfaces, it decays radiatively, contributing to the reflectivity or transmission signal 12.
The spectrally resolved absorption of light in a crystal, A, is related to the reflectivity R and transmission T by the energy conservation law: R(ω) + T(ω) + A(ω) = 1. (1) A(ω) accounts for all non-radiative losses, including the scattering into wave-guiding modes which decay through the side edges of the sample. What governs the energy dissipation in semiconductor crystals when light propagates in the spectral vicinity of an exciton resonance? What is the role of exciton-polaritons in the light absorption process? These questions are crucial for the realization of polariton-based devices, and they are non-trivial from both the experimental and theoretical points of view. A direct measurement of the absorption is a hard task, requiring high-accuracy measurements of the temperature variation of a sample frozen to a fraction of a Kelvin 13. Fortunately, one can obtain a good estimate of A(ω) from the spectral dependence of the transmitted intensity T(ω) through the Beer-Lambert-Bouguer law: T(ω) = exp[−α(ω)d], so that A(ω) ≈ −ln T(ω) = α(ω)d. (2) In Eq. (2) one assumes an exponential decay of the light intensity along its path inside the crystal slab (the Bouguer law). This assumption is generally valid in semiconductor films, where the Fabry-Perot interference is suppressed because of the absorption and the modulation of reflectivity in the vicinity of the exciton resonance is negligibly small 14. A more accurate extraction of the true absorption spectrum from the transmission spectrum can be done, e.g., using the transfer-matrix method 15. In this paper, we present an experimental study of the absorption of light in doped semiconductor crystals in the EP regime, i.e., in the spectral vicinity of the excitonic optical transition. We have chosen GaAs crystals as the best-studied model semiconductor, where optical phenomena and light-matter coupling have been studied for decades. We compare the spectrally-integrated excitonic absorption of light K, measured for 15 similar thin GaAs crystal slabs which differ by their concentrations of impurity centers N. While the excitonic absorption is an intrinsic effect and, at first glance, should not depend on N at all, we have found a surprising sublinear dependence K ∝ N^β with β close to 1/6. This observation sheds light on the mechanisms of non-radiative losses in semiconductors and allows us to conclude that exciton-polaritons propagate diffusively in doped semiconductors. Results The samples for our studies were based on epitaxial layers of GaAs, grown on bulk semi-insulating GaAs substrates by means of either molecular beam epitaxy (MBE) or vapor phase epitaxy (VPE). The epitaxial layers were either nominally undoped or p-doped during the growth process. The impurity concentrations were then probed by either capacitance-voltage (CV) or Hall-effect measurements at room temperature. The measured values of the impurity concentration N are listed in Table 1 together with the nominal dopants (for the doped samples). The purest samples were not intentionally doped but contained residual impurities: most likely S(As), which act as shallow donors with a binding energy of about 6 meV and a Bohr radius of about 90 Å. In the doped p-type samples, the Si(As) atoms form acceptor centers with a binding energy of 34 meV and a Bohr radius of 13 Å. After characterization, the substrates were removed by etching in an ammonium and hydrogen peroxide water solution, and the epitaxial layers were etched down to sub-/few-micrometer thicknesses.
The resulting thin crystal slabs were annealed in a hydrogen atmosphere (in order to remove tension and the oxides remaining on the surface after etching), then loosely packed between two cover glasses and sealed in air. During the optical measurements, these sandwiches were immersed in liquid helium at 2 K. As a first step, we measured the photoluminescence (PL) spectra of a selection of our samples in order to check for possible effects of the etching and sandwiching on the actual concentration of the impurity states. The PL was excited by the second harmonic of a continuous-wave Nd:YAG laser (2.33 eV); the spectra were taken using a diffraction spectrometer and a photomultiplier. All in all, we found that our slabs produce quite typical PL spectra of GaAs at the corresponding doping levels. The observed features of the PL spectra can be readily assigned to various intrinsic or impurity-related optical transitions known from the optical spectra of GaAs, as shown in Fig. 1. Namely, the free-exciton line (FX) at 1.5152 eV is easily identified by its well-known spectral position. Two lines represent excitonic complexes: the exciton bound to a neutral donor (D0X) at 1.5142 eV possesses a binding energy of ca. 1 meV, while the exciton bound to a defect (d,X) at 1.505 eV has a binding energy of about 10 meV [16-19]. Finally, the (A0h) line at 1.493 eV should be ascribed to c-band-to-acceptor optical transitions. Remarkably, the PL spectrum of the slab with nominal N = 1×10^13 cm^-3 shows a single line of the free-exciton transition; the impurity-related features are not seen. This observation, first, confirms the very high purity of the F235 sample and, second, suggests that the preceding treatment did not really change the impurity concentration in our slabs. At the same time, the simultaneous presence of acceptor-related and donor-related lines in one spectrum, as observed for some specimens (see the upper two spectra in Fig. 1), points to a degree of compensation. As a second step, we measured the transmission spectra of the samples. The "white light" from an incandescent lamp was first passed through a red filter which cut off the high-energy photons (ħω > 1.9 eV), then focused on and passed through the sample, and finally focused on the entrance slit of a diffraction spectrometer. The pump density was about 1 W/cm². The raw data on the spectrally-resolved transmission I(ω) for every sample were then normalized by the spectrum of the lamp. To this end, we had taken the reference spectrum I_0(ω) of the lamp "white light" transmitted through the sample box containing no sample inside. At the next stage, the transmission data were processed further. We took the logarithm of the normalized transmission I(ω)/I_0(ω) to obtain, after reversing the sign, the spectrum of optical density A(ω) = α(ω)d. We aim at extracting the absorption coefficient α(ω), which should be an immanent characteristic of the corresponding semiconductor; however, the direct calculation is hindered because the slab thickness d is not accurately known and, moreover, is not quite homogeneous widthwise. Thus we made the straightforward assumption that the high-energy (interband) absorption coefficient should have the same value for any doping level, and we scaled the experimental optical-density spectra of every sample in such a way that the high-energy shelves match the textbook value, about 8000 cm^-1 for bulk GaAs 20 (see Fig. 2).
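The optical-density extraction just described reduces to a few array operations. The sketch below is our own rendering of that procedure; the function names and the shelf-mask convention are assumptions, not the authors' code.

```python
# Sketch of the optical-density processing described above (our rendering).
# I and I0 are the transmitted and reference ("lamp") spectra on a common
# photon-energy grid.
import numpy as np

ALPHA_INTERBAND = 8000.0  # cm^-1, textbook interband absorption of bulk GaAs

def absorption_spectrum(I, I0, shelf_mask):
    """Return alpha(omega) in cm^-1. shelf_mask selects the high-energy
    (interband) shelf used to normalize out the unknown thickness d."""
    optical_density = -np.log(I / I0)                              # A(omega) = alpha(omega) * d
    d_eff = optical_density[shelf_mask].mean() / ALPHA_INTERBAND   # effective thickness, cm
    return optical_density / d_eff
```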
The absorption spectra shown in Fig. 2 demonstrate a distinct excitonic line peaked slightly above 1.515 eV, in the vicinity of the fundamental absorption edge of the interband transitions. While the peak energy is similar for all the samples, the spectral widths and heights of the excitonic line vary from sample to sample. Thus, an integral value, namely the area under the excitonic absorption contour K = ∫A(ω)dω, should be the most adequate characteristic of the efficiency of the excitonic absorption in a particular sample. One can clearly see the enhancement of the integrated exciton absorption with increasing impurity concentration N (from the lower to the higher spectra in Fig. 2). For a quantitative analysis, the experimental spectra in Fig. 2 should be decomposed into excitonic and non-excitonic contributions. The latter can include not only the classical fundamental absorption edge (whose spectral dependence might be known well enough) but also Urbach density-of-states tails, e.g., due to the Franz-Keldysh effect or disorder effects (whose spectral dependence is individual and not precisely known). So the non-excitonic background cannot be merely subtracted, and the decomposition procedure needs a clear strategy relying on healthy physical reasoning. Methods of analysis of the shape of the exciton absorption peak are discussed elsewhere 21,22. In this study, we adopted the simple assumption that the low-energy part of every spectrum in Fig. 2, up to the energy of the excitonic peak, is not affected by the non-excitonic absorption. Thus we integrated under these half-bell-shaped parts (up to the maximum position), multiplied the result by a factor of 2, and considered it as the full spectrally-integrated excitonic absorption K. The obtained values of K are plotted in Fig. 3 against the corresponding impurity concentrations N. Figure 3 summarizes the data on the variation of the integrated absorption with the impurity concentration. In the set of studied samples, N varies within wide limits, from N = 1×10^13 cm^-3 to about N = 5×10^17 cm^-3. In terms of electro-physical properties, for instance, such a range spans from semi-insulating to semi-metallic behaviour of bulk GaAs. Yet throughout the entire range, the dependence K(N) obeys a single power law, which corresponds to a straight line on a double-logarithmic scale. A purely phenomenological two-parameter fit of the experimental trend by the power-law function K ∝ N^β yields β close to 1/6. (In Fig. 3, the symbols correspond to the samples of Table 1, and the straight line shows the prediction of the diffusive model formulated here, Eq. (11).) Discussion The observed dependence of the integrated absorption on the impurity concentration is somewhat unexpected. Indeed, any kind of "impurity absorption" would most naturally be expected to demonstrate a linear dependence on N. A similar dependence of the integrated absorption on N was predicted for EPs by Akhmediev 23 and reproduced in a recent theoretical work 24. On the other hand, the excitonic absorption is an intrinsic phenomenon which might have been simply independent of N. These naïve expectations implicitly rely on the idea of a ballistic propagation of photons through matter, in which they rarely experience acts of inelastic scattering that transfer the photon energy to the crystal lattice. In contrast, in the presence of strong elastic scattering, the effective trajectory of each exciton-polariton propagating through the crystal may become significantly longer, which leads to the "slow light" phenomenon 25.
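The half-peak integration and the two-parameter power-law fit described above amount to the following short computation. This is a sketch on synthetic placeholder data; the actual values are those of Table 1 and Fig. 3.

```python
# Sketch: integrated excitonic absorption K and the power-law fit K = C * N**beta
# (linear in log-log coordinates). Synthetic data only.
import numpy as np

def integrated_exciton_absorption(energy, alpha):
    """Integrate under the low-energy half of the excitonic peak
    (up to its maximum) and double the result, as described in the text."""
    i_peak = int(np.argmax(alpha))
    half = np.trapz(alpha[: i_peak + 1], energy[: i_peak + 1])
    return 2.0 * half

def fit_power_law(N, K):
    """Two-parameter fit K = C * N**beta via a straight line in log-log space."""
    beta, logC = np.polyfit(np.log(N), np.log(K), 1)
    return np.exp(logC), beta

# Synthetic illustration: the fitted exponent should come out near 1/6.
N = np.logspace(13, 17.7, 15)                                   # cm^-3
K = 3e-2 * N ** (1 / 6) * np.random.default_rng(0).lognormal(0, 0.05, 15)
C, beta = fit_power_law(N, K)
```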
The scattering of exciton-polaritons by charged impurity centers and free carriers [26-28] is expected to induce a switch from the ballistic to the diffusive regime of polariton propagation. In what follows we show that in the diffusive propagation regime the integrated absorption is governed by the characteristic time spent by the diffusing polaritons in the crystal slab, which in turn depends non-linearly on the impurity concentration. The propagation of EPs in a semiconductor slab containing a sufficiently high concentration of scattering centers can be described by a classical diffusion equation: ∂n/∂t = D(ω) ∂²n/∂x² − n/τ(ω), (4) where n(ω, x, t) is the EP density, D(ω) is the frequency-dependent diffusion coefficient, τ(ω) is the EP non-radiative lifetime which accounts for the absorption of light, and x stands for the coordinate in the direction across the slab. The diffusive propagation time of an EP through a crystal slab of thickness L can be found from Eq. (4) according to Ref. 25: τ_D(ω) ∼ L²/D(ω). (5) If the EP non-radiative lifetime is short enough (the strong absorption case), so that τ(ω) < τ_D(ω), most of the diffusing polaritons decay non-radiatively inside the slab. The diffusion coefficient depends on the mean free path l(ω) of the exciton-polariton as D(ω) ∝ v_gr(ω) l(ω). Here v_gr(ω) is the EP group velocity, which is defined by the dispersion of EPs in a bulk GaAs crystal; this dispersion is well known and independent of the impurity concentration. In contrast, l(ω) depends on the impurity concentration. The mean free path of a plane-wave packet is proportional to the average distance between the scattering centers, so that l(ω) ∝ N^{−1/3}. (9) This relation is characteristic of EPs, which are light waves with a specific non-linear dispersion and a coherence length frequently amounting to several μm 29, while the average distance between impurity centers is of the order of 10 nm or less (see Table 1). The probability of finding an impurity at the wave front of the plane wave is inversely proportional to the mean distance between impurities, which yields relation (9). The proportionality (9) can be formally derived in many ways, but the simplest argument proving its validity is the dimensionality argument: if the lateral coherence length of the propagating EPs is much larger than any characteristic length of the scattering problem, it can be assumed infinite. The relation between the two remaining characteristic lengths, the mean free path and the mean distance between impurities, can then be nothing but a proportionality, which readily yields the functional dependence (9). Substitution of (9) into Eq. (7) yields K ∝ N^{1/6}. (11) One can see that the diffusion model accurately reproduces the experimentally found functional dependence of the integrated absorption on the concentration of impurity centers in a semiconductor. This shows that, in the presence of impurities, the propagation of EPs has a diffusive character: unlike photons at neighboring (higher and lower) energies, EPs travel through the slab along a polygonal line rather than ballistically. The absorption of ballistic photons is nearly independent of the impurity concentration, as can be seen from Fig. 2: at the low-energy end (1.510 eV) it tends to zero, while at the high-energy end (1.520 eV) it saturates at the value defined by the interband absorption.
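The scaling chain above can be checked numerically in a few lines. The sketch assumes τ_D ∝ L²/D, D ∝ v_gr·l with v_gr and L held fixed, and K ∝ τ_D^{1/2}; the last proportionality is our inference from the route through Eqs. (7)-(11), not a quoted formula.

```python
# Numeric check of the inferred scaling chain (our assumption, not the paper's code):
# l ~ N**(-1/3)  =>  D ~ v_gr*l ~ N**(-1/3)  =>  tau_D ~ L**2/D ~ N**(1/3),
# and with K ~ sqrt(tau_D) one recovers K ~ N**(1/6).
import numpy as np

N = np.logspace(13, 17.7, 15)   # impurity concentration, cm^-3
l = N ** (-1.0 / 3.0)           # mean free path, arbitrary units (Eq. 9)
D = l                           # D ~ v_gr * l, with v_gr held fixed
tau_D = 1.0 / D                 # tau_D ~ L**2 / D, with L held fixed
K = np.sqrt(tau_D)              # assumed strong-absorption scaling

slope = np.polyfit(np.log(N), np.log(K), 1)[0]
assert abs(slope - 1.0 / 6.0) < 1e-9  # recovers the N**(1/6) law
```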
Note that if EPs were classical particles whose coherence length is much less than the mean distance between impurities (and which would thus scatter on an isolated impurity center every time), the mean free path would depend on the impurity concentration in a different way: l(ω) ∝ N^{−1} (cf. Eq. (9)). This relation would yield, instead of Eq. (11), K ∝ N^{1/2}, in contradiction with the experimentally observed trend. Clearly, exciton-polaritons behave like extended wave packets rather than like classical particles when propagating through a crystal with a sufficiently high impurity concentration. The strong absorption limit (6) applies only to relatively slow exciton-polaritons, which are characterized by a high exciton component. These polaritons spend a longer time inside the crystal slab and have a better chance to decay non-radiatively than fast photon-like polaritons; they contribute most essentially to the integrated absorption K. Conclusions In conclusion, proof of the diffusive propagation of exciton-polaritons in semiconductor slabs has been obtained by a systematic study of the integrated absorption in a series of 15 GaAs samples. The dependence of the integrated absorption on the concentration of impurity centers unambiguously shows that the absorption of light proceeds mostly through slow, exciton-like exciton-polaritons which experience multiple scattering acts while propagating across the slab. This observation is important for the understanding of the mechanisms of energy transfer between light and matter in semiconductors. The analytical model developed here works remarkably well at liquid-helium temperature. We expect that at higher temperatures inelastic scattering processes would move a part of the excitons outside the light cone, which may strongly affect the integrated-absorption behavior.
4,158
2015-01-12T00:00:00.000
[ "Physics" ]
The role of infiltrating lymphocytes in the neo-adjuvant treatment of women with HER2-positive breast cancer Background Pre-treatment tumour-associated lymphocytes (TILs) and stromal lymphocytes (SLs) are independent predictive markers of future pathological complete response (pCR) in HER2-positive breast cancer. Whilst studies have correlated baseline lymphocyte levels with subsequent pCR, few have studied the impact of neo-adjuvant therapy on the immune environment. Methods We performed TIL analysis and T-cell analysis by IHC on the pre-treatment and 'on-treatment' samples from patients recruited to the Phase-II TCHL (NCT01485926) clinical trial. Data were analysed using the Wilcoxon signed-rank test and the Spearman rank correlation. Results In our sample cohort (n = 66), patients who achieved a pCR at surgery, post-chemotherapy, had significantly higher counts of TILs (p = 0.05) but not SLs (p = 0.08) in their pre-treatment tumour samples. Patients who achieved a subsequent pCR after completing neo-adjuvant chemotherapy had significantly higher SLs (p = 9.09 × 10^-3) but not TILs (p = 0.1) in their 'on-treatment' tumour biopsies. In a small cohort of samples (n = 16), infiltrating lymphocyte counts increased after 1 cycle of neo-adjuvant chemotherapy only in those tumours of patients who did not achieve a subsequent pCR. Finally, reduced CD3+ (p = 0.04, rho = 0.60) and CD4+ (p = 0.01, rho = 0.72) T-cell counts in 'on-treatment' biopsies were associated with decreased residual tumour content post-1 cycle of treatment, the latter being significantly associated with an increased likelihood of subsequent pCR (p < 0.01). Conclusions The immune system may be 'primed' prior to neo-adjuvant treatment in those patients who subsequently achieve a pCR. In those patients who achieve a pCR, their immune response may return to baseline after only 1 cycle of treatment. However, in those who do not achieve a pCR, neo-adjuvant treatment may stimulate lymphocyte influx into the tumour. Supplementary Information The online version contains supplementary material available at 10.1007/s10549-021-06244-1. Introduction HER2-positive breast cancer accounts for approximately 20% of all breast cancers and, prior to the clinical development of trastuzumab, had the worst outcome of any breast cancer subtype [1]. However, the development of trastuzumab and the subsequent clinical trials which have tested newer HER2-targeted therapies (including lapatinib and pertuzumab) in combination with trastuzumab have significantly improved the outcomes of women with early-stage HER2-positive breast cancer [1]. Trastuzumab, a humanized monoclonal antibody, is known to have both cytotoxic and immunological effects on tumour cells [1,2]. In the last decade, studies have identified that the localized immune environment plays an important role in determining the outcome of women with non-metastatic HER2-positive breast cancer [3,4]. In fact, pre-treatment tumour-infiltrating lymphocytes (TILs) [5] and, more recently, stromal lymphocytes (SLs) [4] have been shown to be independent predictive markers of future pathological complete response (pCR). Whilst many studies have correlated baseline lymphocyte levels with the likelihood of subsequent pCR, very few have studied the impact of HER2-targeted therapy on the immune environment of the tumour itself.
In the TCHL clinical trial (NCT01485926), which assessed TCH (docetaxel, carboplatin, and trastuzumab) and TCHL (TCH plus lapatinib) in stage II-III HER2-positive breast cancer patients, we obtained core biopsy samples of the primary tumour from consenting patients at pre-treatment and at 20 days post-cycle 1 of trastuzumab-based treatment. Using these tumour samples, we conducted TIL analysis and assessed the impact of a single dose of TCH/L chemotherapy treatment on the numbers of infiltrating lymphocytes in breast tumours. For the first time, our study identifies that the immune contexture is significantly modulated in breast tumours after only 1 cycle of TCH/L chemotherapy, and this may provide clues as to how and why some patients achieve a subsequent pathological complete response (pCR). Patient population and samples TCHL (ICORG10-05) (NCT01485926) is a Phase-II neo-adjuvant study run by Cancer Trials Ireland (formerly the All Ireland Co-Operative Oncology Research Group (ICORG)) assessing TCH (docetaxel, carboplatin, and trastuzumab) and TCHL (TCH plus lapatinib) in stage II-III HER2-positive breast cancer patients [6]. Full details of the trial are available at www.clinicaltrials.gov. pCR was determined in the TCHL clinical trial by the absence of invasive carcinoma. Of the 88 patients enrolled, we were able to obtain lymphocyte information for 68 patients. Of those 68 patients, 20 had a core biopsy taken by an interventional radiologist 20 days post-cycle 1 of either TCH or TCHL therapy (on-treatment samples). Samples were snap-frozen and stored at −80 °C until required. Full clinicopathological details of the patients involved in this study are included in Table 1, and Fig. 1 presents a consort diagram of the samples used in the analysis. Sample processing Baseline tumour biopsies obtained prior to neo-adjuvant chemotherapy were formalin-fixed and paraffin-embedded (FFPE). Haematoxylin and eosin (H&E) staining was performed on 3 µm sections of the biopsies, which were assessed for invasive tumour epithelial cellularity by a histopathologist. Only samples with greater than 10% tumour cellularity were used for further analysis. On-treatment samples were embedded in optimal cutting temperature (OCT) compound and cryosectioned. A single 3 µm section was taken for H&E staining and analysis, the adjacent ten 10 µm sections were cut and stored in a chilled cryovial, and a second 3 µm section was then cut for H&E staining. Cut sections were stored at −80 °C. Immunohistochemistry (IHC) and TIL counting H&E staining was performed on a Thermo Shandon Varistain Gemini stainer using Harris haematoxylin (CellPath, UK) [7]. As per the recommendations of the TIL working group [7,8], which stated that TILs at the invasive edge or intra-tumoural TILs can still be included for research purposes, we proceeded with a research study to assess the impact of TCHL treatment on TILs in HER2-positive breast cancer. To that end, four random areas the size of 1 high-power microscope field (between 100,000 and 100,500 µm²) were selected in each case. CD45+ cells were counted in each of the four areas. Cytokeratin AE1/3 was used to assess the location of tumour cells relative to the CD45+ cells in each of the counted areas. These IHC stains were completed on FFPE baseline biopsy samples (n = 68/88) and on fresh-frozen (FF) biopsies taken 20 days post-cycle 1 (Day 20) of TCH/TCHL (n = 20/88). A lymphocyte was counted as a TIL if it was observed to be in direct contact with an invasive tumour epithelial cell [7].
A stromal lymphocyte (SL) was counted if it was dispersed in the stroma, with no contact between the tumour epithelium and the lymphocyte [7]. The overall lymphocyte count (OL) was the combined TIL and SL count. TIL analysis was independent of treatment groups. In samples where the tumour had completely regressed following treatment, the numbers of lymphocytes were assessed by counting four random high-power fields. In the instances of no residual tumour in on-treatment biopsy samples, it is important to note that the biopsy samples were small; whilst we report no residual tumour, it may be that any residual tumour was so scattered and minimal that it was not captured in the small biopsy. T-cell IHC and image analysis We had previously shown, from MCP-counter analysis [9] of a small subset of TCHL patient samples, that increased levels of T-cells were associated with response to TCHL-based therapy [10]. We had sufficient material from 13 patients with matched pre- and on-treatment biopsies to perform T-cell IHC and image analysis. 3 µm serial tissue sections were cut using a Leica RM2135 microtome. IHC analysis was carried out on a Bond-III immunostainer (Leica Biosystems, Newcastle, UK). Primary antibodies for CD3 (Leica, NCL-L-CD3-565), CD4 (Leica, NCL-CD4-368) and CD8 (Leica, NCL-CD8-4B11) were diluted in Bond Primary Antibody Diluent (Leica, AR9352) at 1/40, 1/100 and 1/100, respectively. Pre-treatment of the samples was carried out on the Bond-III using Bond Epitope Retrieval Solution I (Leica, AR9961) for 20 min (CD3, CD8) and Bond Epitope Retrieval Solution II (Leica, AR9640) for 20 min (CD4). Detection and visualization of stained cells were achieved using the Bond Polymer Refine Detection Kit (Leica, DS9800) with Bond DAB Enhancer (Leica, AR9432). Tissues were counterstained with haematoxylin and coverslipped. The CD3-, CD4- and CD8-stained slides for the 13 cases (pre-treatment and on-treatment) were scanned at 40X using a Philips 2.0 scanner, and the whole section was analysed using the open-access image analysis software QuPath [11]. The positive cell detection tool was used to measure the number of positive cells per square millimetre of tissue, which was compared against the assessment of a histopathologist. Two comparisons were made using both QuPath and the histopathologist: first, for each antibody, the number of positive cells in the pre-treatment biopsy was compared to that in the post-treatment biopsy; second, the numbers of CD4+ and CD8+ cells were compared between the pre-treatment biopsy and the post-treatment biopsy. Due to the large number of positive cells in most samples, the pathologist's score could not be given as a numerical value but was noted as a comparative statement between the samples being analysed. The QuPath results for all samples were then compared to the pathologist's score to ensure the accuracy of the software, and the QuPath results were then used for quantitative analysis. Statistical analysis The non-parametric Wilcoxon signed-rank test was used to determine whether there was a significant difference between pathological complete response (pCR) and no-pCR for the three comparison groups (TILs, SLs and overall lymphocytes). The test was paired when comparing the baseline and on-treatment groups. The paired test was also used when comparing pre- versus on-treatment CD3+, CD4+ and CD8+ counts. T-cell markers and tumour content were correlated using the non-parametric Spearman rank correlation. Tumour content versus T-cell markers was plotted, and loess regression was used to fit a smooth line illustrating the relationship between the two variables. P-values of less than 0.05 were considered statistically significant.
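For concreteness, the two tests named above can be run in a few lines with SciPy. The sketch below uses synthetic placeholder counts, not the trial's data.

```python
# Illustrative sketch of the statistical comparisons described above
# (synthetic placeholder data, not the trial's values).
import numpy as np
from scipy.stats import wilcoxon, spearmanr

rng = np.random.default_rng(0)

# Paired comparison: baseline vs on-treatment TIL counts in the same patients.
baseline_tils = rng.poisson(12, size=16)
ontreat_tils = baseline_tils + rng.integers(-3, 8, size=16)
stat, p_paired = wilcoxon(baseline_tils, ontreat_tils)

# Correlation: change in CD4+ T-cell count vs residual tumour content.
delta_cd4 = rng.normal(0, 50, size=13)
tumour_content = 0.5 * delta_cd4 + rng.normal(0, 30, size=13)
rho, p_corr = spearmanr(delta_cd4, tumour_content)

print(f"paired Wilcoxon p = {p_paired:.3f}; Spearman rho = {rho:.2f} (p = {p_corr:.3f})")
```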
Pre-treatment TIL levels correlate with a better pCR rate We determined the numbers of both SLs and TILs in the baseline pre-treatment FFPE tumours of 68/88 patients who were recruited to the TCH/L trial (Fig. 2a, b). Our study demonstrated that patients who achieved a pCR at surgery post-chemotherapy had significantly higher numbers of TILs (p = 0.05) in their baseline pre-treatment tumour samples, relative to those patients who did not achieve a pCR post-chemotherapy (Fig. 2c). We also observed that pre-treatment SL counts may be predictive of a better chance of achieving a pCR post-chemotherapy, but this did not reach statistical significance (p = 0.08). While larger studies have shown that estrogen receptor status is predictive of pCR, it did not have an impact on the rates of pCR in the TCHL study (p = 0.2141) [12]. Correlation between on-treatment lymphocyte counts and pCR We have previously shown that tumour epithelial cells are undetectable in the day-20 on-treatment biopsies of some patients who go on to achieve pCR at subsequent surgery [13]. Tumour biopsy samples were obtained from 20 patients 20 days after they had undergone cycle 1 of neo-adjuvant chemotherapy treatment (on-treatment samples). Analysis of both SLs and TILs in these on-treatment tumour biopsy samples identified that, in contrast to the pre-treatment tumour biopsies, SL, TIL and OL counts were not significantly different between the two groups defined by pCR versus no-pCR at subsequent surgery (Fig. 2d). In our pCR group, we observed that after 1 cycle of therapy, 70% (7/10) of biopsies had no residual tumour remaining (<5% residual tumour). When we compared TIL counts in the pCR group, we observed a non-significant trend whereby TIL numbers were lower in the biopsies with no residual tumour relative to the remaining biopsies where residual tumour remained (p = 0.14). Of the 20 available on-treatment biopsy samples, eight (including one sample from the no-pCR group) had no residual tumour left in the biopsy 20 days after starting neo-adjuvant treatment. Upon excluding these cases, in which the immune response is possibly already subsiding, we observed that OL counts (p = 9.09 × 10^-3) were significantly higher in on-treatment tumour biopsies from patients who subsequently achieved a pCR, relative to those who failed to achieve a pCR at subsequent surgery (Fig. 2e). When we stratified the lymphocyte counts into TILs and SLs, patients who achieved a subsequent pCR after completing neo-adjuvant chemotherapy treatment had significantly higher SL counts (p = 9.09 × 10^-3) in their on-treatment tumour biopsy samples than those patients who did not achieve a subsequent pCR, but this effect was not seen for TILs (p = 0.1). Levels of lymphocytes increase with neo-adjuvant TCHL chemotherapy treatment Of the 20 fresh-frozen on-treatment patient samples available, 16 had matched baseline infiltrating-lymphocyte information, allowing for an analysis of changes in infiltrating lymphocyte levels in paired pre- and on-treatment samples.
Examination of these 16 samples (pCR n = 9 vs no-pCR n = 7), irrespective of residual tumour status, determined that 1 cycle of neo-adjuvant TCHL treatment was associated with changes in the levels of infiltrating lymphocytes in patient tumours when they were stratified on the basis of subsequent pCR. There was no consistent significant difference in TIL, SL or OL levels between the baseline and day-20 tumour biopsies when the tumours in the group that attained a pCR at subsequent surgery were analysed (Fig. 3a). However, in patients who did not achieve a subsequent pCR, we observed a trend from the matched baseline to the day-20 on-treatment samples whereby lymphocyte numbers in the tumours significantly increased (TILs, p = 0.05; SLs, p = 0.08; OLs, p = 0.05) (Fig. 3b, c). Neo-adjuvant TCHL treatment reduces the number of tumour-related T-cells Given the key role of T-cells in regulating the immune response, we performed IHC analysis on the 13/16 paired pre- and on-treatment fresh-frozen samples for which we had sufficient material. The T-cell markers CD3 (pan-T-cell), CD4 (T helper cells) and CD8 (cytotoxic T-cells) were examined. No distinction was made between stromal and tumour-infiltrating T-cell populations. When analysing all patients, we observed that levels of CD3+, CD4+ or CD8+ T-cells did not significantly change from the baseline to the on-treatment tumour biopsy samples (Supplementary Figure 1). To further analyse the effect of neo-adjuvant treatment on T-cell numbers, we examined the changes in CD3+, CD4+ and CD8+ T-cells in the matched baseline and day-20 on-treatment tumour biopsy samples of individual patients (Fig. 4). Interestingly, in those patients who achieved a subsequent pCR, we found a decrease in the levels of CD4+ or CD8+ T-cells in 4/5 patients at day 20. However, in those patients who did not achieve a subsequent pCR, only 4/8 patients had a decrease in CD4+ T-cells at day 20, whilst 5/8 had a decrease in CD8+ T-cells. [Fig. 2 caption: four random areas the size of a high-power microscope field (between 100,000 and 100,500 µm²) were selected in each case for TIL analysis. (c) Correlation between baseline counts of TILs, SLs and OLs and pCR status in TCHL trial patients (n = 68/88). (d) Correlation between pCR status and lymphocyte counts in on-treatment biopsy samples obtained at 20 days, i.e., after 1 cycle of neo-adjuvant treatment (n = 20). (e) Correlation between pCR status and lymphocyte counts in those on-treatment biopsy samples where residual tumour remained after 20 days of neo-adjuvant treatment (n = 13). P-values are calculated using a Wilcoxon signed-rank test; p < 0.05 was considered significant. pCR, pathological complete response; no pCR, no pathological complete response; TIL, tumour-infiltrating lymphocyte; SL, stromal lymphocyte; OL, overall lymphocytes.] A reduction in tumour volume correlates with decreased numbers of CD4+ and CD3+ T-cells As outlined above, neo-adjuvant TCH/L-based treatment results in a reduction of tumour volume after 1 cycle of treatment, and this tumour reduction correlates with a greater chance of a patient achieving a pCR [13]. We therefore aimed to further understand whether there was a correlation between the loss of lymphocytes (CD3+, CD4+ or CD8+ T-cells) in the day-20 on-treatment tumour biopsy and a reduction in tumour volume in the biopsy (Fig. 5).
Using a Spearman rank correlation, we found a significant correlation whereby a reduction of CD3+ (p = 0.04, rho = 0.60) and CD4+ (p = 0.01, rho = 0.72) T-cells was associated with decreased residual tumour content post-1 cycle of treatment. We also observed a similar positive trend for CD8+ T-cells, but the result did not reach statistical significance (p = 0.08, rho = 0.52). We did not, however, see a positive trend for OLs (p = 0.1, rho = 0.50). Discussion Long-term outcomes for women with HER2-driven early-stage breast cancer have significantly improved since the advent of HER2-targeted therapies, including trastuzumab and lapatinib [1]. In our study, we wanted to determine the impact not just of TILs on the likelihood of pCR, but also of neo-adjuvant TCH/L treatment on the levels of TILs in patient samples. Therefore, uniquely, as part of the TCHL trial we were able to obtain on-treatment biopsy samples taken after 20 days of neo-adjuvant chemotherapy. Using these on-treatment samples, we aimed to define the impact of neo-adjuvant treatment not only on infiltrating lymphocyte counts but also specifically on T-cell numbers, the goal being to determine whether changes in lymphocyte populations might influence a patient's chance of achieving a subsequent pCR. TILs are proven to have positive prognostic implications for the outcome of early-stage breast cancer [14], with elevated levels of TILs associated with a greater chance of a patient achieving a pCR (n = 1256) [3]. [Fig. 3 caption: comparison of the changes in lymphocyte counts in individual patients between baseline pre-treatment biopsy samples and matched biopsy samples after 20 days of chemotherapy, for (a) patients who achieved a pCR (n = 9) versus (b) those who failed to achieve a pCR (n = 7); (c) average lymphocyte counts in patients who achieved a pCR and those who failed to achieve a pCR. P-values are calculated using a paired Wilcoxon signed-rank test; p < 0.05 was considered significant. pCR, pathological complete response; no pCR, no pathological complete response; Pre, baseline biopsy; On, on-treatment biopsy; TIL, tumour-infiltrating lymphocyte; SL, stromal lymphocyte; OL, overall lymphocytes; red bars, pCR; blue bars, no pCR.] This effect occurs regardless of the type of neo-adjuvant anti-HER2 agent or chemotherapy used [3]. In our study, we aimed to determine, for research purposes, the impact of neo-adjuvant treatment on TILs in HER2-positive breast cancer, as per the TIL working group [7] and Vinayak et al. [8]. In the TCHL study, we found that baseline numbers of OLs in the tumour were not a significant predictive indicator of pCR (although there was a clear trend (p = 0.0634)). Importantly, we classified these lymphocytes in accordance with Salgado et al. [7], depending on their proximity to the tumour, and then defined them as either TILs (in contact with tumour) or SLs (not in contact with tumour epithelial cells). We determined that, at diagnosis, it is a higher number of TILs that most determines the likelihood of achieving a pCR. The lack of correlation between SLs and the likelihood of pCR in our study is in contrast to that observed in other HER2-positive or triple-negative breast cancer studies, where SLs are an independent predictive marker of pCR [15,16]. The discrepancy between these results could be reflective of the relatively small sample sizes in all of the studies and would have to be examined in a larger cohort.
In a limited number of the day-20 on-treatment biopsies available, we determined SL and TIL levels in patients who achieved a pCR (but had residual tumour after 1 cycle of therapy) and those who failed to achieve a pCR at subsequent surgery. We determined that SL numbers were significantly increased at day 20 in the pCR group. However, we also found, by comparing matched lymphocyte levels between baseline and on-treatment tumour biopsy samples, that levels of lymphocytes do not increase in the group that achieves a pCR at subsequent surgery. This was in contrast to the non-pCR group, where both TIL and OL counts were significantly increased in the tumour after 1 cycle of neo-adjuvant chemotherapy treatment. Our findings support the hypothesis that, in patients who achieve a pCR, the immune microenvironment which already surrounds the tumour at baseline likely plays an important role in the response to subsequent therapy. Hamy et al. [17] identified that increased numbers of SLs at surgery are associated with a worse prognosis, whereas the TRIO-US B07 study demonstrated that on-treatment stromal TIL numbers were higher (though not significantly, p = 0.066) in the pCR group relative to the non-pCR group [18]. Therefore, the results of both the TRIO-US B07 study and our TCHL study identify that treatment may quickly increase the numbers of lymphocytes around a tumour, in particular in patients who do not achieve a subsequent pCR, and thus also suggest that analysis of lymphocyte numbers after the start of a patient's treatment may provide a good indication of how the tumour is likely to respond. Interestingly, in the only other neo-adjuvant HER2-positive breast cancer study to collect matched on-treatment biopsy samples (PAMELA), Nuciforo et al. [19] found that, in patients with HER2-positive breast cancer who achieved a pCR, 15 days of treatment with dual HER2 blockade (but no chemotherapy) resulted in a significant increase in the level of TILs, and this effect was associated with an increased chance of achieving a subsequent pCR. However, in the PAMELA study, in contrast to our study, SLs and TILs were not determined as separate populations; only OLs were counted. In our clinical study, patients were also treated with chemotherapy along with trastuzumab ± lapatinib, which may have a different impact on the immune contexture within tumours. Many studies have identified the importance of baseline lymphocyte numbers as a positive prognostic factor in determining subsequent pCR [20-22]. To date, however, no study has looked at the impact of neo-adjuvant treatment on the immune contexture within tumours, particularly T-cell levels, and how this correlates with future pCR. [Fig. 5 caption fragment: loess regression was used to fit the smooth line to the data (red); the dotted lines show the 95% confidence intervals.] The pan-T-cell marker CD3 indicates the T-cell numbers present in the biopsy samples, as opposed to the CD45 IHC antibody, which identifies a broad array of haematopoietic immune cell types, including T-cells, NK cells, B-cells and macrophages/monocytes (but not erythrocytes and platelets) [23]. CD4 and CD8 identify two important general T-cell subsets. CD4+ T-cells can play an important role in directly killing tumour cells, influencing the active immune response within the tumour microenvironment, and increasing the activity of B-cells and cytotoxic CD8+ T-cells in secondary lymphoid organs [24]. CD8+ T-cells are antigen-specific, cytotoxic cells that are a major effector cell of the adaptive immune response [25].
Exhausted CD8+ T-cells are the target of immune checkpoint inhibitor drugs such as pembrolizumab, nivolumab, and ipilimumab that are producing remarkable responses across multiple cancer types [26]. When we compared matched baseline and on-treatment samples in our small cohort, we observed a reduction in the numbers of CD4+ and CD8+ T-cells, in particular in tumours in the pCR group. Indeed, overall we found a significant correlation between a reduction in CD3+ and CD4+ T-cell numbers and a reduction in tumour cell content in tumour biopsies after 1 cycle of treatment. The latter is correlated with the likelihood of subsequent pCR at surgery, as we have also shown previously [13]. Reduced TILs, or in our case reduced CD4+ T-cell numbers around the tumour, could be a direct result of chemotherapy treatment [17,[27][28][29][30]], but the association with reduced tumour cell content here suggests an already diminishing immune response in those tumours that are exquisitely sensitive to neo-adjuvant treatment. That the change in CD8+ T-cells is not as dramatic may reflect a different clearance dynamic between the T-cell subsets following tumour elimination. Supportive of this result, the TRIO-US B07 study showed that levels of CD8+ T-cells were lower in those tumours which had reduced immune content, likely as a result of reduced tumour burden [18]. However, the TRIO-US B07 study did not correlate this result with subsequent pCR. Our results are based on a small cohort of samples; therefore, further classification of subsets of CD4+ T-cells, such as the immune-dampening CD4+ FOXP3+ regulatory T-cells (Tregs), in a larger population of on-treatment HER2+ breast cancer biopsy samples is warranted in the future to provide greater insight [31].

Our analysis sheds light on the modulation of the immune response that occurs early during neo-adjuvant chemotherapy. In on-treatment tumour biopsy samples, lymphocyte counts increase after 1 cycle of neo-adjuvant therapy (in particular in tumours that do not end in pCR at subsequent surgery), but a reduction in T-cell counts occurs in some tumours, which correlates with a lower tumour burden in day-20 on-treatment tumour biopsies. The latter is associated with a higher likelihood of pCR at subsequent surgery [13]. The results of our study and the TRIO-US study indicate that, even after 1 cycle of treatment, the immune system may have already 'played its role' in responding tumours. Our results are limited by small tumour numbers but highlight the need to study the early impact of neo-adjuvant treatment in a larger population to confirm these exciting initial findings. These studies could be expanded to assess the impact of dual anti-HER2 antibody therapy (including both trastuzumab and pertuzumab) on immune contexture.

Funding Open Access funding provided by the IReL Consortium. This study was supported by funding from the Irish Cancer Society's research centre BreastPredict (CCRC13GAL) and NECRET, the Northeast Cancer Research and Education Trust.

Supplementary Information

Data availability All data will be made available upon reasonable request.

Declarations

Conflict of interest I can confirm that the authors included have no conflicts of interest in regard to this study.
Ethical approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Informed consent All patients included in this study were consented and recruited to the TCHL (ICORG10-05) (NCT01485926) Phase-II clinical trial. Ethical approval was obtained from University College Cork, Ireland.

Consent for publication I can confirm that the authors have consented to the publication of this study.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
6,325.2
2021-05-13T00:00:00.000
[ "Medicine", "Biology" ]
Cosmological Observatories

We study the static patch of de Sitter space in the presence of a timelike boundary. We impose that the conformal class of the induced metric and the trace of the extrinsic curvature, $K$, are fixed at the boundary. We present the thermodynamic structure of de Sitter space subject to these boundary conditions, for static and spherically symmetric configurations to leading order in the semiclassical approximation. In three spacetime dimensions, and taking $K$ constant on a toroidal Euclidean boundary, we find that the spacetime is thermally stable for all $K$. In four spacetime dimensions, the thermal stability depends on the value of $K$. It is established that for sufficiently large $K$, the de Sitter static patch subject to conformal boundary conditions is thermally stable. This contrasts with the Dirichlet problem, for which the region encompassing the cosmological horizon has negative specific heat. We present an analysis of the linearised Einstein equations subject to conformal boundary conditions. In the worldline limit of the timelike boundary, the underlying modes are linked to the quasinormal modes of the static patch. In the limit where the timelike boundary approaches the cosmological event horizon, the linearised modes are interpreted in terms of the shear and sound modes of a fluid dynamical system. Additionally, we find modes with a frequency of positive imaginary part. Measured in a local inertial reference frame, and taking the stretched cosmological horizon limit, these modes grow at most polynomially.

Introduction

In the absence of an asymptotic spatial or null boundary, the construction of gauge invariant observables subject to the constraints of diffeomorphism redundancies in a theory of gravity becomes a challenging task. One is often led to relational notions [1][2][3] (for a review of recent work see [4]) whereby a given physical phenomenon is measured in relation to some other semiclassical feature. For instance, in inflationary models of the early Universe, we can measure the time-dependence of physical phenomena with respect to the slow classical roll of the background inflaton field. From a more quasilocal perspective, one might imagine decorating spacetime with a worldline [5][6][7][8], perhaps slightly thickened into a worldtube, and use this as a reference frame for ambient phenomena. This perspective appears to be of particular value for an asymptotically de Sitter spacetime [9][10][11][12][13][14][15][16], where not only are the Cauchy spatial slices potentially compact, but quasilocal entities are moreover surrounded by a cosmological event horizon rendering most of the expanding portion of spacetime physically obscure. A drawback of the relational approach is that it often necessitates the presence of a semiclassical feature in spacetime, making the general picture away from the semiclassical or perturbative regime difficult to control.
An alternative, complementary, route may be to study the gravitational theory on a manifold endowed with a quasi-auxiliary timelike boundary Γ, much like we do when considering gravitational physics in anti-de Sitter space, and try to make sense of general relativity in such a setting.This setup has been the focus of recent work in mathematical relativity [17][18][19][20][21][22], accompanied by [23][24][25] (see also [26,27] for related work).Of particular interest to our work is the proposal of [18,21] that in four spacetime dimensions certain conformal data along Γ lead to a well-posed initial boundary value problem.In [18,21] it is further established that generically, the Dirichlet problem in general relativity-whereby one fixes the induced metric along Γ-suffers from potential existence and nonuniqueness issues for both Euclidean and Lorentzian signature.Concretely, the conformal boundary conditions of interest fix the conformal class of the induced metric, [g mn | Γ ] (conf) , and the trace of the extrinsic curvature, K, along Γ whilst also specifying standard Cauchy data along a spalike surface Σ intersecting Γ at its boundary. In this paper, we explore the conformal boundary conditions of [18,21] for general relativity with a positive cosmological constant Λ [28][29][30].We consider the problem in both Euclidean and Lorentzian signatures.In Euclidean signature, our main goal is to compute the semiclassical approximation of the gravitational path integral and consider its interpretation from the point of view of Euclidean gravitational thermodynamics [31,32].In the absence of a boundary, the natural Euclidean geometry for general relativity with Λ > 0 is the sphere [31], void of any external data such as the size of a thermal circle, and one is led to path integrate fields on top of it.In adding a boundary to our Euclidean manifold, as pointed out in [12,33,34] among other places, one has the possibility of providing a novel perspective to the sphere path integral and its rich though elusive physical content [35]. 1 In Lorentzian signature, one is led to the question of dynamical features of de Sitter space, known to be dynamically stable at the classical level [38], in the presence of a timelike boundary.There are two timelike surfaces of particular interest.One of these is the worldline limit, whereby the spatial size of Γ becomes small in units of Λ.The other is the cosmological horizon limit, whereby Γ approaches the cosmological de Sitter horizon.The former limit is of interest in describing the theory of a quasilocal entity, whilst the latter is of interest if one wishes to describe physics from the perspective of a stretched horizon [39][40][41][42][43]. Organisation and summary of main results In section 2, we present the general framework, and provide an explicit definition of the conformal boundary conditions.As noted, our gravitational theory is endowed with a positive cosmological constant Λ = +(D − 1)(D − 2)/2ℓ 2 in D spacetime dimensions. 
In sections 3 and 4, we consider the problem in Euclidean signature for D = 3 and D = 4 spacetime dimensions respectively.The boundary of our manifold is taken to have an S 1 × S D−2 topology.In the standard treatment of semiclassical black hole thermodynamics [31] with Dirichlet boundary conditions, one defines the canonical ensemble by fixing the size of the boundary S 1 to be the inverse temperature β and the radius of the spatial sphere to be fixed to some size r.In our treatment, we will instead fix the conformal class of the boundary metric.As such, we fix a conformal version of the inverse temperature, β ≡ β/r.The other boundary data we fix is the trace of the extrinsic curvature, K.We refer to this ensemble as the conformal canonical ensemble.Gravitational solutions in the conformal canonical ensemble include patches with no horizons, referred to as pole patches, patches with cosmological horizons, referred to as cosmic patches, and patches with black hole horizons, referred to as black hole patches.Upon tuning β and K, one finds that the space is filled with a pure de Sitter spacetime, and we refer to this as a pure de Sitter patch. The complete thermal phase space of static and spherically symmetric solutions at a given β > 0 and Kℓ ∈ R is provided in both D = 3 and D = 4 spacetime dimensions.Below we summarise the main results, with an emphasis on the conformal thermodynamics of the pure de Sitter patch. Three-spacetime dimensions.Our analysis in D = 3 spacetime dimensions naturally builds on recent developments on dS 3 [12,[44][45][46].We find that upon imposing conformal boundary conditions: • Both a pole and a cosmic patch exist at any value of β and Kℓ. • The entropy of the cosmic patch is given by the Gibbons-Hawking entropy of the cosmological horizon, and the specific heat is positive for all β and Kℓ. • The thermodynamic quantities take the form of a two-dimensional conformal field theory. Viewed as such, we identify a c-function which decreases monotonically as one goes from the worldline limit, where Kℓ → −∞, to the stretched horizon, where Kℓ → +∞. • There is a phase transition at βc = 2π (for all values of Kℓ).At β > βc , the cosmic patch is the thermally preferred solution.In the stretched horizon limit, the cosmic patch is thermally stable, while in the worldline limit it is metastable. The thermodynamic picture in D = 3 contrasts that of the canonical ensemble, obtained by imposing Dirichlet boundary conditions.In the canonical ensemble, the specific heat of the cosmic patch is always negative. Four-spacetime dimensions.We now summarise the situation in D = 4 spacetime dimensions, which naturally builds on previous work in [33,34,[47][48][49].We find that upon imposing conformal boundary conditions: • A pole patch solution exists for any value of β and Kℓ.On the other hand, cosmic/black hole horizon patch solutions only exist for certain values of β and Kℓ.When they do exist, we identify three distinct solutions for a given β and Kℓ-one pole patch and two horizon patches, which can be of the cosmic or black hole type. • The entropy of the solutions with horizons is always given by their respective horizon area formulas.The solution with the larger horizon area always has positive specific heat, while the one with a smaller horizon area always has negative specific heat. • When the cosmic patches have positive specific heat, one can take a large temperature limit. 
In this limit, the entropy goes as N d.o.f./ β2 , and thus resembles that of a conformal field theory in three dimensions.We identify Further taking the large Kℓ limit of the above expression yields N d.o.f.≈ 8π 3 G N K 2 , a behaviour identified in [25] for black holes in Minkowski space subject to conformal boundary conditions. • We identify pure dS 4 patches with positive specific heat.These solutions exist when the tube is positioned sufficiently near the cosmological horizon, starting from r tube ≈ 0.259ℓ.Depending on Kℓ, these solutions are either metastable or globally stable.The pure dS 4 patch is thermally stable in the stretched horizon limit, at least among the spherically symmetric sector.Near the worldline regime, pure dS 4 patches have negative specific heat.The full phase diagram is presented in figure 10. We can contrast the thermodynamic behaviour above to that of the canonical thermal ensemble stemming from Dirichlet boundary conditions [33,34,[47][48][49].In the latter, the specific heat of the pure dS 4 patch is always negative. In section 5, we consider the four-dimensional Lorentzian picture.The theory is placed on a manifold with timelike boundary of R × S 2 topology.In the Lorentzian case, one must further supply standard Cauchy data along the initial time spatial slice Σ. Employing the Kodama-Ishibashi method [50], we present the linearised gravitational dynamics about the pure de Sitter solution.The linearised solutions split into vector and scalar modes concerning their transformation properties under SO(3).We are mainly interested in two limits: one in which the boundary is close to the worldline observer, which we call the worldline limit; and a second one, in which the boundary becomes close to the cosmological horizon, which we call the stretched horizon limit.In each case, the main results of our analysis are: • In the worldline limit of the cosmic patch, we retrieve a set of modes that approximate the quasinormal modes of the static patch [51], whilst also uncovering a family of modes in the scalar sector with a negative imaginary part.The latter modes have a Minkowskian analogue uncovered in [25]. • In the stretched horizon limit, our modes degenerate into a variety of modes.The low-lying vector modes match a set of modes identified in [52] as a type of linearised shear mode for an incompressible non-relativistic Navier-Stokes equation.The scalar modes take either the form of a sound mode with diverging speed of sound as we approach the horizon limit, or a pair of modes with ωℓ = ±i.We provide an understanding of these two modes from a purely Rindler perspective and note that in a local inertial frame the exponential behavior becomes polynomial. Additional technical details are provided in the various appendices. General framework We consider vacuum solutions to general relativity with positive cosmological constant Λ = +(D − 1)(D − 2)/2ℓ 2 in D = 3, and D = 4 spacetime dimensions.In Euclidean signature, the action I E is given by where G N is the Newton's constant, Γ = ∂M, g mn denotes the induced metric at Γ, and the trace of the extrinsic curvature K is given by Here, n = nµ ∂ µ is an outward pointing unit normal vector associated with the boundary, and L n denotes a Lie derivative with respect to nµ .We adopt the notation in which Greek indices µ = 0, ..., D − 1 are used for spacetime indices and m = 0, ..., D − 2 are used for spacetime indices tangent to the boundary. The constant α b.c. 
in (2.1) depends on the choice of boundary conditions.In most of the paper we will consider conformal boundary conditions, in which we fix the conformal class of the induced metric and the trace of the extrinsic curvature at the boundary, With this set of boundary conditions, the initial boundary value problem in general relativity is proven to be well-posed in Euclidean signature [18,23] and conjectured to be well-posed in Lorentzian signature [21,24].This is in contrast to Dirichlet or Neumann boundary conditions, where general relativity does not permit a well-posed initial boundary value problem for generic boundary data.On occasion, it will be useful to contrast results between different boundary conditions.One has for Dirichlet boundary conditions , for Neumann boundary conditions , to ensure the variational principle is well-defined.Note that for Dirichlet boundary conditions, this gives the standard Gibbons-Hawking-York term [31,53] and that conformal and Neumann boundary conditions have the same action in D = 3 [54]. Regardless of the choice of the boundary term, the equations of motion satisfied in the interior manifold are the Einstein field equations Conformal thermodynamics Following [25], we would like to study the thermodynamic behaviour of solutions subject to conformal boundary conditions, but now in the presence of Λ > 0. For this, we take the topology of the boundary to be S 1 ×S D−2 , and consider the following boundary data, where ω is an unspecified function that in principle could depend on boundary coordinates, 2 and dΩ 2 D−2 is the round metric of the unit (D − 2)-sphere.The Euclidean time coordinate τ ∼ τ + β parameterises the S 1 factor.The parameter r characterises the size of the S D−2 .Given that only the conformal class of the metric is specified, only the dimensionless parameter β ≡ β/r is geometrically meaningful. To define the conformal canonical ensemble, we consider a partition function Z( β, K) as where g * µν are Euclidean metrics satisfying the Einstein field equation (2.5) and obeying the boundary conditions (2.6).Note that if there is more than one solution with the same boundary data, we sum all of them. This equation may have solutions apart from ω = ω constant (a preliminary numerical analysis indeed suggests solutions periodic in τ ).A similar phenomenon occurs in Lorentzian signature, for higher dimensional cases, and also for Λ = 0. We leave a full analysis of these solutions for future work.Just as a simple concrete example, one can consider Euclidean de Sitter solutions in D = 3.One solution is simply given by choosing ω constant, which gives Kℓ as in (3.4).One could also consider the (Euclidean) de Sitter slicing.In this case, the bulk metric can be conveniently written as which at a constant ρ = ρ0, has the same boundary conditions as in (2.6), but now ω depends on τ .The trace of the extrinsic curvature at the boundary is given by Kℓ = 2 cot ρ0, so one can choose ρ0 so that both solutions have the same boundary data.Nonetheless, the time-symmetric spatial slice with τ = 0, which has vanishing extrinsic curvature, has a different proper area than the constant ω solution.Moreover, the above metric is not periodic in τ . A Lorentzian version of these configurations is obtained by taking τ → it.A subset of solutions to (2.7) will appear at the linearised level, and we analyse them in appendix E. 
These additional solutions need not spoil the uniqueness properties of the Lorentzian conformal boundary conditions, as they have distinguishable Cauchy data. According to the Gibbons-Hawking prescription [31], we interpret Z( β, K) as a leading contribution to the thermodynamics partition function in the G N → 0 limit.Since we do not fix the Euclidean time periodicity but rather the dimensionless ratio β, we interpret this as a thermal system in a conformal canonical ensemble at a fixed conformal temperature β−1 . Given the partition function Z( β, K), one can compute different thermodynamic quantities.For instance, the conformal energy, conformal entropy, and specific heat at fixed K are given by Regular Euclidean solutions.For certain ranges of β and K, the Einstein field equation may give rise to a solution g * µν which contains a Euclidean horizon.In analogy to the Dirichlet case, requiring the solution to be regular at the horizon fixes its size in terms of β and K.We will consider g * µν that are both static and spherically symmetric, taking the explicit form where ω is a constant and f (r), a function of r only.(Although it would be interesting to explore the existence of saddles subject to conformal boundary conditions with less restrictive symmetry properties than (2.11), we will postpone such an analysis to future work.)Note that at the boundary r = r with an outward3 normal vector n = f (r)∂ r , so this metric satisfies conformal boundary conditions (2.6).We further assume that f (r) has a simple root at r = r + , so that (2.13) Then, close to r + , its near horizon geometry (to leading order) is given by, where ρ ≡ 2e ω r−r + f ′ (r + ) .This geometry has a conical singularity near r = r + unless one identifies τ ∼ τ + β with Thus, regularity near the horizon fixes the horizon radius r + in terms of boundary data β and K. dS conformal thermodynamics We first study conformal thermodynamics of three-dimensional gravity with Λ = +1/ℓ 2 > 0. A family of static Euclidean solutions to (2.5) is given by where τ ∼ τ + β and ϕ ∼ ϕ + 2π.The choice for this particular parameterisation of the solution will soon become evident.The parameter ω is an unspecified constant and directly controls the physical size of the boundary, namely r tube = e ω r.The cosmological horizon is located at r = e −ω r c and has a physical radius r c > 0. There is no black hole horizon in the present setup. 4However, for r c ̸ = ℓ, there is a conical defect located at the origin r = 0. Note that one can recover the standard dS static patch coordinates (τ static , r static , ϕ static ) via the identification It is straightforward to verify that due to the choice of parameterisation (3.1), the metric automatically satisfies the first boundary condition in (2.6) at r = r.Requiring that the trace of the extrinsic curvature at the boundary is constant, further fixes the parameter ω in terms of the boundary data, where the ± corresponds to two different spacetime regions of interest, which we discuss below.We also note that, assuming that ω does not depend on τ , we find that (3.1) is the most general solution to the Einstein field equation in three dimensions. We consider two classes of solutions, which we call the pole and the cosmic patch [12].The first one is a patch of spacetime which does not contain the cosmological horizon, while the second one does.In Lorentzian signature they would correspond to the regions shown in figure 1. 
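As an aside on how the boundary datum K is evaluated in practice, the sketch below computes the trace of the extrinsic curvature of a constant-radius tube in a static, spherically symmetric metric of the form −f(r)dt² + dr²/f(r) + r²dΩ²_{D−2}. The coordinates, the normal n = ±√f ∂_r and its orientation are our own conventions rather than a reproduction of (2.2) or (3.3); flipping the orientation only flips the sign of K. For D = 3 and f = 1 − r²/ℓ², the tube with K = 0 sits at r = ℓ/√2, consistent with the pure dS3 statement made further below.

```python
# Sketch: trace of the extrinsic curvature K of a constant-r tube in a
# static, spherically symmetric metric -f(r) dt^2 + dr^2/f(r) + r^2 dOmega^2.
# Conventions (normal orientation n = +sqrt(f) d/dr) are ours.
import sympy as sp

r, ell = sp.symbols('r ell', positive=True)
D = sp.Symbol('D', integer=True, positive=True)
f = sp.Function('f')(r)

# K = div(n) = r^{-(D-2)} d/dr ( r^{D-2} sqrt(f) )
K = sp.diff(r**(D - 2) * sp.sqrt(f), r) / r**(D - 2)
print(sp.simplify(K))   # f'(r)/(2*sqrt(f)) + (D - 2)*sqrt(f)/r

# Pure de Sitter static patch in D = 3: f(r) = 1 - r^2/ell^2.
K_dS3 = sp.simplify(K.subs(D, 3).subs(f, 1 - r**2 / ell**2).doit())
print(K_dS3)

# Radius of the tube on which K vanishes:
print(sp.solve(sp.Eq(K_dS3, 0), r))   # [sqrt(2)*ell/2], i.e. r = ell/sqrt(2)
```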
Below we study the two solutions and their corresponding thermodynamic quantities, separately. Pole patch In the first class of solutions, the cosmological horizon is fixed to be at r c = ℓ, leading to the absence of the conical defect.The spacetime region of interest is r ∈ [0, r], with the boundary at r = r ≤ ℓ.We call this spacetime the pole patch of dS.This solution can be obtained by choosing the minus sign in (3.3). Imposing that the boundary has a constant trace of the extrinsic curvature K leads to which can be inverted to obtain The dimensionless parameter Kℓ ∈ R controls the size of the boundary.For Kℓ → +∞ the boundary locates near the origin, whilst for Kℓ → −∞ it is located near the cosmological horizon.When Kℓ = 0, the boundary is located exactly at Since the cosmological horizon is not part of the pole patch, the parameter β is free, and there is a pole patch solution for all values of β and K. Pole patch thermodynamics.By evaluating (2.1) with α b.c.= 1 and D = 3 on the pole patch solution, the on-shell Euclidean action in terms of the boundary data becomes, Since the action depends linearly on β, one immediately finds that S conf = C K = 0 and that which is independent of β.Note that small fluctuations of the energy can be written as, Following [58], we treat the pole patch of dS as a reference configuration and the on-shell action (3.6) as a subtraction term.Therefore, the conformal energy (3.7) plays the role of a vacuum energy.From now onwards, we will compute subtracted quantities such that the energy of the pole patch solution with trace of extrinsic curvature K vanishes. Cosmic patch We now consider the second class of static geometries (3.1), which contain the cosmological horizon and hence are dubbed as cosmic patches of dS.This is achieved by choosing the plus sign on (3.3) and considering the region r ∈ [r, e −ω r c ].As a consequence, the conical defect at r = 0 is not part of the cosmic patch. Regularity of the geometry near the cosmological horizon imposes that the inverse conformal temperature of the cosmic patch is given by which is always greater than zero.The conformal temperature β−1 becomes zero as the boundary approaches the origin.On the other hand, the conformal temperature diverges to infinity as the boundary approaches the cosmological horizon. Requiring that the boundary has a constant trace of the extrinsic curvature K, fixes which can take any real value.Contrary to the pole patch, the limit of Kℓ going to positive and negative infinity now corresponds to the limit of the boundary approaching the cosmological horizon and the origin, respectively.This is expected as the normal vector now points in the opposite direction. Using (3.9) and (3.10), we can express r c and r tube in terms of the boundary data β and Kℓ, Interestingly, both r c and r tube depend linearly on the conformal temperature β−1 .This fact implies that the cosmological horizon r c is a monotonically increasing function of the conformal temperature, which contrasts with the Dirichlet problem, where one finds an opposite behaviour, see appendix A. Additionally, for any positive β and real K, one always finds that 0 < r tube < r c .We show rc β ℓ and r tube β ℓ as functions of Kℓ in figure 2. 
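A small numerical illustration of the cosmic patch boundary data just described. The parameterisation below (dS3 with horizon radius r_c written as f(r) = (r_c² − r²)/ℓ², tube at r_tube, inward-pointing radial normal) is our own stand-in for (3.1), and the conformal data are built as ratios of proper lengths so that any overall conformal factor drops out. The printout illustrates the statement above, following from (3.9) and (3.10), that at fixed Kℓ both r_c and r_tube scale linearly with the conformal temperature.

```python
# Numerical illustration of the dS3 cosmic patch boundary data, in our own
# parameterisation: f(r) = (r_c^2 - r^2)/ell^2, region r_tube < r < r_c.
import numpy as np

ell = 1.0

def boundary_data(r_c, r_tube):
    f = (r_c**2 - r_tube**2) / ell**2
    beta_tau = 2 * np.pi * ell**2 / r_c           # regularity at r = r_c
    beta_conf = beta_tau * np.sqrt(f) / r_tube    # proper S^1 length / tube radius
    K = r_tube / (ell**2 * np.sqrt(f)) - np.sqrt(f) / r_tube  # inward normal
    return beta_conf, K * ell

x = 0.4   # fixed ratio r_tube/r_c, which fixes K*ell
for r_c in (0.3, 0.6, 0.9):
    b, Kl = boundary_data(r_c, x * r_c)
    print(f"r_c={r_c:.2f}  K*ell={Kl:.3f}  r_c*beta_conf={r_c*b:.3f}")
# K*ell and r_c*beta_conf are the same on every line: at fixed K*ell, the
# horizon and tube radii are proportional to the conformal temperature.
```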
Cosmic patch thermodynamics. To compute thermodynamic quantities, we evaluate the cosmic patch solution on-shell in the Euclidean action (2.1). It is useful to define a regulated quantity by subtracting the pole patch action (3.6) with the same boundary data. Then, the associated regulated action corresponding to the cosmic patch solution is given by (3.12); we emphasise that the pole action that we subtract has the same trace of extrinsic curvature K. The conformal energy and conformal entropy then follow, and the entropy S_conf agrees with the Gibbons-Hawking entropy A_horizon/4G_N of the cosmological horizon. We can also compute the specific heat at constant K. The specific heat is positive for all allowed values of β̃ and K, which means that this configuration is thermally stable under small thermal fluctuations. This is in contrast to the specific heat of the cosmic patch with Dirichlet boundary conditions, which is always negative, as reviewed in appendix A. Moreover, note that C_K grows linearly with the conformal temperature. This behaviour resembles that of a two-dimensional conformal field theory at finite temperature. This observation can be sharpened upon expressing β̃ in terms of E_conf, whereby the conformal entropy and specific heat can be written in terms of a central charge c_conf, which is a monotonically decreasing function of Kℓ displayed in figure 3. Note that the entropy in (3.15) resembles the Cardy formula, describing the growth of states of energy E_conf in a two-dimensional conformal field theory of central charge c_conf [61]. By considering the large positive and negative Kℓ limits, one obtains the limiting values of c_conf recorded in (3.17).

(Figure 3 note: as a comparison, the plot also includes the analogous c^BTZ_conf for the BTZ black hole with conformal boundary conditions, with the dS and AdS radii chosen to be equal. In the limit Kℓ → ∞ both central charges coincide, as the tube in both cases gets very close to the horizon and both exhibit Rindler behaviour. Dashed lines show the position of the conformal boundary in the AdS case, where the Brown-Henneaux central charge is recovered.)

Finally, using (3.15), we find a first-law type of relation, in which µ_K can be understood as the chemical potential associated to K.

Pure dS3 patch

We now discuss a particular solution, denoted as the pure dS3 solution. The solution arises from tuning the conformal temperature of the system such that r_c = ℓ, leading to pure dS3 with a boundary. For the cosmic patch, this happens at the conformal temperature β̃ = β̃_dS, which is a function of Kℓ given in (3.21). We can recover the standard dS temperature when the tube becomes small, β̃_dS r_tube → 2πℓ as r_tube → 0 (3.22). This is the worldline limit of the solution. It also corresponds to a small conformal temperature. On the other hand, we can consider the stretched horizon limit, defined by taking r_tube → ℓ. This is equivalent to a high conformal temperature limit, β̃_dS → 0 as r_tube → ℓ (3.23).

Now we can use (3.13) and (3.14) to calculate the thermodynamic properties of pure dS3. The conformal entropy and specific heat at constant K are equal and independent of r_tube. We stress again that the specific heat C_K is positive for all values of r_tube. In the worldline limit, the conformal energy reduces to c_conf/12. One may wonder why the energy does not go to zero as we take the worldline limit; the reason is that we are considering subtracted energies.
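As a quick consistency check on the limits (3.22)–(3.23), and on the statement below that the pure dS3 curve meets the critical temperature β̃_c = 2π at Kℓ = 0 (r_tube = ℓ/√2), one can tabulate the pure dS3 conformal temperature directly. The closed form used below, β̃_dS = 2πℓ√(1 − r_tube²/ℓ²)/r_tube, is our own evaluation in standard static patch coordinates and should be checked against (3.21).

```python
# Pure dS3 (r_c = ell): conformal inverse temperature of the cosmic patch
# as a function of the tube radius, in the conventions noted above.
import numpy as np

ell = 1.0
def beta_dS3(r_tube):
    return 2 * np.pi * ell * np.sqrt(1 - (r_tube / ell)**2) / r_tube

# Worldline limit: beta_dS * r_tube -> 2*pi*ell, cf. (3.22).
for rt in (1e-2, 1e-3, 1e-4):
    print(rt, beta_dS3(rt) * rt / (2 * np.pi * ell))

# Stretched horizon limit: beta_dS -> 0, cf. (3.23).
print(beta_dS3(0.999 * ell))

# At r_tube = ell/sqrt(2) (the K = 0 tube), beta_dS equals 2*pi:
print(beta_dS3(ell / np.sqrt(2)) / (2 * np.pi))   # -> 1.0
```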
To reproduce the full static patch thermodynamics from a cosmic patch solution, we must include the pole patch to complete a full static patch.Note that this pole patch is not the same that we are using to define the regulated action, as this one has the same β but the opposite trace of the extrinsic curvature, −K.Now it is straightforward to check that, for β = βdS , a combination of a pure de Sitter patch with K and a pole patch with −K (without the subtraction terms) indeed reproduces the Gibbons-Hawking result, as the right-hand side is the exponential of the Gibbons-Hawking entropy for the de Sitter horizon. For the energy, we can define a regulated action for this pole patch with opposite K, In the pure dS 3 solution, the conformal energy of the pole patch with −K is negative and it is exactly opposite to the conformal energy of the cosmic patch with K such that as we expect for the full static patch of de Sitter space. Phase diagram Now we can combine the results from pole patch and cosmic patch thermodynamics.Recall that in D = 3, both solutions exist for all values of β and K.This means that one may write the partition function of the total system in the semiclassical limit as where E, reg ( β, K) are given by (3.6) and (3.12), respectively.The sign of I (cosmic) E, reg ( β, K) therefore determines which configuration is stable/meta-stable.It can be shown that, independently of Kℓ, there is a critical inverse temperature β = 2π ≡ βc , for which I (cosmic) E ( β, K) changes sign, see figure 4.There is a first-order phase transition at β = βc . • In the low-temperature regime with β > βc , is positive for all values of Kℓ which implies that the pole patch is thermodynamically favoured.Given it has positive specific heat, the cosmic patch is then metastable. • In the high-temperature regime with β < βc , I (cosmic) E, reg becomes negative, so it becomes thermodynamically favored and a stable configuration.It is interesting to analyse the phase structure of conformal thermodynamics at fixed Kℓ.Consider the conformal energy along the path of lowest free energy at fixed Kℓ.Using (3.7) and (3.13) and expressing them in terms of the central charge c conf , we find that The discontinuity of E conf at β = βc reflects a first-order phase transition. For the conformal entropy, we find a behaviour similar to the Hawking-Page transition of the AdS black hole [62,63].For temperatures lower than β−1 c , the conformal entropy is zero.There is a discontinuity in the entropy at the critical temperature β−1 c , after which, in the high temperature regime, the entropy is precisely given by the Gibbons-Hawking entropy. Pure dS 3 phase structure.For the pure dS 3 solution, we must constrain the inverse temperature to the dS inverse temperature (3.20).In this case, Kℓ = 0 corresponds to r tube = ℓ/ √ 2. • Kℓ > 0 implies that c conf < 3ℓ 2G N .In this regime, the dS temperature is higher than the critical temperature, βdS < βc .As a consequence, in this regime, the pure dS 3 has free energy lower than the pole patch and hence is thermodynamically favoured. 
2G N , we find that βdS > βc , so the pure dS solution is only metastable.Lastly, at c conf = 3ℓ 2G N , the phase transition and dS 3 temperature coincide.The full phase diagram, including the curve of pure dS 3 solutions, is depicted in figure 5.There is a critical conformal inverse temperature at βc = 2π, marked in black.For β < βc (shaded in green), the cosmic patch is the most favourable configuration.For β > βc (in white), the cosmic patch is metastable.The darker green curve shows pure stable (solid) and metastable (dashed) dS3 solutions, that follow relation (3.20).Worldline and stretched horizon limits of pure dS3 are further indicated. A two-sphere perspective As a final remark before moving on to the four-dimensional case, we consider a two-sphere rather than toroidal boundary topology.As our coordinate system, we take where for the full three-sphere we have ρ ∈ (−ℓ, ℓ), θ ∈ (0, π), and ϕ ∼ ϕ + 2π.We take the conformal boundary to be located at constant ρ = ρ 0 .The induced metric has the conformal structure of the unit S 2 metric.Requiring further that the boundary has a constant K fixes where we have used a unit normal vector n = ℓ √ ℓ 2 −ρ 2 ∂ ρ .By computing the on-shell action (2.1) for this solution, we can approximate the path integral as The above expression is real valued.Unlike the thermodynamic expression (3.13) and (3.15), the above expression does not immediately take the form of the two-sphere path integral of a twodimensional conformal field theory of central charge c conf .A similar observation will hold for Euclidean AdS 3 with a two-sphere boundary subject to conformal boundary conditions. dS conformal thermodynamics In this section, we study conformal thermodynamics of four-dimensional gravity with Λ = +3/ℓ 2 for the following family of static and spherically symmetric Euclidean solutions where τ ∼ τ + β, θ ∈ (0, π), and ϕ ∼ ϕ + 2π.Similarly to the three-dimensional case, the parameter ω controls the size of the boundary, namely r tube = e ω r. In D = 4, one further has the Euclidean Schwarzschild-de Sitter solution corresponding to a black hole placed inside the cosmological horizon.The parameter µ is related to the size of the cosmological horizon r c through For µ = 0, we obtain an empty de Sitter solution with cosmological horizon at r c = ℓ.For µ > 0, there is a black hole horizon located at r = e −ω r bh with horizon radius r bh .The cosmological horizon in this case is smaller than the one without the black hole, r c < ℓ.Note that the size of the two horizons are related by Both horizon sizes coincide when r c = r bh = ℓ √ 3 .This is known as the Nariai radius, which also serves as a lower bound of r c .For µ < 0, there is a naked singularity located at r = 0, and the cosmological horizon is greater than the de Sitter length, r c > ℓ. As in dS 3 , we note that the de Sitter static patch coordinates (τ static , r static , θ static , ϕ static ) can be recovered via the rescaling Pole patch We begin with the solutions with r c = ℓ and take the spacetime region of interest to be r ∈ [0, r]. As a consequence, the pole patches contain the worldline at r = 0 without any horizon. Imposing that the boundary has a constant trace of the extrinsic curvature K leads to By inverting this equation, we find that the physical size of the boundary can be written as a function of Kℓ as The parameter Kℓ can take any real value.The limit of large positive and large negative Kℓ corresponds to pushing the boundary to the origin and the cosmological horizon, respectively. 
Since the pole patches do not contain any horizon, the parameter β is free and can take any positive value.Hence, the pole patch exists for all values of β and K. Pole patch thermodynamics.We now evaluate (2.1) with α b.c.= 1 and D = 4 on the pole patch solution.The on-shell Euclidean action in terms of the boundary data reads, By taking the limit Kℓ → ∞, we find that , retrieving the result in flat space obtained in [25].This is expected as this limit corresponds to a boundary size that is parameterically small compared to the cosmological horizon. As the action depends linearly on β, one immediately finds that S conf = C K = 0 and that which is independent of β.Note that small fluctuations of the energy can be written as Curiously, the coefficient in front of δK is independent of ℓ.As in D = 3, we treat the pole patch of de Sitter as a reference configuration, and the on-shell action (4.7) as a subtraction term. Cosmic patch In this section, we consider a class of geometries (4.1) which contain the cosmological horizon.We call these cosmic patches of dS.The spacetime region of interest is taken to be r ∈ [r, e −ω r c ].For ℓ/ √ 3 < r c < ℓ, the full solution has a black hole horizon that lies outside the boundary, so it is not present in the cosmic patch.Similarly, for r c > ℓ, there is a naked timelike singularity at the origin in Lorentzian signature, which would be associated with the presence of negative energy.Again, since this region is not part of the cosmic patch, we also allow for r c > ℓ. Regularity of the geometry near the cosmological horizon fixes the conformal temperature of the cosmic patch to be which is always greater than zero.In this case, the conformal temperature β−1 does not have a lower bound.Specifically, for ℓ/ √ 3 < r c ≤ ℓ, the zero temperature limit can be reached by setting r c = ℓ and taking r tube /ℓ → 0. For r c > ℓ, there are also cosmic patches with zero conformal temperature.They have the boundary located closed to the naked singularity, that is to say r tube /ℓ → 0. The high conformal temperature limit β → 0 can be achieved in different ways, for instance, by taking the near horizon limits, r tube → r bh or r tube → r c . Setting the trace of the extrinsic curvature at the boundary to be constant leads to There is no upper or lower bound on Kℓ.The limit of large positive and large negative Kℓ correspond to taking the boundary to be near the cosmological horizon and the black hole horizon, respectively.In the case of r c ≥ ℓ, there is no black hole and so the large negative limit of Kℓ corresponds to taking the boundary to be near the origin. Unlike the D = 3 case, we could not find analytic expressions for r tube and r c in terms of the boundary data β and K, but we relegate some useful analytical expressions to appendix B. We later present examples of r tube and r c as functions of β at fixed Kℓ in figure 9, together with the black hole patch solutions. Cosmic patch thermodynamics.We start by computing the on-shell action (2.1) of the cosmic patch.We define a regulated action as the on-shell action of the cosmic patch subtracted by the pole patch action with the same β and K.By expressing it in terms of r bh and r tube , we find that where I (pole) E is given by (4.7). 
The conformal energy and the conformal entropy of the cosmic patch are The conformal entropy S conf agrees with the Gibbons-Hawking entropy of the cosmological horizon, A horizon /4G N .The specific heat at constant K of the cosmic patch is given by r tube ℓ 2 + 8r 2 tube ℓ 4 − 4r 4 tube ℓ 2 . ( It is interesting to remark that in the limit where the boundary approaches the cosmological horizon, the specific heat becomes This positive specific heat is to be contrasted with the negative specific heat that is obtained when Dirichlet boundary conditions are imposed on the cosmic patch [34,48].We will further discuss this fact when we consider the pure dS 4 solutions. Interestingly, the high conformal temperature limit of the specific heat (4.15) at finite Kℓ is given by This takes the form of the specific heat of a three-dimensional conformal field theory.Under this interpretation, the putative number of degrees of freedom goes as which is a monotonically decreasing function of Kℓ, displayed in figure 7. Further taking Kℓ → +∞ yields N d.o.f.→ 8π 3 /G N K 2 matching the Λ = 0 result in [25]. Finally, we find that the thermodynamic quantities satisfy a first-law type of relation where µ K , similarly to the three-dimensional case, is interpreted as the chemical potential associated to K, Note that the first term in µ K looks identical to the one that appears for the pole patch in D = 4, see (4.9). Black hole patch We now consider a class of geometries (4.1) which contain the black hole horizon.We refer to these as black hole patches of dS 4 .These solutions exist as long as the cosmological horizon radius takes values between the dS length and the Nariai radius, ℓ √ 3 < r c < ℓ.The spacetime region of interest is then given by r ∈ [e −ω r bh , r] where r bh is related to r c through (4.3).For convenience, in this section, we will always express r c in terms of r bh . Given that these regions are complementary to the cosmic patches (see figure 6) many results for these solutions are closely related to those in the cosmic patch, upon replacing r c → r bh .Here we point out the main differences between the two patches and relegate the explicit formulae to appendix C. It is easy to obtain the boundary data from the results of the cosmic patch.The inverse conformal temperature is the same as in (4.10), but with r c → r bh and an extra minus sign.The trace of the extrinsic curvature is minus the expression that appears in (4.11), again with r c → r bh .As opposed to the cosmic patch, in this case, the conformal temperature β−1 has a lower bound β−1 min = 2π, which occurs in the Nariai limit, by setting r tube = ℓ/ √ 3 and taking r bh → ℓ/ √ 3 from below. Below this conformal temperature, the black hole patch solution does not exist.For larger conformal temperatures, β−1 > β−1 min , there is a one-parameter family of black hole patches. Regarding the trace of the extrinsic curvature, the limit of Kℓ approaching negative infinity corresponds to pushing the boundary to be near the cosmological horizon.For Kℓ going to positive infinity, the boundary is pushed near the black hole horizon.The behaviour of Kℓ as a function of r c and r tube is exactly opposite to the cosmic patch since the normal vector points in the opposite direction. 
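The horizon structure used throughout this section — a black hole horizon r_bh and a cosmological horizon r_c that coincide at the Nariai radius ℓ/√3 — can be checked directly from the standard Schwarzschild-de Sitter blackening factor. The form f(r) = 1 − 2m/r − r²/ℓ² below is our assumption for that factor; the paper's µ-parameterisation in (4.1)–(4.3) is not reproduced in this excerpt.

```python
# Horizon structure of Schwarzschild-de Sitter in D = 4, assuming the
# standard blackening factor f(r) = 1 - 2m/r - r^2/ell^2.
import numpy as np

ell = 1.0

def horizons(m):
    # f(r) = 0  <=>  r^3 - ell^2 * r + 2 * m * ell^2 = 0
    roots = np.roots([1.0, 0.0, -ell**2, 2.0 * m * ell**2])
    real = roots.real[(np.abs(roots.imag) < 1e-9) & (roots.real > 0)]
    r_bh, r_c = np.sort(real)
    return r_bh, r_c

for m in (0.05, 0.10, 0.15):
    r_bh, r_c = horizons(m)
    # Vieta's formulas for the cubic give r_bh^2 + r_bh*r_c + r_c^2 = ell^2:
    print(f"m={m:.2f}  r_bh={r_bh:.3f}  r_c={r_c:.3f}  "
          f"r_bh^2 + r_bh*r_c + r_c^2 = {r_bh**2 + r_bh*r_c + r_c**2:.6f}")

# The horizons merge when 3 r^2 = ell^2, i.e. at the Nariai radius
# r_N = ell/sqrt(3), reached as m -> ell/(3*sqrt(3)):
print("Nariai radius:", ell / np.sqrt(3))
```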
Black hole patch thermodynamics.Similarly, thermodynamic quantities can also be obtained from those of the cosmic patch.In particular, the regulated action I (bh) E, reg is the same as in the cosmic patch, but with r c → r bh .The conformal energy is also the same as in (4.13), but with a minus sign in front of the first term and the replacement of r c → r bh .The entropy is now given by which agrees with the Bekenstein-Hawking entropy A horizon /4G N where the horizon in this formula now corresponds to the black hole horizon.The specific heat can also be obtained from (4.14), with the replacement r c → r bh .In particular, it is also positive as the tube approaches the black hole horizon.Explicit expressions and further interesting limits are shown in appendix C. Pure dS 4 patch As in the dS 3 case, we can recover the pure dS The conformal entropy is constant regardless of r tube and is given by the Gibbons-Hawking entropy of the cosmological horizon, as expected. It is interesting to note the behaviour of the specific heat at constant K. Close to the worldline, C K in (4.25) is negative.In fact, in the worldline limit, the specific heat converges to zero from below, C K → −4πr tube ℓ/G N .This is similar to what happens for the Dirichlet case [34,48] where We plot both specific heats in figure 8.A notable feature is that for the case of conformal boundary conditions, the specific heat diverges as r tube → r 0 , with r 0 given by6 For r tube > r 0 , we find that the specific heat is positive and approaches a constant C K → 2πℓ 2 /G N as we take the stretched horizon limit.The pure dS 4 patch with a conformal boundary sufficiently close to the de Sitter horizon is thus thermally stable.The conformal energy of the pure dS 4 is given by which is positive for all 0 < r tube < ℓ with a lowest value given by In terms of the boundary data, the minimum energy (4.30) is obtained precisely when Kℓ = 0.Both in the worldline and in the stretched horizon limit, the energy is divergent.In particular, 3r tube G N , in the worldline limit and , in the stretched horizon limit. As in the pure dS 3 discussion, one could consider a pole patch which, together with the cosmic patch of pure dS 4 , completes the full Euclidean static patch geometry, which is the four-sphere.This is achieved by considering a pole patch which has the same β but an opposite trace of the extrinsic curvature −K.As in D = 3, it is straightforward to check that at the saddle point level In the above, the Z are computed with the bare action, without subtracting the pole on-shell action. The expression parallels a similar observation for two-dimensional near Nariai geometries [64], and suggests that the thermodynamic content of the empty static patch is purely entropic.It would be interesting to understand the above expression at the one-loop level or higher. For the black hole patch of pure dS 4 , we find that, by setting r bh = 0 in (C.4) and (C.6), the thermodynamic quantities are trivial, E conf = S conf = C K = 0, as can be expected. Nariai patch In addition to the pure dS 4 solution, there is another interesting geometry that can be reached by tuning the conformal temperature β−1 to a particular value set by the trace of the extrinsic curvature Kℓ.We call these Nariai patches, which exist as the size of the cosmological and black horizon become close to each other. 
To find the Nariai temperature β−1 N , we consider the following limit.Let ρ ≡ 1 ϵ r tube − ℓ √ 3 be a dimensionless parameter describing deviation of r tube from the Nariai radius.The Nariai geometry is obtained by setting r c = ℓ √ 3 + ϵ and taking ϵ/ℓ → 0 while keeping ρ fixed.Let us first consider the Nariai solution from the cosmic patch perspective.The cosmological horizon and black hole horizons are, respectively, located at ρ = 1 and −1.Using (4.10) and (4.11), the Nariai limit fixes These equations can be inverted analytically to obtain βN in terms of Kℓ, We note that βN exists for all values of Kℓ ∈ R.This is yet another difference with the Dirichlet problem, where to obtain the Nariai solution one needs to fix the value of r tube to be very close to the Nariai radius. Nariai patch thermodynamics.Using (4.13), the conformal energy of the Nariai solution is given by The conformal entropy and specific heat, evaluated at constant K and β = βN , are respectively given by The specific heat C K becomes positive (negative) as one pushes the boundary to the cosmological (black hole) horizon.In particular, C K changes sign at ρ = ρ 0 where 7 We can also have black hole solutions in the Nariai patch.These are simply obtained by changing ρ → −ρ in the expressions above. The Dirichlet thermodynamics of configurations near the Nariai geometry, from the perspective of a dimensionally reduced theory, was studied in [64,65] where it was shown that the cosmic Nariai patch always has negative specific heat. 7This is given by the only solution to the cubic equation 4ρ 3 0 − 9ρ0 + 2 = 0 with 1 > ρ0 > −1, In this section, we discuss the thermodynamics of four-dimensional spacetime with Λ > 0 by combining the results from the pole patch, cosmic patch, and black hole patch solutions.In the G N → 0 limit, the partition function of the total system is generally given by a sum of all possible patches which have the same boundary data β and K, A pole patch solution exists for all β ∈ R + and Kℓ ∈ R. The omitted terms are additional contributions stemming from the co-existing cosmic/black hole patch solutions, i.e. e −I (cosmic) E, reg or e −I (black hole) . The number of these terms and the details of the solutions depend on the value of the boundary data, as we will discuss below.For patches with positive specific heat, the one with lowest regulated action is thermodynamically stable; otherwise, they are thermodynamically metastable.Patches with negative specific heat are thermodynamically unstable. As opposed to D = 3, solutions with horizons do not exist for all values of β and Kℓ.At low temperatures, only one pole patch solution exists.To separate the phase space regions with no horizon patches, we define the inverse conformal temperature β0 (Kℓ).Note that it depends on the value of Kℓ, so that at a given Kℓ, horizon patches only exist for conformal temperatures such that β ≤ β0 .The curve β0 (Kℓ) can be found numerically and is shown in figure 10.The rest of the phase diagram can be decribed as follows: • Exactly at β0 (Kℓ), there are two solutions: one pole patch and one horizon patch.The latter is a cosmic patch if Kℓ ≲ 0.405.Otherwise, it is a black hole patch.Note the transition happens at the Kℓ in which the Nariai patch specific heat changes sign.In both cases, the horizon patch has positive regulated action and, therefore, it is always sub-dominant. 
• For lower inverse temperatures, β̃ < β̃_0(Kℓ), we always find three solutions for any given β̃ and Kℓ. There is always one pole patch and two horizon patches. The horizon patches can be either cosmic or black hole patches, but the horizon patch with the larger horizon size always has positive specific heat, while the one with the smaller size has negative specific heat. This can be confirmed by observing that the large (small) horizon patch has a horizon radius which is an increasing (decreasing) function of the conformal temperature, as we show in figure 9. Moreover, if the horizon radius is larger (smaller) than the Nariai radius r_N = ℓ/√3, the corresponding horizon patch is a cosmic (black hole) patch. There exists a smooth transition between the black hole patch and the cosmic patch as one varies the conformal temperature.

• At a given critical conformal temperature that depends on the value of Kℓ, there is a first-order phase transition, similar to the Hawking-Page transition. We call this temperature β̃_c(Kℓ) and show it numerically in figure 10. For β̃ < β̃_c, the large horizon patch solution dominates over the pole patch, while the opposite happens for β̃ > β̃_c. Consequently, for β̃_0 > β̃ > β̃_c, the large horizon patch is metastable, while for β̃ < β̃_c, the large horizon patch becomes stable and the dominant configuration. If a small horizon patch exists, then it is always subdominant. We display various plots of the regulated action, conformal energy, and specific heat as a function of the inverse conformal temperature at fixed Kℓ in appendix D, see figure 12.

(Figure 10 caption: The number of different solutions co-existing at a given point in the phase diagram depends on whether the point lies above or below the β̃_0 curve (dot-dashed black curve). Above that curve, only one pole patch solution exists. Below the β̃_0 curve, apart from a pole patch solution, there co-exist two additional cosmic/black hole patches, one with negative and one with positive C_K. The curve of critical inverse conformal temperature β̃_c is shown in thick black; above it the pole patch is thermodynamically preferred. In the region bounded by the β̃_0 and β̃_c curves, shaded in green (yellow) halftone, the cosmic (black hole) patch is metastable. For β̃_c > β̃, the cosmic (black hole) patch is stable, with the associated region shaded in solid green (yellow). The dark green and purple curves represent pure dS4 and Nariai patches. Both curves are divided into three segments, stable, metastable, and unstable, shown as thick, dashed and dotted curves, respectively. The (meta)stable Nariai curve marks the separation of the (meta)stable cosmic patch and black hole patch regions.)

Pure dS4 phase structure. For the pure dS4 solution, we constrain the inverse conformal temperature to be given by the dS inverse temperature (4.22). We note that Kℓ ≈ 0.256 is the value at which the dS temperature coincides with the critical temperature (see below).
• For Kℓ < −7.202, the dS temperature lies in the intermediate temperature regime, implying that the corresponding pure dS4 patch is sub-dominant. We find that these pure dS4 patches have negative specific heat and are thus unstable. The worldline limit is included in this regime. At Kℓ ≈ −7.202, the dS temperature coincides with β̃_0^{-1}.

• For −7.202 < Kℓ < 0.256, the dS temperature remains in the intermediate regime, but now the associated specific heat becomes positive. Therefore, these pure dS4 patches are metastable. At Kℓ ≈ 0.256, the dS temperature coincides with the critical temperature, β̃_c^{-1}.

• For 0.256 < Kℓ, the dS temperature is higher than the critical temperature. As a consequence, the pure dS4 patch has regulated action lower than the pole patch. It also has positive specific heat. We therefore find that the pure dS4 patch, in this regime, is thermodynamically stable. We note that the stretched horizon limit is included in this case.

Nariai phase structure. To obtain the Nariai solution, we constrain the inverse conformal temperature to the Nariai inverse temperature (4.33).

• For Kℓ < 0.405, the Nariai temperature is higher than the temperature β̃_0^{-1}. In this regime, the Nariai solution has negative specific heat, so it is thermally unstable.

• For 0.405 < Kℓ < 2.239, the Nariai temperature lies in the intermediate temperature regime. The corresponding Nariai solution has positive specific heat and positive regulated action. This means that the Nariai solution, in this regime, is metastable.

• For 2.239 < Kℓ, the Nariai temperature is higher than the critical temperature. We find that the Nariai solution now becomes thermodynamically stable.

Linearised dynamics

So far our treatment has been largely based on a quasi-equilibrium Euclidean picture. The aim of our final section is to complement the Euclidean analysis with a Lorentzian analysis. Concretely, we will consider solutions to the four-dimensional linearised Einstein equations equipped with a positive cosmological constant Λ > 0. As boundary conditions, we will once again consider conformal boundary conditions for the induced metric g_mn and mean curvature K on a topologically R × S² timelike boundary Γ. Our treatment parallels that for Minkowski space [25], here extended to the case of Λ > 0. A portion of our linearised analysis was already treated in [52], in the context of the fluid-gravity correspondence applied to de Sitter horizons.
8 Basic setup We will consider the linearised Einstein equations about the static patch metric, The timelike boundary Γ is located at r = r ∈ (0, ℓ).As in [52], we are primarily interested in dynamical features of the cosmological horizon, but we also report on the dynamical features of the pole patch below.As such, the spacetime region of interest is taken to be the Lorentzian cosmological patch r ∈ (r, ℓ), t ∈ R, and θ ∈ (0, π), ϕ ∼ ϕ + 2π.The induced metric on Γ is given by Using an inward-pointing normal vector n = − f (r)∂ r , the extrinsic curvature and its trace are given by, We denote linearised perturbations about the background (5.1) as where the background metric ḡµν is given in (5.1).The equation of motion for h µν is obtained by expanding (2.5) to first order in ε.Further demanding that the conformal boundary data remains invariant under arbitrary perturbation h µν implies that where γ(x) is an arbitrary function, which will depend on the initial data of the linearised metric h µν , and ḡmn | r=r is the induced metric (5.2).By contracting the first expression in (5.5) with ḡmn , one may write the first boundary condition in a form that does not contain γ(x) as Using (2.2), the variation of the trace of the extrinsic curvature to first order in ϵ is given by where D n denotes the covariant derivative with respect to the boundary metric ḡmn .In the following analysis, we will take (5.6) and (5.7) as the conformal boundary conditions for linearised gravity with Λ > 0. We must also impose conformal boundary conditions on the space of allowed diffeomorphisms ξ µ .Finally, we require that ξ r | r=r = 0, such that the allowed diffeomorphisms do not move the location of the boundary. Kodama-Ishibashi method.Following the treatment of Kodama and Ishibashi [50,68], due to the spherical symmetry and time-translation invariance of the background, we can split our linearised solutions into vector and scalar perturbations, denoted by h µν and h µν respectively.Our details and conventions follow directly those in appendix C of [25].As such, our treatment will be brief in what follows and mostly focused on presenting the main results for Λ > 0. The SO(3) content of h (V ) µν is captured by the vectorial spherical harmonics, V i , which are transverse eigenfunctions of the unit two-sphere Laplacian acting on vectors, with eigenvalues k V = l(l + 1) − 1 for l = 1, 2, . . .The SO(3) content of h (S) µν is captured by the scalar spherical harmonics, S, which are transverse eigenfunctions of the unit two-sphere Laplacian with eigenvalues k S = l(l + 1) for l = 0, 1, 2, . . .We note that the l = 0 and l = 1 modes require a separate treatment and we discuss them in appendix E. Together, h (V ) µν and h (S) µν encode the two propagating degrees of freedom of the four-dimensional metric at the linearised level. In the absence of timelike boundaries, the Kodama-Ishibashi formalism is gauge invariant and reduces the linearised Einstein equations to a set of 'master equations' governing the vectorial and scalar master fields Φ (V ) and Φ (S) which are directly linked to h (V ) µν and h (S) µν .It proves convenient for our analysis, as it did in [25], to select a gauge where the linearised boundary conditions (5.6) and (5.7) act only on h (V ) µν and h (S) µν respectively.This gauge choice is indeed possible, and in this gauge the components of our metric perturbation read (5.8) In the above the indices m and n denote indices with respect to (t, r), while the index i denotes indices on the two-sphere. 
Vector perturbation

The master equation for Φ^(V), for given angular momentum l ∈ Z+, is given by where ∇² denotes the Laplacian on a two-dimensional de Sitter space with curvature +2/ℓ². The solutions can be expressed as hypergeometric functions (see for instance [66]). For a given frequency, they take the form The boundary condition (5.7) is automatically satisfied, while the boundary condition (5.6) imposes Upon scanning numerically for solutions in the complex frequency plane, we find that all vector modes satisfying the conformal boundary conditions have a negative imaginary part, and are therefore dissipative, decaying at late times.

Worldline limit. In the worldline limit, where Kℓ → −∞, we find two sets of modes. One set is found to be a small deformation of the quasinormal modes of the de Sitter static patch, whose analytic form (in the worldline limit) is given by [51] ω_qnm ℓ = −i(l + n + 1), where n ∈ N. As for the exact quasinormal modes, the modes we find are also purely negative imaginary and their size is of the order of the de Sitter length ℓ. The other set of modes also have a real part and are the counterpart of the vectorial Minkowski modes uncovered in [25]. For each l ≥ 2, the second set is a discrete tower of modes with increasing negative imaginary parts, and their size scales with the size of the worldtube r, rather than the de Sitter length.

Cosmological horizon limit. In the cosmological horizon limit, where Kℓ → +∞, the structure of the modes is altered. The purely imaginary modes degenerate into a set of modes that approach (5.12). These modes were identified in [52], where they were interpreted as the shear modes of a linearised incompressible non-relativistic fluid near the horizon, paralleling other considerations of the fluid/gravity relation [66], [69]-[72]. For each l ≥ 2, the second set is a discrete tower of modes with increasing negative imaginary parts, and their size scales with the de Sitter length.

Scalar perturbation

The master equation for Φ^(S) is given by The solutions can be expressed as hypergeometric functions, and take the form The boundary condition (5.6) is automatically satisfied, while the boundary condition (5.7) imposes where (5.16) Upon scanning numerically for solutions in the complex frequency plane, we find that some scalar modes satisfying the conformal boundary conditions have a positive imaginary part.

Worldline limit. In the worldline limit, where Kℓ → −∞, we find two sets of allowed frequencies. One set corresponds to a small deformation of the scalar quasinormal modes in the pure static patch, which are of the order of the de Sitter length scale ℓ. The deformed quasinormal mode frequencies have a negative imaginary part, and are hence dissipative. The other set of allowed frequencies is borrowed from the analogous modes in Minkowski space uncovered in [25] and is of the order of the worldtube size r ≪ ℓ. For this set of modes, we find a pair of allowed frequencies with positive imaginary part for each l.
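The mode searches described above are numerical: one evaluates the boundary-condition function on a grid in the complex ωℓ plane and looks for its zeros, much as in the density plots of Fig. 11. A minimal sketch of that workflow is given below; the boundary function used here is a deliberately toy placeholder, not the hypergeometric expression of the paper, so only the scanning logic is meant to carry over.

```python
import numpy as np

def boundary_function(omega_l, l, Kl):
    """Toy stand-in for the conformal-boundary-condition function F_l(Kl, omega*l).
    In the paper this is built from hypergeometric solutions of the master equation;
    here we only need *some* analytic function whose zeros we can locate."""
    return (omega_l + 1j * (l + 1)) * (omega_l - 1j) + 1.0 / Kl

def scan_modes(l, Kl, re_range=(-4.0, 4.0), im_range=(-4.0, 2.0), n=400):
    """Evaluate |F| on a grid in the complex omega*l plane and return its grid-local
    minima, which serve as starting guesses for a proper root finder."""
    re = np.linspace(re_range[0], re_range[1], n)
    im = np.linspace(im_range[0], im_range[1], n)
    RE, IM = np.meshgrid(re, im)
    W = RE + 1j * IM
    absF = np.abs(boundary_function(W, l, Kl))
    inner = absF[1:-1, 1:-1]
    # interior points smaller than all four neighbours
    is_min = ((inner < absF[:-2, 1:-1]) & (inner < absF[2:, 1:-1]) &
              (inner < absF[1:-1, :-2]) & (inner < absF[1:-1, 2:]))
    return W[1:-1, 1:-1][is_min]

if __name__ == "__main__":
    for w in scan_modes(l=2, Kl=40.0):
        print(f"candidate mode  omega*l ~ {complex(w):.4f}")
```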
Cosmological horizon limit.In the cosmological horizon limit, where Kℓ → +∞, the structure of the modes is altered.There is still a collection of fluid-type modes, but they take a relativistic dispersion relation.Upon expanding r near ℓ in (5.14), implementing the conformal boundary conditions, and scaling ω with 1/K, we find the analytic expansion (5.17) It is natural to interpret the above modes as the sound mode counterpart to the fluid dynamical shear modes in (5.12).It is worth noting, however, that they scale differently with Kℓ, such that in the strict horizon limit only the shear modes survive.This is one of the reasons the sound modes, whose speed of sound becomes infinite in this limit, does not appear in the previous analyses of [52,66].In addition to the sound modes, upon taking the strict horizon limit, Kℓ → +∞, of the scalar solutions (5.14) for each l, and implementing the conformal boundary conditions, any solutions with modes with positive imaginary frequency coalesce either to ωℓ = +i or ωℓ = −i.We show these in figure 11.One can identify these modes in a Rindler analysis, subject to conformal boundary conditions.Concretely, we take the Rindler metric to be with (t, x, y) ∈ R 3 and z ∈ (0, z 0 ].We then perform a straightforward analysis of the linearised Einstein equations, with vanishing Λ, subject to conformal boundary conditions at z = z 0 .One observes that the following configuration (chosen for simplicity to have spatial momentum entirely along the x-direction) solves the linearised Einstein equations with Λ = 0, subject to conformal boundary conditions at z = z 0 , for a selection of complex frequencies.In the limit kz 0 → 0 the allowed frequencies coalesce to ωz 0 = ±i.This mode coincides with the linearised de Sitter mode with ωℓ = ±i.Upon expressing the Rindler solution (5.19) in terms of a local inertial time coordinate, one notes that it grows at most polynomially.As such, although growing in time, the exponential growth of (5.19) is more tame than an ordinary unstable mode.In fact, to leading order at small kz 0 , the Rindler mode is locally a pure diffeomorphism. As shown in appendix F, the leading contribution to the de Sitter scalar modes (5.14) with ω = ±iℓ, in the stretched cosmological horizon limit, are locally pure gauge.This leads to a double suppression effect whereby the physical contribution of the modes is not only small due to the linearised nature of h µν , but also due to a suppression factor that goes as 1/K 2 ℓ 2 . Dynamics of the pole patch, briefly One can also consider linearised gravity subject to conformal boundary conditions in the pole patch.Here we further impose that the gravitational solutions are smooth throughout the whole interior.The pole patch has been explored as a potential candidate for static patch holography in [40,41,44,73,74] among other places. Worldline limit.For Kℓ → −∞ the size of the timelike boundary is small in units of the de Sitter length ℓ.Here our analysis matches the Minkowski analysis presented in [25], where it was observed that the allowed vector modes frequencies are all real-valued, whilst the scalar mode frequencies permit a subset of modes ω (S) with positive real imaginary frequency for each l.At large l, these modes were numerically found to scale as ω (S) r ≈ ±l + icl 1/3 , where c is an order one number.Thus, the pole patch in the thin world tube limit mimics the Minkowskian picture. 
Cosmological horizon limit.For Kℓ → +∞ the timelike boundary of the pole patch approaches the cosmological horizon.In this regime, the analysis differs from the thin worldline limit.Nonetheless, we observe the presence of scalar modes with complex frequencies of positive imaginary part for each l.These modes, similarly to the cosmic patch, coalesce onto ω pole ℓ = ±i.Moreover, we find two additional sets of modes.The first set is a pair of real valued scalar modes for each l.Numerically, they are found to linearly depend on l, with the following large l behaviour ωℓ ≈ ± 0.88l √ 2Kℓ .The second set is an infinite tower of real valued modes, for each l, which are numerically found to be evenly spaced.This set of modes appears in both the vector and scalar sector, and their large l behaviour is found to be ωℓ ≈ 3n log | √ 2Kℓ| , with n = 0, 1, 2 . . . We can synthesise, in short.We have provided evidence that, subjected to conformal boundary conditions, the stretched horizon limit of the pure de Sitter patch is a thermodynamically stable portion of spacetime containing a cosmological horizon.Dynamically, the majority of linearised gravitational perturbations about this portion of spacetime decay at late times, save one mode for each l of total angular momentum.These modes have a purely imaginary frequency ωℓ = +i, and are moreover retrieved from a Rindler analysis.We take this latter property as an indication that they are not endemic to de Sitter space, but rather a universal property of near horizon physics subjected to conformal boundary conditions.Their fate, whose behavior as measured by a local inertial clock is at most polynomial in time, is a remaining obstacle in obtaining a portion of spacetime with Λ > 0 that is both thermodynamically stable, as well as dynamically stable at the linearised level.Perhaps to tame this mode we must impose an additional boundary condition, always ensuring that in doing so we do not overly restrict any interesting dynamics.Or perhaps we must relax the condition of a constant K.A careful examination is left to the future. Regularity of the geometry near the black hole horizon determines the conformal temperature of the black hole patch to be which is greater than zero.The conformal temperature β−1 has a lower bound β−1 min = 2π, which occurs in the Nariai limit, by setting r tube = ℓ/ √ 3 and taking r bh → ℓ/ √ 3 from below. Below this conformal temperature, the black hole patch solution does not exist.For larger conformal temperatures, β−1 > β−1 min , there is a one-parameter family of black hole patches.To reach the high conformal temperature regime, β → 0, one can take the near horizons limit, i.e. either r tube → r bh or r tube → r c , or the small black hole limit r bh /ℓ → 0. Requiring that the boundary has a constant trace of the extrinsic curvature K fixes (C.2) There is no upper or lower bound on Kℓ.Similarly to the pole patch, the limit of Kℓ approaching negative infinity corresponds to pushing the boundary to be near the cosmological horizon.For Kℓ going to positive infinity, the boundary is pushed near the black hole horizon. Black hole patch thermodynamics.The regulated action is given by I The conformal entropy agrees with the Bekenstein-Hawking entropy A horizon /4G N where the horizon in the formula corresponds to the black hole horizon. 
Taking a small black hole limit, we find that E_conf reduces, as r_bh/ℓ → 0, to the expression in (C.5). Given that r_bh/2G_N is the physical mass of the small black hole, we can see that E_conf is indeed the energy as measured by the conformal clock defined on the boundary. The specific heat at constant K of the black hole patch is given by (C.7). The first limit corresponds to a small black hole and, in that case, the specific heat is negative. In the opposite limit, when the black hole size is comparable to the boundary size, the specific heat is positive.

D. Plots of the regulated action, E_conf, and C_K in D = 4

In this appendix, we give numerical examples of the regulated action, conformal energy, and specific heat for dS4 conformal thermodynamics as functions of β for various values of Kℓ. These are displayed in figure 12 for Kℓ = −7.5, −1.5, and 6.0. These values of Kℓ are chosen to show pure dS4 patches which are unstable, metastable, and stable, respectively.

E. l = 0 and l = 1 modes

In this appendix, we consider the linearised dynamics of gravitational l = 0 and l = 1 modes. Following the analysis in [25], these modes are locally pure diffeomorphisms that become physical through the presence of the timelike boundary with fixed boundary data. In particular, we are interested in linearised perturbations h_µν of the form, where K_mn and ḡ_mn are the extrinsic curvature (5.3) and induced metric (5.2) of Γ, respectively. The covariant derivative D_m is that associated to the induced metric ḡ_mn. At the linearised level, the perturbation (E.1) is also subject to a gauge redundancy in the form of the diffeomorphism for an arbitrary vector field ξ'_µ. Due to the presence of the boundary, the vector field ξ'_µ must preserve the boundary data and the location of the boundary, leading to (E.2) and ξ'^r|_{r=r} = 0, respectively. This means that a large number of the perturbations (E.1) obeying (E.2) can be gauged away by some suitable diffeomorphism (E.3). The exception is when the perturbation (E.1) is constructed from a vector field ξ_µ which disturbs the location of the boundary, i.e. one satisfying the condition (E.4).
We therefore take (E.2) and (E.4) as boundary conditions for a physical metric perturbation (E.1).

l = 0 modes. Choosing the h_tr = 0 gauge, the general spherically symmetric (l = 0) vector field ξ_µ satisfying (E.2) and (E.4) is given by where the frequency ω^(l=0)_± r = ±i √(2 − r²/ℓ²) is purely imaginary. In the worldline limit, where r/ℓ → 0, we match the result for the l = 0 modes found in [25]. In the stretched horizon limit, where r/ℓ → 1, we find that this pair of modes coalesces to ωℓ = ±i.

l = 1 modes. For the corresponding l = 1 vector field (E.6), the frequency ω^(l=1)_± ℓ = ±i is purely imaginary and r-independent. Unlike the l = 0 modes, the metric perturbation constructed from (E.6) vanishes everywhere, implying that (E.6) is a Killing vector of the background dS4. Near the worldline, where r/ℓ → 0, these modes reproduce the three translations and three Lorentz boosts of flat spacetime. In the stretched horizon limit, where r/ℓ → 1, we find that these modes become combinations of translations and Lorentz boosts in a local inertial frame near the boundary. This means that, in the ϵ → 0 limit, the diffeomorphism (F.1) preserves the conformal boundary data but not the location of the boundary. In particular, in the local inertial frame, these modes become angle-dependent radial/time translations.
Fig. 1: Penrose diagram of dS space, with a timelike boundary at r = r. On the left static patch, the shaded region corresponds to the pole patch, while on the right, it corresponds to the cosmic patch.

Fig. 3: The quantity c_conf as a function of Kℓ. This central charge decreases monotonically as a function of Kℓ. As a comparison, we include the plot of the analogous c^BTZ_conf for the BTZ black hole with conformal boundary conditions. The dS and AdS radii are chosen to be equal. In the limit where Kℓ → ∞ both central charges coincide, as the tube in both cases gets very close to the horizon and both exhibit Rindler behaviour. In dashed lines, we show the position of the conformal boundary in the AdS case, where we recover the Brown-Henneaux central charge.

… (3.20). By using (3.10) with r_c = ℓ, we can re-express this in terms of the physical size of the boundary r_tube as β_dS(r_tube) = 2π √(ℓ²/r_tube² − 1).

Fig. 4: The regulated on-shell action for the cosmic patch solution, as a function of the boundary data β. The dashed vertical line indicates the critical inverse temperature β_c = 2π.

Fig. 5: Phase diagram of conformal dS3 thermodynamics. At each point of the diagram, a pole and a cosmic patch solution co-exist. There is a critical conformal inverse temperature at β_c = 2π, marked in black. For β < β_c (shaded in green), the cosmic patch is the most favourable configuration. For β > β_c (in white), the cosmic patch is metastable. The darker green curve shows pure stable (solid) and metastable (dashed) dS3 solutions, which follow relation (3.20). Worldline and stretched horizon limits of pure dS3 are further indicated.

…, r_static = e^ω r, θ_static = θ, ϕ_static = ϕ. (4.4) We now study these solutions on a four-manifold with an S¹ × S² boundary subject to the conformal boundary data (2.6). Solving the Einstein equation in terms of boundary data, we obtain multiple expressions for e^{2ω}. The exact expressions in the different ranges of parameters are provided in appendix B. There are three different classes of solutions, denoted the pole patch, the black hole patch, and the cosmic patch. Examples of them are displayed in figure 6.
Fig. 6: Penrose diagram of the Schwarzschild-de Sitter spacetime. The boundary is given by r = r. The shaded blue area corresponds to a cosmic patch, while the yellow one to a black hole patch. The pole and the pure dS4 patches can be obtained when r_c = ℓ.

Fig. 7: The number of degrees of freedom N_d.o.f. as a function of Kℓ. The number of degrees of freedom decreases monotonically as a function of Kℓ.

… (4.21) solution for a particular family of conformal temperatures. Using the boundary data of the cosmic patch (4.10) and (4.11), the conditions for having pure dS4 are β = β_dS ≡ 2π √(ℓ² − r_tube²)/r_tube. From these, it follows that r_c = ℓ. One can further solve for β_dS in terms of Kℓ, β_dS = (π/2)(√(K²ℓ² + 8) − Kℓ). (4.22) Consider now the worldline limit r_tube → 0.
The standard dS temperature is recovered when β_dS r_tube → 2πℓ as r_tube → 0. (4.23) The stretched horizon limit corresponds to the high conformal temperature limit, in which β_dS → 0 as r_tube → ℓ. (4.24) Now we can use (4.13) and (4.14) to calculate the thermodynamic properties of pure dS4. The conformal entropy and the specific heat at constant K of the pure dS4 are given by

Fig. 8: A plot of the specific heat of the pure de Sitter patch for conformal (green) and Dirichlet (yellow) boundary conditions. For the Dirichlet case, the specific heat is never positive.

Fig. 9: Plots of r_horizon/ℓ and r_tube/ℓ as a function of β at fixed Kℓ. The solid curves correspond to cosmic patches, while the dashed ones to black hole patches. r_horizon denotes the radius of the black hole (cosmological) horizon when considering the black hole (cosmic) patch. The two horizontal black dashed lines correspond to the dS4 patch (upper line) and Nariai patch (lower line).

Fig. 10: Phase diagram of conformal dS4 thermodynamics for static and spherically symmetric configurations. The number of different solutions co-existing at a given point in the phase diagram depends on whether the point lies above or below the β_0 curve (dot-dashed black curve). Above that curve, only one pole patch solution exists. Below the β_0 curve, apart from a pole patch solution, there co-exist two additional cosmic/black hole patches, one with negative and one with positive C_K. The curve of critical inverse conformal temperature β_c is shown in thick black, above which the pole patch is thermodynamically preferred. In the region bounded by the β_0 and β_c curves, shaded in green (yellow) halftone, the cosmic (black hole) patch is metastable. For β_c > β, the cosmic (black hole) patch is stable, with the associated region shaded in solid green (yellow). The dark green and purple curves represent pure dS4 and Nariai patches. Both curves are divided into three segments: stable, metastable, and unstable, which are shown as thick, dashed and dotted curves, respectively. The (meta)stable Nariai curve marks the separation of the (meta)stable cosmic patch and black hole patch regions.

Fig. 11 (panels (a) l = 2 and (b) l = 10): Density plot of the absolute value of log e^{−4ωℓi} F_l(Kℓ, ωℓ) in the complex ωℓ plane for l = 2 and l = 10, where F_l(Kℓ, ωℓ) is defined in (5.15). In both plots, Kℓ is fixed to be 40. Both the ωℓ ≈ ±i modes and the sound modes ω_sound ℓ are displayed. For l = 10, the sound modes develop a small negative imaginary part.

… for an arbitrary vector field ξ_µ. The perturbation (E.1) automatically satisfies the linearised Einstein field equation. The conditions that this perturbation preserves the conformal boundary data at r = r lead to

(…)_mn − (K/3) ḡ_mn √(1 − r²/ℓ²) ξ^r + D_m ξ_n + D_n ξ_m − (2/3) ḡ_mn D_p ξ^p |_{r=r} = 0 ,
√(1 − r²/ℓ²) ∂_r K − D_m D^m (√(1 − r²/ℓ²) ξ^r) + ξ^m D_m K |_{r=r} = 0 . (E.2)

Fig. 12: Plots of the regulated action, E_conf, and C_K as a function of β at fixed Kℓ. When evaluated on the cosmic (black hole) patch, the curve colour is green (yellow). The pure dS4 and Nariai solutions are marked in dark green and purple dots, respectively. For the plots of C_K, we also display the conformal answer N_d.o.f./β² as a black dashed curve.
21,641.2
2024-02-06T00:00:00.000
[ "Physics" ]
Generalized Rational Variable Projection With Application in ECG Compression

Péter Kovács, Sándor Fridli, and Ferenc Schipp

Abstract-In this paper we develop an adaptive transform-domain technique based on rational function systems. It is of general importance in several areas of signal theory, including filter design, transfer function approximation, system identification, control theory, etc. The construction of the proposed method is discussed in the framework of a general mathematical model called variable projection. First we generalize this method by adding dimension type free parameters. Then we deal with the optimization problem of the free parameters. To this end, based on the well-known particle swarm optimization (PSO) algorithm, we develop the multi-dimensional hyperbolic PSO algorithm. It is designed especially for the rational transforms in question. As a result, the system along with its dimension is dynamically optimized during the process. The main motivation was to increase the adaptivity while keeping the computational complexity manageable. We note that the proposed method is of general nature. As a case study the problem of electrocardiogram (ECG) signal compression is discussed. By means of comparison tests performed on the PhysioNet MIT-BIH Arrhythmia database we demonstrate that our method outperforms other transformation techniques.

Index Terms-Variable projection, Model selection, Separable nonlinear least squares, Nonlinear regression, Rational functions, Particle swarm optimization, ECG compression.

I.
INTRODUCTION

Analysis of signals by means of mathematical transformations has proved to be an effective method in various respects. For instance, dimensionality reduction methods are strongly related to the problems of compression and noise suppression of the original signal. Moreover, the transform can also be used for extracting features in classification tasks. Many of these transform-domain techniques are generated by fixed basic functions, like the trigonometric functions in the Fourier transform, Walsh functions in the Walsh-Fourier transform, the mother wavelet function for the wavelet transform, etc. In order to surpass the limitations in compression ratio, reconstruction error, adaptivity and computational complexity of these algorithms, dictionary based methods such as matching and basis pursuit were proposed by many authors, see e.g. [1], [2]. In that case, an overcomplete set of base functions was applied to increase the adaptivity. Also, the wavelet packet transform (WPT), which utilizes different tilings of the time-frequency plane, was introduced by Coifman et al. [3]. Although these led to improved adaptivity, the performance remained limited because of the lack of free parameters.

There is a trade-off between overfitting and underfitting, which is controlled by the number of parameters, i.e., the model order. Information theory provides many order selection rules, which quantify the optimized models according to the Bayesian information criterion (BIC), the Akaike information criterion (AIC), etc. In this case, the BIC and/or the AIC of the candidate models are precalculated for various parameter setups, and then the best among them is selected according to the given criterion [4]. Basis pursuit can also be applied to make an initial assumption on the number of basic functions, which is followed by an additional optimization step to find the best nonlinear parameters.

The orthogonal systems of rational functions play a distinguished role and have proved to be very effective in several areas and applications. Although their properties, like simplicity and the high variety of such systems, make them promising candidates for adaptive transform-domain techniques, there are only few examples of their application in signal processing. Moreover, the methods used in those examples do not employ the capacity of such systems completely.
In this paper we systematically develop a new method that fully utilizes the versatility of rational systems. To this end, we start from the well-known variable projection functional [5] and generalize it by introducing a new cost function. In the generalized variable projection method we integrate system dimensionality into the set of free parameters. In contrast to previous approaches, we jointly optimize both the accuracy and the complexity of the model. By means of rational systems we demonstrate that the increased computational demand is manageable, and that the generalized method is efficient. Toward this we set up a new architecture space that supports the structure of the parameter space (number of poles, multiplicities) of the rational functions. We develop the hyperbolic version, based on the Poincaré model, of the stochastic Multi-Dimensional Particle Swarm Optimization (MDPSO) [6], [7] for the nonlinear optimization problem of system parameters. Finally, ECG compression, with the analysis of the parameters and comparisons with state-of-the-art methods, is taken as a case study. The comparison tests show that our method outperforms other transformation algorithms.

Regarding the application, the main advantage of the proposed method is its adaptivity. State-of-the-art ECG processing methods usually fix the function system a priori based on the shape similarity between the ECGs and the basic functions. For this reason, Hermite functions, B-splines, and wavelets became popular in this field. Although the shape of the basic functions correlates very well with the normal ECG morphology, it is difficult to represent abnormal beat classes. The proposed method automatically scales up the number of free parameters in order to approximate abnormal beats with an acceptable level of error.

The paper is organized as follows. Section II contains the projection methods and the corresponding function systems. In Section III we develop the generalized variable projection model. In order to solve the corresponding optimization problem we extend the basic and multi-dimensional PSO algorithms using the Poincaré model of the hyperbolic geometry in Sections IV-V. In Section VI we apply our technique to the construction of an ECG compression method using optimized rational functions. In Section VII a comparative study of different adaptive transform-domain based techniques evaluated on the MIT-BIH arrhythmia database [8] is provided. The discussion of the results can be found in Section VIII. Finally, Section IX is a summary of conclusions and future plans.

II. PROJECTION METHODS

This section serves as background and framework, along with examples, for the construction of the generalized variable projection method presented in Section III.

A.
Non-Variable Projection Method Let us start with the classical non-variable projection method.By projections we will always mean orthogonal projection, which is an important transform-domain technique.It is strongly related to approximation theory in Hilbert spaces.Namely, the space of the appropriate signals is usually considered to be a Hilbert space H with scalar product •, • .Then, a signal f ∈ H is modeled by its orthogonal projection onto a closed subspace S ⊂ H.In practical applications S is a finite N ∈ N + dimensional subspace spanned by a linearly independent function system Θ := { Θ j ∈ H : 0 ≤ j < N }.It is well-known that for any signal f ∈ H the best approximation f ∈ S uniquely exists: is the usual one induced by the scalar product.The orthogonal projection from H onto S, which is a continuous linear operator, will be denoted by P N Θ .Then, a signal f ∈ H is represented in S as follows: where the coefficients c j ∈ R (j = 0, . . ., N − 1) are the solution of the system of linear equations Gc = b.In this G is the identity matrix and c i = f, Θ i is the ith Fourier coefficient of f .A typical example for the space of appropriate real valued signals is H = L 2 w (Ω) with a positive weight function w.Ω is usually an interval, bounded or unbounded, for analog signals and a proper countable set of real numbers for discrete-time signals.Various types of basic functions have been used so far depending on the particular problem.For instance, trigonometric functions are applied in MP3 coding, while wavelets in JPEG 2000 standards.In these cases the shapes of the basic functions are fixed, they cannot be adjusted to the individual signal.This limitation proved to be important, especially in dynamically changing environments, where we need to adjust the system to the signal.For instance, in case of ECGs, normal beats usually dominate the signal, and even the non-adaptive techniques work well on them.On the other hand, abnormal beats can have rather complex waveforms for which those techniques provide poor results.These phenomena raise the need for more sophisticated models, in which the basic functions can be adapted to the shape of the signal due to their free parameters. B. Variable Projection Method The theory of variable projection methods was laid down by Golub and Pereyra in [5].They provided formulas for theoretical and numerical derivation for the related Gauss-Newton type algorithms.Since then these methods have found applications in many areas including neural networks, telecommunications, dynamical systems, etc.A good summary on them are given in [9], [10].We point out that several well-known transformations, e.g.B-splines, orthogonal polynomials, can be understood as variable projections.As a consequence the related results can be interpreted in a unified framework, which has avoided the attention so far.The above-mentioned special models are discussed in the next subsection. 
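Before turning to the adaptive case, a minimal numerical illustration of the fixed projection of Section II-A may help: the coefficients in Eq. (2) are obtained by solving the normal equations Gc = b. The sampled cosine system below is an arbitrary stand-in for a fixed (non-adaptive) basis and is not one of the systems used later in the paper.

```python
import numpy as np

def fixed_basis(M, N):
    """An arbitrary fixed system: the first N sampled cosines on M points."""
    t = np.linspace(0, np.pi, M)
    return np.stack([np.cos(j * t) for j in range(N)], axis=1)   # shape (M, N)

def project(f, Theta):
    """Orthogonal projection of f onto span(Theta) via the normal equations G c = b."""
    G = Theta.T @ Theta            # Gram matrix  <Theta_i, Theta_j>
    b = Theta.T @ f                # b_i = <f, Theta_i>
    c = np.linalg.solve(G, b)      # coefficients of the best approximation
    return Theta @ c, c

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M, N = 256, 8
    Theta = fixed_basis(M, N)
    f = np.cos(3 * np.linspace(0, np.pi, M)) + 0.1 * rng.standard_normal(M)
    f_hat, c = project(f, Theta)
    print("residual norm:", np.linalg.norm(f - f_hat))
```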
Let us now suppose that instead of a fixed function system we have a collection of systems { Θ(a) : a ∈ Γ 1 } where Γ 1 is an index set.We will assume that for every a ∈ Γ 1 the function system Θ(a) is of the form Θ(a) := { Θ j (•, a) ∈ H : 0 ≤ j < N }, where N ∈ N is fixed and the Θ j (•, a)'s are linearly independent.Then the general nonlinear model is where the c j 's are real or complex coefficients.In order to get the best model for a particular analog signal f ∈ H, we need to find the optimal values for c and a, i.e., to minimize the nonlinear functional Let us denote the a dependent Gram matrix by G(a).For a fixed parameter a the optimalization with respect to c is linear and the result is the orthogonal projection denoted by P N Θ(a) f .We note that c(a) can be calculated by solving G(a)c(a) = b(a). Using this fact, the original problem in Eq. ( 3) can be reduced to minimize which following Golub and Pereyra [5] is called variable pojection functional. In applications, we work with discrete signals f ∈ R M and with matrices Θ(a) ∈ R M ×N representing discrete function systems.The formulas developed above for the analog case can easily be adjusted to obtain their discrete versions.Namely, the coefficient vector c(a) ∈ R N can be calculated via c(a) = Θ(a) + f , where Θ(a) + is the Moore-Penrose generalized inverse of Θ(a).Moreover, the discrete variant of the orthogonal variable projection operator is P N Θ(a) = Θ(a)Θ(a) + .Then the discrete variable projection functional is of the form where • 2 stands for the usual Euclidean norm in R M .In [5], Golub and Pereyra showed that the functionals r(c, a) and r 2 (a) have the same global minima in both the analog and the discrete versions.Furthermore, they demonstrated the fact that iterative nonlinear algorithms converge faster on the reduced r 2 (a) problem. C. Examples for Variable Projections Now we show examples that can be understood as special variable projections.Consequently the optimization problems involving them can be reduced to the form given in Eq. (5).Although the theoretical framework is the same in the discussed cases the corresponding algorithms can be different in several respects, such as efficiency, complexity.We demonstrate the pros and cons using the example of ECG compression. 1) B-Splines: We consider the optimization of B-splines B j, (•, a) (n ∈ N + , j = 0, . . ., n − 1, a ∈ R n ) of degree ∈ N, which are defined by where Let us take the B-splines with fixed degree .Then the base functions in the variable projection model are Θ j (t, a) = B j, (t, a), and the free parameter is the vector of free knots a = (a 0 , a 1 , . . ., a n−1 ) T .The problem is to find the optimal locations of the knots.Unfortunately, the reduced functional r 2 (a) has several stationary points in this case, which makes the optimization quite a hard task.This phenomenon, called lethargy problem, was thoroughly studied by Jupp in [11].We note that the B-spline model with free knots was adapted to ECG data compression tasks by Karczewicz and Gabbouj in [12].Here, the optimization process is an iterative method, which removes the least significant knot at each step.Because of its high flexibility both the approximation and compression properties of the spline algorithm are comparable with the recent methods (see e.g.Table IV).On the other hand, the computational cost is high due to lack of orthogonality. 
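For a concrete feel of Eq. (5), the sketch below evaluates the discrete reduced functional r2(a) = ||f − Θ(a)Θ(a)⁺f||² with the Moore-Penrose pseudoinverse and minimizes it over a single nonlinear parameter. The Gaussian-bump system used for Θ(a) is only a stand-in chosen so the example is self-contained; it is not one of the systems discussed in this section.

```python
import numpy as np
from scipy.optimize import minimize_scalar

t = np.linspace(0, 1, 200)
f = np.exp(-((t - 0.3) / 0.05) ** 2) + 0.5 * np.exp(-((t - 0.7) / 0.05) ** 2)

def system_matrix(a, centers=(0.25, 0.5, 0.75)):
    """Stand-in parametrized system Theta(a): Gaussian bumps with common width a."""
    return np.stack([np.exp(-((t - c) / a) ** 2) for c in centers], axis=1)

def r2(a):
    """Discrete variable projection functional  ||f - Theta(a) Theta(a)^+ f||_2^2."""
    Theta = system_matrix(a)
    proj = Theta @ (np.linalg.pinv(Theta) @ f)
    return np.sum((f - proj) ** 2)

res = minimize_scalar(r2, bounds=(0.01, 0.5), method="bounded")
a_opt = res.x
c_opt = np.linalg.pinv(system_matrix(a_opt)) @ f      # linear coefficients c(a)
print(f"optimal width a = {a_opt:.4f}, residual = {res.fun:.4e}")
```

Once the optimal nonlinear parameter is found, the linear coefficients c(a) follow from a single pseudoinverse application, which is precisely the reduction exploited by the Golub-Pereyra approach.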
2) Hermite Orthogonal Polynomials: Orthogonal polynomials are widely used in signal processing.In particular, Hermite polynomials have found many applications in ECG compression [13], [14], classification [15], [16], feature analysis [17] and QRS detection [18].Classical Hermite polynomials are defined by the following recursion: where H 0 (t) = 1 and H 1 (t) = 2t.The so-called Hermite functions are constructed from them by dilation: These functions form a complete orthonormal system in L 2 (R). In the Hermite type model of ECG signals, three individual variable projection methods are combined.Each of them is based on a dilated Hermite system Θ j (t, a i ) := ϕ j (t, a i ) (0 ≤ i < 3) and is used to represent the corresponding segments P, QRS, T of the heart beat.The best value of each dilation parameter a i is determined via optimization.It is worth mentioning that in a recent paper [14] on discrete Hermite functions a significant improvement is presented in terms of compression of the QRS complex.In Section VII we enhanced the original algorithm in [13] by combining it with [14] and used it in our comparative study.For the theory of orthogonal polynomials and numerical algorithms we refer to the fundamental books [19], [20]. 3) Wavelets: The discrete wavelet transform (DWT) is one of the most popular transforms in signal processing [21], [22].The construction of wavelet transforms (WT) is based on a so-called scaling function φ for which the translates {φ(t − k)} k∈Z form an unconditional orthonormal basis in the initial subspace V 0 ⊂ L 2 (R).V 0 is supposed to be invariant with respect to integer translations.It generates the multiresolution Then, the original signal is decomposed according to the subspaces V i and W i , where W i is the orthogonal complement of V i with respect to V i+1 .The corresponding bases in V i and W i are respectively where ψ is the so-called mother wavelet induced by φ. In practice we deal with discrete wavelet transform for finite signals, which can be viewed as periodic signals.There the scaling function φ and the mother wavelet ψ are completely characterized by a compactly supported low-pass filter h as follows: where g k = (−1) k h L−1−k is a high-pass filter and L is the filter length.Although the constraints of orthogonality restrict the values of the filter's coefficients, we still have L 2 − 1 degreeof-freedom to choose h k (see e.g.Sect.5.9 in [23]).We note that, for ECG modeling [24], [25] the filter dimension is usually L = 6.Then there are two free parameters a 1 and a 2 which determine the filter's coefficients h k [23]: Now we insert the DWT into our framework by defining the vector index j = (i, k).Then, for a fixed L = 6 dimension we have Θ j (t, a) := ψ j (t, a), where a = (a 1 , a 2 ) T .4) Rational Functions: In our last example we consider rational function systems, which can be viewed as the generalization of polynomial systems [26].To this order let C stand for the set of complex numbers, for the unit circle (or torus).For a sequence a = {a n ∈ D} n∈N the elements of the corresponding orthogonal system, which is called Malmquist-Takenaka (MT) system [27], [28], can be given in an explicit form as follows: where B(z, a) is the so-called Blaschke function defined by The parameter a is called inverse pole, where 1/a is the pole in the usual sense. For the finite-dimensional version let a = (a 0 , . . ., a n−1 ) T ∈ D n be a vector of distinct inverse poles with multiplicities m = (m 0 , . . 
., m n−1 ) T ∈ N n + .Then we will consider the MT system that corresponds to the inverse pole vector: where We note that although the MT system itself depends on the order of the inverse poles, the generated subspace and so the projection is invariant with respect to it.In this setting a signal f belonging to the Hardy space H 2 (D) is modeled by taking Θ j (t, b) = Φ j (e it , b) in Eq. ( 1) with which are called the MT -Fourier coefficients.Applying appropriate discretization algorithms such as those in [29], [30], the integral above can be substituted by finite sums (see Theorem 2. in [31]).We note that this model was effectively used in QRS modeling [32], system identification [33], EEG seizure classification [34]- [36], sleep stage classification [37], etc. III. GENERALIZED VARIABLE PROJECTION METHOD Let us start with the variable projection model and the corresponding functional given in Eq. ( 4).We note that in this model the dimension N of the subspace is a priori fixed.Our aim is to develop a generalization by dropping this constraint, i.e., adding a new free parameter related to the dimension of the subspace.Toward the definition of the generalized variable projection method we define the new index set as In simple cases Γ 2 = N represents the dimension itself.This is the situation in Section II-C on Hermite functions.In that case increasing N results in nested subspaces and so in a better minimum of the nonlinear functional Eq. ( 4).Consequently the optimization would terminate at the highest possible value of the dimension.On the other hand, high dimensions are not desired in real applications because it increases the complexity of the model.For controlling the dimension we introduce a penalty function Λ(N ) that is monotonically increasing.For the rest of the paper we always assume that f is of zero mean with unit variance.Then the generalized variable projection functional including the penalty term is defined as follows: ) There are however more complex cases when the parameters in Γ 2 are not simply dimensions, and the subspaces are not embedded into each other.In order to address this problem we modify the definition above to obtain the final form for the generalized variable projection functional (10) where Θ(a, d) := { Θ j (•, a, d) ∈ H : 0 ≤ j < N(d) } and Λ(d) increasing in d measures the complexity of the corresponding system in some sense.It may depend not only on the system but also on the specific task. It is easy to see that the variable projection functional can be considered as a special case in which Γ 2 has exactly one element.We note that the main advantage of the generalized method is the simultaneous optimization with respect to the system and the dimension parameters.On the other hand, it makes sense only if all of the following conditions hold for the system: 1) it is flexible enough but easy to parametrize; 2) the complexity function is properly designed; 3) an efficient optimization can be constructed.(11) We want to underline the fact that the rational functions are satisfy all these conditions.We will demonstrate the feasibility of rational function systems for generalized variable projection via signal compression problems.One of the key questions is to find an effective optimization algorithm.A generalized PSO type algorithm will serve our purpose.In the next section we establish the construction of it by starting from the basic version. A. 
Basic PSO Algorithm The basic PSO algorithm was introduced by Eberhart and Kennedy [38] as a population based stochastic optimization technique.In case of n dimensional search space, the method is initialized by a random population where S ∈ N + denotes the size of the swarm and every x k is a potential solution for the optimization problem.The x k 's correspond to inverse pole configurations in our problem.For every x k ∈ R n let y k ∈ R n denote the personal and let y ∈ R n denote the global best solutions achieved so far.In each step, both the position and the velocity of the particles are updated in the following way: where the learning factors c 1 , c 2 are predefined constants and r 1 , r 2 ∈ (0, 1) are uniformly distributed random numbers.The inertia weight w was introduced later [39] in order to control the overall behavior of the swarm.For instance, one can favor exploration in the first few steps by increasing the value of w .Arbitrary large jumps are usually inhibited in the search space. To this end, the velocities and the positions are restricted to a certain interval defined by the parameters, V max , X min , X max .We note that following the standard we use this algorithm by setting c 1 := 1.5, c 2 := 2.Moreover, w is linearly decreasing from 0.8 to 0.2.For other strategies of the parameter selection and convergence analysis we refer to [40], [41]. B. Hyperbolic PSO Algorithm for Single-Pole Problems In this section we develop the hyperbolic variant, inspired by single-pole rational optimization, of the PSO method.In this case the particles contain only two coordinates, i.e., the real and imaginary parts of the inverse pole.If the algorithm terminates in the kth optimal particle, then the optimal inverse pole is x k,1 + ix k,2 .Furthermore, as we know from Section II-C4, the inverse poles of the MT system must belong to the open unit disc D. Hence, the search space is D. This implies the idea to use the Poincaré model of the hyperbolic geometry to keep the particles within the search space.According to this idea, we will replace the arithmetic operators in Eq. ( 12) by their hyperbolic variants.We note that in this way the constrained optimization problem converts to non-constrained one.Moreover, by using proper mappings it can be applied to regions more general than the unit disc.[42] serves as a general reference work in this section. 1) Hyperbolic Scaling: Using the terminology of Euclidean geometry, the vector scalar multiplication of the hyperbolic space can be defined in a similar way.Namely, it means the scaling of a hyperbolic vector by keeping its direction.In this case, the geodesics of this space are represented by arcs of circles that are orthogonal to the torus.We recall the definition of the hyperbolic metric for which (D, ρ) is a complete metric space.This metric space is invariant with respect to the Blaschke transforms B(t, a) := B(t, a), where a := (a, ) ∈ D × T .We will use the fact that the hyperbolic segments can be defined via B(t, a), which maps the interval [0, p] onto the hyperbolic segment connecting w 1 , and w 2 , where Now the hyperbolic vector − −− → w 1 w2 can be defined as a directed segment with B(0, a) = w 1 and B(p, a) = w 2 .Let us consider the scaling of a hyperbolic vector − −− → w 1w2 by the factor λ ∈ R. The new endpoint w λ of the solution vector − −− → w 1 w λ is In summary, the hyperbolic scaling λ − −− → w 1 w2 := − −− → w 1 w λ can be evaluated with w λ = B(s λ , a) as Eqs.( 13)-( 14) for any λ ∈ R. 
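A compact way to realise the disc operations used by HPSO is the gyrovector (Möbius) form of addition and scalar multiplication on the Poincaré disc. The closed formulas below are a standard equivalent of the Blaschke-based construction in Eqs. (13)-(14) and of the ⊕ operation introduced in the next subsection; they are offered only as an illustrative assumption, not as the exact expressions of the paper.

```python
import numpy as np

def mobius_add(w1, w2):
    """Hyperbolic (Mobius) addition on the Poincare disc: w1 (+) w2."""
    return (w1 + w2) / (1 + np.conj(w1) * w2)

def mobius_scale(lam, w):
    """Hyperbolic scaling of the vector from 0 to w by a real factor lam."""
    r = abs(w)
    if r == 0.0:
        return 0.0 + 0.0j
    return np.tanh(lam * np.arctanh(r)) * w / r

def hyp_scale_segment(lam, w1, w2):
    """Scale the hyperbolic vector w1 -> w2 by lam, returning the new endpoint w_lam."""
    step = mobius_add(-w1, w2)                        # translate w1 to the origin
    return mobius_add(w1, mobius_scale(lam, step))    # scale, then translate back

if __name__ == "__main__":
    w1, w2 = 0.2 + 0.1j, -0.4 + 0.5j
    for lam in (0.0, 0.5, 1.0, 2.0):
        w = hyp_scale_segment(lam, w1, w2)
        print(f"lam={lam:3.1f}  w_lam={complex(w):.4f}  |w_lam|={abs(w):.4f}")
```

By construction the result always stays inside the open unit disc, which is the property that makes the constraints X_min, X_max of the Euclidean algorithm unnecessary.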
2) Hyperbolic Addition: It turned out that the right way to define the hyperbolic addition is based on the compositions of Blaschke functions.Namely, it can be shown that the collection of Blaschke transforms B := {B(•, a) : a ∈ D × T } is closed for composition.Moreover, (B, •) is a subgroup of the well-known Möbius transformations, which maps the unit disc onto itself.In particular, if a 1 = (w 1 , 1) and a 2 = (w 2 , 1), then B(•, a 1 ) • B(•, a 2 ) = B(•, a), where a = (w, ) with = 1 + w 1 w 2 1 + w 1 w 2 and w = w 1 + w 2 1 + w 1 w 2 .The latter formula can be interpreted as a vector addition in the hyperbolic space for vectors with initial point at zero and endpoints at w 1 , w 2 (see Sections 3.4-3.5 in [43]).Following this, we will use the operations Then the hyperbolic PSO (HPSO) is defined by replacing •, +, − in Eq. ( 12) with their hyperbolic variants , ⊕, .This algorithm can be applied directly in the single-pole case, i.e., when there is only one inverse pole, in other words n = 1 in Eq. ( 7) and its multiplicity is fixed. C. Hyperbolic PSO Algorithm for Multi-Pole Problems The multi-pole problem is to determine the optimal inverse pole combination a = (a 0 , . . ., a n−1 ) T ∈ D n with fixed multiplicities m = (m 0 , . . ., m n−1 ) T ∈ N n + .For most of the heartbeats three poles are well separated according to the natural segmentation (P, T waves, QRS complex).Because of the localization property of basic rational functions the interference between terms with different poles is relatively small [44].Therefore, we perform the optimization separately on each complex coordinate min a i r 2 (a 0 , . . ., a i , . . ., a n−1 ) (i = 0, . . ., n − 1).(15) Then the solution of the multi-pole problem is reduced to successive applications of the single-pole optimization.It is a natural consequence of the hyperbolic model that the swarm cannot leave the unit circle during the algorithm.This makes the constraints X min , X max used in the original Euclidean algorithm unnecessary. In [45] we showed that the HPSO outperforms the well-known Nelder-Mead simplex algorithm in terms of reconstruction error and stability.The latter was proved to be important, especially when the MT system is used in classification problems [35]- [37]. V. GENERALIZED RATIONAL VARIABLE PROJECTION WITH OPTIMIZATION BY MULTI-DIMENSIONAL HPSO In this section we give a non-trivial example for generalized variable projection inspired by a real applications, namely by signal compression.Keeping our eye on conditions (11) in Section III, we first choose a proper system.This will be the system of rational functions; the reasons behind it were provided in Section II-C4. A. Cost Function The first term in Eq. (10), which measures how well the signal is represented by the projection, is set.Furthermore, it is natural to assume that the penalty term is strongly connected with the compression ratio in this case.According to this we specify the cost function as the linear combination of the approximation error and the reciprocal compression ratio (RCR) as follows: where f ∈ R M is a discrete signal with M samples, and Recall that f is of zero mean with unit variance, n is the number of distinctive poles and N is the sum of multiplicities (see Eq. 
( 7)).The approximation error is computed as the usual percent root mean square difference (PRD).We point out that the second term containing RCR plays two roles.Namely, n + N is obviously inversely proportional to the compression ratio on the algorithmic level.On the other hand, RCR can be viewed as the measure of complexity (dimension) of the system.The consequence of the penalty term is that the gain in PRD must reach a certain level in order to move to a higher dimension.The role of the regularization parameter α ∈ (0, 1] is to customize the method to different applications.The proper value of α for ECG compression is given in Section VI-C2.Since α is constant in a given problem, the cost function can be divided by 100 • α to obtain The minimization of the cost function is designed to find the optimal pole configuration and the corresponding optimal pole positions.In order to do that we need to define an architecture space designed especially for the ECG signal compression problem. B. Architecture Space The pole configurations are described by the parameter vector m in Eq. (7).We note that the structure of the configuration set is rather complicated in the sense that the subspaces generated by different configurations are not comparable even if the numbers of the inverse poles along with the total dimensions of the subspaces, which are the sums of the multiplicities, are the same (see e.g. the cases (2, 6, 2) and (2,4,4)).This is a more complicated situation than the case of nested subspaces, which naturally induces a linear ordering and leads to the problem in Eq. ( 9).In the next step we will simplify the parameter set, i.e., we assign a virtual dimension d to each parameter vector m, in such a way that it follows the system complexity n + N .Based on our experience we chose a set of 30 configurations that are worth considering in the ECG compression problem.Table I contains the pole configurations m, system complexity n + N and the virtual dimension d.We note that there is however another expectation concerning dimensions.Namely, it is natural to expect that the increase of d results in better PRD.Figs. 1 show that taking records 117 and 119 from the dataset [8] our ordering defined purely on the basis of system complexity n + N behaves adequately in this respect.It is easy to see that by introducing the virtual dimension d, Eq. ( 17) can even formally be considered as an example for the generalized variable projection functional in Eq. (10) with M , where the connection between d and n + N is given in Table I. In former works [32], [46], [47], it turned out that three inverse poles are sufficient for accurate representation of the heartbeats.These inverse poles are well separated and their multiplicities reflects the natural segmentation (P, T waves, QRS complex) of the heartbeat.Therefore, the inverse pole that corresponds to the QRS complex usually dominates the approximation, i.e., its multiplicity is higher than the others.The second and third most significant inverse poles correspond to the T and the P wave, respectively.Of course, in case of abnormal heartbeats some of these waves can be missing.These observations justify the pole configurations in Table I that we chose especially for ECG compression. C. Multi-Dimensional HPSO After having defined the cost function, i.e., the generalized variable projection functional, we turn to the problem of optimization.To this order we take the so-called multi-dimensional (MD) PSO algorithm introduced by Kiranyaz et al. 
[6], and we adapt it to our case. MDPSO is a generalization of PSO, which, along with the process of adaptation, was established in Section IV. PSO based algorithms are constructed for static environments, but many practical problems change dynamically. This motivated the generalization of PSO to MDPSO, in which the dimensions are not fixed a priori. Then the optimization becomes a mixed integer nonlinear programming (MINLP) problem. The native structure of the swarm was extended by dimensional parameters. Thus, the particles can seek both positional and dimensional optima. The MDPSO was originally developed to evolve Artificial Neural Networks (ANN) for supervised learning [48], where the weights and biases of the network should be determined in order to minimize the classification error. We emphasize that since there is no penalty term concerning the structure of the ANNs, this optimization problem is still a variable projection method (see Section 3.2 in [48] and Section 1 in [9]). The original MDPSO algorithm [6] is the following. Position updates: Dimension updates: where [.] is the integer rounding operator. The main changes compared to PSO given in Eq. (12) are the dimensional indices (the current, personal best and global best dimensions), which take values in I = {d_min, . . ., d_max}. In this case, every particle has a certain position and velocity in each dimension. For instance, x_k^{d_k} denotes the position of the kth particle at dimension d_k ∈ I. The dimensions are kept within I by using a so-called clamping operator. Note that in this algorithm the dimension parameter is a natural number assigned to every ANN structure.

Following the reasoning given in Section IV, we adapt MDPSO to rational systems by replacing the arithmetic operations in the position update equations Eq. (18) by their hyperbolic variants. We call this modification MDHPSO and provide its pseudocode in Alg. 1 in the Appendix. Recall that the vector m in Eq. (7) is the natural characterization of the complexity of the rational system. This was converted into the sequence 1, . . ., 30 in Table I. The real explanation of this conversion was to make our optimization problem compatible with MDHPSO. In this way we made sure that our generalized rational variable projection constructed for ECG meets all three conditions in (11).

Fig. 2: Approximation of biomedical signals with α = 0.5 by using the first four channels of the record slp02a from the MIT-BIH/slpdb database [8].

Fig. 2 shows examples where the multi-dimensional rational variable model along with MDHPSO is applied to different types of signals such as blood pressure (BP), respiration (RESP), ECG and EEG. It is transparent that the method automatically adjusts the number of the inverse poles and the coefficients to the complexity of the signal. For this reason, only two inverse poles are used to represent the BP and RESP signals, with 6 and 8 coefficients and dimensional parameters d = 1, 4, respectively. In the case of the ECG and EEG signals the optimal dimensions of the architecture space are increased. The number of inverse poles changes from two to three, the numbers of coefficients are 12 and 18, and the dimensional parameters are d = 14, 23. We note that the solution produced by the MDHPSO can be refined by fast local methods such as Gauss-Newton algorithms staying within the optimal dimension. VI.
ECG COMPRESSION Biomedical monitoring of the human body is one of the most important tools for patient's diagnosis.It is for instance principal for proactive prevention of diseases.We note that long-term recordings such as ECG Holter-monitoring or 24 hours multi-channel electroencephalogram (EEG) recordings generate a large amount of data.This explains the need for compression in such cases.Of course the variable projection can also be used for various tasks other than compression including features extraction for classification, person identification, etc.Here we choose the task of compression of ECG to demonstrate the efficiency of our method. A wide range of algorithms have been proposed in this field.They can be classified into three categories [49], namely, parameter extraction algorithms, direct time-domain methods, and transform-domain techniques.Here we are considering the latter one, which can be interpreted as projections of the signal to low dimensional subspaces.Existing methods use sinusoids, wavelets, wavelet packets, Walsh functions, orthogonal polynomials, splines, principal components, etc. [13], [50]- [53].For instance, Karczewicz and Gabbouj [12] proposed an algorithm that approximates the signal by linear combinations of B-splines with free knots.Although the algorithm is highly adaptive the computational cost is quite large due to the lack of orthogonality.In order to overcome this problem, orthogonal polynomials (in particular Hermite functions) were applied [13], [15], [16].As a consequence of orthogonality, the coefficients of the Hermite representation can be easily calculated by using scalar products of the corresponding Hilbert space.Efficient implementations for both the continuous and the discrete cases are presented in [14], [17].In contrast with the B-splines, Hermite-based compression schemes contain only one free parameter, namely the dilation of the base functions.Analogously, parametrized orthogonal wavelets can also be applied for signal compression.Then a wavelet decomposition with L/2 − 1 degrees-offreedom is obtained, where L is the length of the wavelet filter.Theoretically the number of free parameters is infinite, but for a large number of parameters the formulas become unmanageable.For this reason, in most of the signal processing applications the length of the filter is restricted to L = 6, which means only two degrees-of-freedom (see e.g.[24], [25], [54]).We note that only a few methods utilize rational functions [32], [46], [47], [55], [56], and even in those cases, the complexity of the models is fixed a priori.We will show that our approach is essentially different from them, and the proposed algorithm outperforms these methods. A. Preprocessing Stage 1) Beat Detection/Normalization: The compression method is based on successive evaluations of the MDHPSO algorithm on each heartbeat.The QRS complexes should be detected first to identify the heartbeats [57].Then we follow [58] to get the segmentation.Namely, the original signal is cut at every 130th sample before each QRS peak.In the next step the linear correction is applied to avoid jumps at the endpoints: where f = (f 0 , f 1 , . . ., f M −1 ) T is the discrete signal segment and M is the number of samples.Then we apply normalization For the reconstruction of f the values f 0 , f M −1 and f * 2 should be stored as well. 2) Hilbert Transform: We will apply a rational transform, given in Section II-C4, for ECG records.Recall that the signal in Eq. 
( 8) belongs to H 2 (D).This implies that the real function representing a heartbeat should be extended to a complex valued function in H 2 (D).It can be preformed by means of the wellknown Hilbert transform.Therefore we will employ the discrete Hilbert transform H to f to obtain F := f + iH f . B. Compressing Stage For the compression of F obtained during preprocessing we will apply the multi-dimensional generalized rational variable projection method along with MDHPSO developed in Section V. Note that the original MDPSO was successfully applied in optimization problems related to dynamical environments [6].From this point of view, the problem of ECG compression is similar due to the physiological behavior of the human heart.Although, the ECG signals are characterized by strong interbeat correlation, the segments are influenced by several factors including the respiration rate.Namely, the heart rate (HR) increases during inhalation and decreases during exhalation.In some cases only a few coefficients are needed for a good approximation.In other cases more coefficients are required to store the significant diagnostic information.Typical examples are the abnormal heartbeats, in which sudden changes are present. 1) Basic Algorithm: After the preprocessing stage, the MDHPSO algorithm and the corresponding rational projection are executed on each heartbeat.The result is an optimal inverse pole configuration, which is quantized and stored together with the related coefficients.The architecture space in Table I is also saved in the header of the compressed file.It serves as a look-up table for the pole configurations.The structure of the compressed file and the block diagram of the method can be seen in Table II and in Fig. 5. 2) Aligned Algorithm: As mentioned above, ECG signals have strong interbeat redundancy caused by cardiac cycles.Taking advantage of this phenomenon, compression results can be highly improved in certain situations.For this reason, we apply the average beat subtraction technique [59].In this approach, the length of the heartbeats are equalized by zero-padding after segmentation and beat alignment.Then, the average of the first 30 beats in a record is approximated by using the pole configuration of the highest dimension of the architecture space.It provides an accurate representation of the mean cycle which is subtracted from all the segments of the ECG.Finally, the basic compression algorithm is applied on the residual signal.Although, the parameters of the average beat should also be stored in the header, higher accuracy/compression ratio is expected due to the interbeat correlation. C. Parameter Estimation 1) Initial Swarm: In order to speed up the convergence of the optimization, we use a starting estimate for the system parameters.Namely, we keep the optimal pole configurations of the previous segments in the initial swarm of the MDH-PSO algorithm.More precisely, MDHPSO is initialized with a random swarm in which the first anb particles contain the optimal pole configurations of the previous anb beats.anb is the average number of beats in a respiratory cycle computed by anb = [HR/brc], where brc denotes the number of breaths counted in a minute.In our implementation we set anb = 5. 
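The preprocessing steps above (cutting the record a fixed number of samples before each QRS peak, linear endpoint correction, l2 normalization, and the analytic extension by the discrete Hilbert transform) can be summarized in a short sketch. This is an illustration only, not the authors' MATLAB implementation; the function names are ours, and the use of scipy.signal.hilbert for the discrete Hilbert transform is an assumption.

```python
import numpy as np
from scipy.signal import hilbert

def segment_beats(ecg, qrs_peaks, offset=130):
    """Cut the record 'offset' samples before each detected QRS peak (Sec. VI-A1)."""
    cuts = [max(p - offset, 0) for p in qrs_peaks]
    return [ecg[a:b] for a, b in zip(cuts[:-1], cuts[1:])]

def preprocess_beat(f):
    """Linear endpoint correction, l2 normalization and analytic extension of one
    heartbeat segment; f_0, f_{M-1} and the norm are kept as side information."""
    f = np.asarray(f, dtype=float)
    M = f.size
    ramp = f[0] + (f[-1] - f[0]) * np.arange(M) / (M - 1)  # line joining the endpoints
    f_star = f - ramp                                      # no jumps at the endpoints
    norm = np.linalg.norm(f_star)
    f_star = f_star / norm                                 # unit l2 energy per beat
    F = hilbert(f_star)                                    # analytic signal f* + i*H(f*)
    return F, (f[0], f[-1], norm)
```

The complex-valued beats F produced this way are then passed, one by one, to the MDHPSO-driven rational projection of the compressing stage described above.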
2) Calibration of Parameter α: We investigated the influence of the regularization parameter α in the cost function f cost on CR and PRD.Our goal was to achieve the highest CR by keeping PRD within the "very good range" according to Table III (cf.[60]).To this end, we took 24 records (see e.g.Table IV) of the MIT-BIH Arrhythmia database from Physionet [8].Both the basic and the aligned versions of the proposed algorithm were applied on the signals by setting α = j/10, (j = 1, . . ., 10).By Fig. 3 we conclude that 0.5 meets our requirements for these records.Indeed, we evaluated the normalized quality scores (QSN) in Eq. ( 21), which indicates that the optimal value of α is around 0.5.Although the performance is slightly better in terms of QSN for α = 0.6, the corresponding PRD degrades to the "good" quality range.Therefore, we performed the tests for α = 0.5.The results in Table IV show that this value is a good choice generally. 3) Quantization: The optimal inverse poles and the corresponding coefficients are linearly quantized and stored in the compressed file.In order to find the optimal number of bits for the representations, we use the records 117 and 119.These are extremely irregular ECGs, which are widely used for evaluating the performance of ECG compression algorithms (see e.g.[25], [58]).We need k bits to represent the arguments ∠a j , ∠c j and the absolute values |a j |, |c j | if the quantization steps are 2π/2 k and 1/2 k , respectively.Indeed, each beat is divided by its 2 norm, so the energy of the signal is equal to 1.As a consequence of the orthogonality of the MT system and the Parseval's theorem, the constraint |c j | ≤ 1 holds.Moreover, a j ∈ D, so 1/2 k is a proper quantization step for both |a j | and |c j |.In order to find the optimal quantization step, we executed the MDHPSO 100 times on the first four minutes of the records 117 and 119 for each quantization step.The average reconstruction errors of the runs for k = 2, . . ., 10 is shown in Fig. 4. It is evident that 4 bits are enough to represent the angles and the absolute values of the inverse poles.Similarly, it is not worth to store the coefficients using more than 7 bits, since it does not improve the PRD significantly.The number of bits assigned to the quantities included in the compressed file are displayed in Table II. We note that the quantization of the inverse poles is especially important, since it directly influences the optimization.In order to demonstrate this effect, we examined the compression performance on the same 24 records as in Table IV by using two quantization configurations: p3c7 and p4c7.The abbreviaton pxcy denotes the number of bits we used to store both the absolute values and angles of the poles (p) and the coefficients (c), e.g.p4c7 denotes the optimal setup.As was to be expected, the sub-optimal setup p3c7 yielded worse reconstruction error (PRD) and compression ratio (CR) compared to p4c7.The reason is twofold.On one hand, storing the parameters on a better resolution decreases the PRD.On the other hand, in case of p3c7, it turned out that the low resolution of the inverse poles were compensated by an increased number of coefficients.In the optimal setup p4c7, the optimization choose less complex rational function systems to minimize the cost function which means less coefficients to store, i.e., better CR.This phenomenon can be seen in Fig. 
6, where we displayed the histograms of the best dimension indices at the final iteration of the optimization.The figure shows that the optimization for p4c7 terminated more frequently in lower dimensions than its counterpart p3c7. VII. EXPERIMENTS For comparison tests we used the MIT/BIH arrhythmia database [8], which has been widely used for testing ECG compression techniques.The dataset contains 48 half-hour long ECG recordings, which was digitized to 11-bit resolution.We note that in many papers the tests were performed for only a small portion (couple of records, minutes) of the whole set.Here we take those 24 records suggested in [61], and compress the first channel of the entire signal.It results in a total of 12 hours long raw ECG data which was used to compare the proposed algorithm and those described in Section II-C.We note that we implemented the recent versions of the corresponding algorithms.Namely, the Hermite expansions of the QRS complexes were calculated by discrete orthogonal polynomials [14] and the mother wavelet parametrization was applied along with wavelet packets optimization [24].In the latter case, the wavelet domain was represented via the modified run-length coding recommended in [25]. Although the reconstruction error is usually expressed in terms of PRD, we cannot ignore the distortion measures designed especially for ECG signals.Unfortunately, only a few distortion criteria are available in ECG compression for performance evaluations.One of them is the so-called weighted diagnostic distortion (WDD) measure [62].On one hand, it correlates very well with mean opinion scores (MOS) of the clinical experts.On the other hand, there is no standard code for computing the WDD.Besides, the measure is unstable due to the requirement of accurate classification for characteristic features of the ECG signal.For this reason, we will use another diagnostic distortion measure called the wavelet-based weighted (WW) PRD [60].In that case, the signal is decomposed into five sub-bands which are weighted regarding their cardiological significance.Let us denote the coefficient vectors of the jth wavelet level of the original and the reconstructed signals with zero mean by c j and ≈ c j , respectively.Then the WWPRD can be defined as follows: The sequence of weights are w = 6 27 , 9 27 , 7 27 , 3 27 , 1 27 , 1 27 which were heuristically assigned to each sub-band emphasizing their diagnostic significance.We note that both the PRD and the WWPRD measures provide five quality groups corresponding to different ranges of the values, which are summarized in Table III (see Table VII in [60]).According to [63], the PRD should be computed for zero mean signals.Then it is not affected by the mean of the signal.In order to distinguish the two types of PRD we will use the following notations: where ≈ f is the reconstructed signal, and PRDN stands for the normalized PRD, i.e., PRD of a signal with zero mean. 
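For reference, the two error measures just defined can be computed with a few lines of Python. This is a sketch under our own assumptions; in particular, the wavelet filter ('db4' here) and the assignment of the six weights to the approximation and detail bands of a 5-level decomposition should be checked against the exact definition in [60].

```python
import numpy as np
import pywt  # PyWavelets

def prdn(f, f_rec):
    """Normalized percent RMS difference: both signals are taken with zero mean."""
    e = (f - f.mean()) - (f_rec - f_rec.mean())
    return 100.0 * np.linalg.norm(e) / np.linalg.norm(f - f.mean())

def wwprd(f, f_rec, wavelet="db4", level=5):
    """Wavelet-weighted PRD over the level+1 sub-bands of a 5-level decomposition."""
    w = np.array([6, 9, 7, 3, 1, 1]) / 27.0          # weights from the text
    c = pywt.wavedec(f - f.mean(), wavelet, level=level)
    c_rec = pywt.wavedec(f_rec - f_rec.mean(), wavelet, level=level)
    band_prd = [np.linalg.norm(a - b) / np.linalg.norm(a) for a, b in zip(c, c_rec)]
    return 100.0 * float(np.dot(w, band_prd))
```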
We performed the comparison tests in two stages.First, the proposed algorithm, both the basic and the aligned variants, were executed by using the optimal parameters given in Section VI-C.Second, we took the four other variable projection methods discussed in Section II-C: B-splines [12], Hermite functions [13], [14], wavelets [25], wavelet packets [24], [64], [65].In order to make a fair comparison, we executed these four algorithms iteratively till their PRDNs became close to those of our basic method for each record.We increased the number of coefficients gradually, and we stopped the iteration if the PRDN of our method (the values in the fourth column of Table IV) or the maximum number of coefficients was reached.As a consequence, sometimes the PRDN was not even close to that of our method (see e.g. the Hermite representation for the record 102).One can see the results of the experiment in Table IV, where CR denotes the ratio of the number of bits of the original and the compressed files.Note that we used the normalized definition of the PRDN which is independent of the mean of the signal.In addition, we calculated the PRDN for each beat and presented their average for the in Table IV. VIII. DISCUSSION First, we compare the basic and the aligned variations of the proposed compression algorithm.As it was shown in [59], the regularity of ECG cycles facilitates the application of beat subtraction techniques.This complies with the significant improvement that can be seen for more than half of the records: 100, 103, 105,115,201,202,205,209,213,214,215,217,219,232.Namely, in these cases the mean of the differences in PRDNs and WWPRDs are 0.41% and 0.62%, and the compression ratio is 1.38 times better in the aligned case, on the average.These results agree with those in [59], especially for the records 103, 201, 202, 213, 214, 215, 217, 219 (see e.g.Fig. 8 in [59]).The standard deviations of PRDN, WWPRD and CR for the aligned and the basic algorithms are close to each other and quite low.This indicates the stability of the algorithms.Moreover, according to Table III, the average PRDN and WWPRD of the reconstructed ECG signals fall within the range of very good and good quality for the basic and the aligned variation of the proposed method.On the other hand the aligned method is not always preferable, see e.g. the records 104, 207, 208 in Table IV.In these cases, the average beat estimation is poor due to the high presence of abnormal beats and irregular cardiac cycles.According to [59], the compression performance of the simple average beat subtraction technique highly depends on the similarity between the estimated average beat and the beats to be compressed.Therefore, a big improvement can be achieved for those recordings that present one type of beats in a regular rhythm.In order to analyze the performance of the aligned algorithm, we computed the average Pearson correlation coefficient ρ x,y between the estimated average beat and the beats to be compressed.As we discussed, the rhythm is also important, hence we calculated the standard deviation σ RR of the R to R peak distances.Since the compression performance is proportional to 1/σ RR and ρ x,y , we analyzed these measures along with the normalized quality score differences: where QSN is defined in Eq. (21).Fig. 
7 shows that the average beat subtraction improved the compression performance for almost every record, except 104, 207 and 208.In case of 207 and 208, both 1/σ RR and ρ x,y are low, i.e., the average beat estimations are bad and the rhythms are irregular.In Fig. 8, we displayed the first 30 beats of the record 207.The set comprises two completely different beat types, thus their simple average will be a bad estimate of any of them.Record 104 is another example, which contains a lot of paced (No 1373) and fusion (No 664) beats.The difference between these two beat types can be seen in Fig. 9(a).Also, Fig. 9(b) shows the first 30 beats of the record 104.Although the estimated average beat is quite similar to the paced beats, it is still very different from the fused ones.This is why the resulting QSN is lower for the aligned method, despite of the high 1/σ RR and ρ x,y values.We note that there are other more sophisticated beat subtraction techniques including average beat code books.Based on the results on regular ECG records, we expect the proposed method can be further improved by using these techniques.This will be a part of future work.Now, we compare the proposed method with the other four ECG models discussed in Section II-C.We note that also these algorithms can be modified by beat subtraction techniques.Therefore, in order to make a fair comparison, we exclude the aligned method from the analysis.Moreover, as mentioned in Section VII, every method was adjusted to have PRDNs similar to the basic method.This is reflected in the "Reconstruction error" section of Table IV.Comparing the WWPRDs we conclude that the basic method is superior to the others both in terms of average and standard deviation.The same holds for most of the individual records as well.It is worth mentioning that the averages of both PRDN and WWPRD fall in the very good quality category.The best values for the standard deviations indicate the reliability of getting high quality compressed signals with CR 1:15.It is evident that the highest CR, which was our ultimate motivation, was achieved by our algorithm.The CR values for the others, except for the B-spline case, are significantly lower.In terms of WWPRD and CR the B-spline method turned to be the second best.We note that it has much higher computational cost compared to the basic method due to the iterative knot removal algorithm and the lack of orthogonality. The variable projection via Hermite functions is one of the least effective methods in sense of PRDN, and WWPRD.In fact, the desired level of PRDN could not be reached, within the predefined low compression ratio in several cases.This is due to the segment based representation.We get a good approximation of a wave/spike if the main peak is close to the middle of the segment.On the other hand, if the predictions of the endpoints of the P, QRS, T waves are not good enough, then the approximation error is higher.The three free dilation parameters available cannot compensate the effect of bad segmentation. 
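The two regularity indicators used in the analysis above, the mean Pearson correlation between the estimated average beat and the individual beats and the spread of the R-to-R distances, can be sketched as follows. The zero-padding, the use of the first 30 beats and the 360 Hz MIT-BIH sampling rate follow the text; the function name and interface are ours.

```python
import numpy as np

def alignment_indicators(beats, r_peaks, fs=360):
    """Predictors of when average beat subtraction pays off: mean Pearson
    correlation rho between the average beat and each beat, and the standard
    deviation of the R-R intervals (in seconds)."""
    L = max(len(b) for b in beats)
    padded = np.array([np.pad(b, (0, L - len(b))) for b in beats])  # zero-padding, Sec. VI-B2
    avg_beat = padded[:30].mean(axis=0)            # average of the first 30 beats
    rho = np.mean([np.corrcoef(avg_beat, b)[0, 1] for b in padded])
    sigma_rr = np.std(np.diff(r_peaks) / fs)
    return rho, sigma_rr                           # large rho and small sigma_rr favor alignment
```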
We note that the optimization of AWT with respect to the two free parameters always terminated near the values a₁ = 1.3598, a₂ = −0.7821; the average differences are less than 7.7·10⁻³ and 1.1·10⁻², respectively. These are the parameters of the Daubechies wavelet db3, which means that the advantage of the adaptivity of this method is negligible for ECG signals. This complies with the experience in [24]. The idea behind the optimal behavior of the db3 wavelets for ECG signals lies in the facts that they have three vanishing moments (see e.g. [23], [66]) and that ECG curves are quite smooth signals. The latter also explains the tendency to use B-splines and orthogonal polynomials in this field. The optimization of the adaptive wavelet packet transform (AWPT) is carried out in two steps. Following [24], we selected the best basis from the wavelet packet tree with depth 7 according to the Shannon entropy. Then, we changed the parameters of the mother wavelet using Eq. (6). As shown in Table IV, the CR values of AWPT are worse than those of AWT. This is due to the fact that the optimal parameters were again close to the wavelet basis db3. In contrast to the simple AWT, the best wavelet basis has to be stored as well, which explains the low CR values. Our observation again coincides with the results of [24]. In that work, the same adaptive wavelets and wavelet packets were applied as in Section II-C for compressing ECG and EMG signals. Although a large improvement was achieved for EMG signals, the results on ECG curves were very close to the db3 wavelet basis. Hence, the authors concluded that the gain in performance strongly depends on the signal type. We call attention to the fact that the diagnostic distortion (WWPRD), which was not examined in previous works like [24], [64], [65], is significantly higher for AWPT than for AWT. We also compare the proposed algorithm to state-of-the-art methods that are based on various approaches including wavelets [25], [67], [68], wavelet packets [74], wavelet encoding [69], [71], the discrete cosine transform [70], image compression [73], and deep neural networks [72]. In order to evaluate the performance of these methods, the so-called quality scores were introduced in [61]: QS = CR/PRD, QSN = CR/PRDN (21). The higher the QS or QSN, the better the performance. In Table V, we summarize the results for those records that are common to the cited papers and our work. The results show that the aligned variation of the basic algorithm outperforms all the others. In addition, even the basic procedure is very close to the competing methods. In particular, the QS is very high for [55], [56], which also utilize rational functions. However, in those works the complexity of the model, i.e., the number of inverse poles n and the subspace dimension N, was fixed a priori, while these parameters are found automatically by our algorithm. It is also worth mentioning that most of the competing methods in Table V apply entropy-based lossless compression techniques as a final step. These algorithms can be utilized for the proposed method as well and would further improve our results. Although the proposed generalized rational variable projection algorithm is numerically quite complex, its time complexity is manageable. Let N_it stand for the number of iterations of the MDHPSO, and recall the notations for the swarm size S and the number of dimensions |I|. At the initialization step we need to evaluate the cost function in Eq. (16) at each particle and for each dimension. This is followed by S function evaluations per iteration. Therefore, the overall number of function evaluations is equal to S·|I| + S·N_it. In our experiments the algorithms were implemented in MATLAB, and an Intel(R) Core(TM) i7-6700 @ 3.40 GHz CPU was used. The size of the swarm and the number of iterations were set to 30 and 20 in the optimization, respectively. The average execution time for a 30-minute ECG record is then about 23 minutes for both the basic and the aligned algorithms. The only competitive algorithm in terms of WWPRD and CR is the B-spline method, for which the average execution time is about 91 minutes. Note that the B-spline method utilizes a knot removal algorithm in which the initial number of knots is proportional to the number of samples in a beat; a high sampling rate therefore increases the execution time. On the contrary, the computational complexity of our method depends only on the number of function evaluations and the number of iterations, which are predefined manually. In addition, one can reduce the number of relevant inverse pole configurations in Table I based on a priori information about a specific class of signals. The execution time can be further decreased by applying parallel implementations of the PSO and MDHPSO algorithms, but these are beyond the scope of this work. IX. CONCLUSION In the first part of this paper we considered a mathematical model called variable projection and showed that several known signal processing methods can be understood as special cases of it. Then we studied rational function systems in this framework. We generalized the rational variable projection model by adding the model complexity as free parameters. For the optimization of the system parameters we developed the multi-dimensional hyperbolic variant of the well-known PSO algorithm. Based on our experience, we believe that our method can be used effectively for various problems in signal processing, including data fitting, feature extraction, segmentation, etc. In this work the problem of compressing ECG signals was chosen to demonstrate its efficiency. It turned out that the generalized rational variable projection algorithm outperforms the previously known ones. The MATLAB implementations of all the algorithms included in this paper and the test results are available at the websites [75], [76]. We emphasize that the proposed algorithm is of a general nature: it is flexible and can be adjusted to different types of signals such as EEG [35]. Fig. 1. Average PRDs of each pole configuration for the records 117 and 119 of the PhysioNet MIT-BIH Arrhythmia dataset [8]. Fig. 3. Influence of the parameter α on the PRDN, CR, and QSN. Fig. 4. Quantized arguments and absolute values of the inverse poles and the coefficients. Fig. 7. Analysis of the compression performance of the basic and the aligned algorithm. Fig. 8. The first 30 beats in the record 207. Table I. Architecture space of the MDHPSO. Table II. Data structure of the compressed file. Table III. Prediction ranges of PRDN and WWPRD. Table V. Comparison of the proposed methods with the state-of-the-art.
14,054.6
2020-02-28T00:00:00.000
[ "Computer Science", "Engineering" ]
Quaternion to Euler angles conversion: A direct, general and computationally efficient method Current methods of the conversion between a rotation quaternion and Euler angles are either a complicated set of multiple sequence-specific implementations, or a complicated method relying on multiple matrix multiplications. In this paper a general formula is presented for extracting the Euler angles in any desired sequence from a unit quaternion. This is a direct method, in that no intermediate conversion step is required (no quaternion-to-rotation matrix conversion, for example) and it is general because it works with all 12 possible sequences of rotations. A closed formula was first developed for extracting angles in any of the 12 possible sequences, both “Proper Euler angles” and “Tait-Bryan angles”. The resulting algorithm was compared with a popular implementation of the matrix-to-Euler angle algorithm, which involves a quaternion-to-matrix conversion in the first computational step. Lastly, a single-page pseudo-code implementation of this algorithm is presented, illustrating its conciseness and straightforward implementation. With an execution speed 30 times faster than the classical method, our algorithm can be of great interest in every aspect. Introduction When dealing with 3D orientation problems, many different formalisms can be used to describe a given rotation [1], each of which has its own set of advantages and shortcomings. Arguably the most direct representation of a 3D rotation is a matrix R 2 SO(3), where SO(3) is the group of invertible 3 × 3 matrices such that det(R) = 1 and RR T ¼ R T R ¼ I, where I is the identity matrix. These rotation matrices represent direct linear transformations such that, with v 2 R 3 : Apart from being simple to use, a rotation matrix also has the advantage of being continuous, and a simple matrix multiplication can be used to compose rotations: R = R 2 R 1 is the rotation matrix corresponding to a rotation by R 1 followed by a rotation by R 2 . 3D rotation matrices have some numerical shortcomings, however. For example, as many as 9 numbers (and 6 constraints) are required to represent a 3 degree of freedom rotation, and it can be difficult and computationally costly to orthogonalize a rotation matrix numerically [2] (i.e., to check that the matrix has its determinant equal to 1 and its inverse equal to its transpose, which is necessary to compensate for the accumulated floating point errors). However, it is possible to parametrize this rotation matrix with a smaller set of numbers [3]. One of the most usual set of parameters are the Euler angles. The approach consists in decomposing the 3D rotation matrix into the product of three rotations: Where R θ e is a rotation by the angle θ around the axis e, and the consecutive axes are orthogonal (e 1 � e 2 = e 2 � e 3 = 0). The advantages of using Euler angles include the fact that only three numbers have to be stored, and due to their familiarity, they can be more easily understood, which explains why they are still being so widely used, even in cases where other forms of representation may be more appropriate. The use of Euler angles also has several disadvantages. For example, they are discontinuous and it is difficult to directly compose two 3D rotations expressed in Euler angles. Euler angles are also affected by the phenomenon commonly called "gymbal lock": when two axes become aligned, making the system underdetermined, special care has to be taken. 
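To make the numerical point above concrete: keeping a drifting matrix in SO(3) requires a projection back onto the rotation group, whereas a quaternion only needs to be rescaled to unit norm. The SVD-based projection below is one common recipe, shown purely as an illustration; the paper does not prescribe a particular orthogonalization scheme.

```python
import numpy as np

def nearest_rotation(R):
    """Project a drifting 3x3 matrix back onto SO(3) (closest rotation in the
    Frobenius norm, via the SVD); this is the kind of re-orthogonalization step
    that the quaternion representation avoids."""
    U, _, Vt = np.linalg.svd(R)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # enforce det = +1
    return U @ D @ Vt

def normalize_quaternion(q):
    """The quaternion counterpart is a single rescaling."""
    return np.asarray(q, dtype=float) / np.linalg.norm(q)
```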
In addition, since there are 12 possible axis sequences (24, when considering the difference between "intrinsic" and "extrinsic" rotations), the correct sequence has to be checked in the case of each application. An arguably preferable parametrization are quaternions. A quaternion is a hypercomplex number defined by one real part and three distinct imaginary parts (which can also be regarded as the vector part). When the norm of a quaternion is equal to 1, quaternions are a useful and efficient representation of 3D orientation: they can be composed as easily as rotation matrices, they are continuous, and they are easily constructed from the axis-angle representation. In addition, quaternions can be normalized trivially, which is much more efficient than having to cope with the corresponding matrix orthogonalization problem. For these reasons, most 3D graphical applications and rotation engines carry quaternions under the hood. Besides these advantages, Euler angles are still being preferred by many authors: Euler angles are the most familiar concept to most engineers and researchers. In addition, in the case of many problems in which there exists only one degree of freedom, angles can suffice. To be able to perform fast calculations with quaternions and at the same time analyze rotations using angles, it might be necessary to have an efficient method of converting the one set of parameters to the other. Calculating the corresponding quaternion (or rotation matrix) for a given set of Euler angles is trivial. Extracting the Euler angles is much harder, however. One of the following two methods has generally been used up to now. The first method consists in adopting a different set of formulas for each possible angle sequence [4], which is difficult to implement and debug. The second method is that described in [5]. SciPy [6], for example, a widely used scientific library for the Python programming language, implements this method. It converts rotation matrices into Euler angles and involves many different matrix multiplications, including the inverse trigonometric functions required, which are naturally computationally costly. In addition, if rotations are stored in the form of quaternions (as is usually the case in many of the 3D rendering software tools dealing with rotations), an additional conversion step from quaternions to rotation matrices is necessary. Since many robotic, graphic and other high-level applications involve the use of quaternions (even if they are hidden from the user), it can be necessary to have a concise, efficient method for the conversion between quaternions and Euler angles. The direct conversion formula from quaternions to Euler angles presented here requires fewer computational steps and less expensive computational resources. Moreover, this conversion formula is much simpler to implement and debug, making it a great option for any new applications needing to implement this kind of conversions. Quaternion algebra summary In this section, the key properties of quaternions are summarized. It is assumed in this work that we are dealing with the classical Hamilton quaternions. Since the definitions concerning quaternion algebra are not perfectly consistent in the literature [7], some of the main notations and definitions used in this study are then presented. Quaternions form a non-commutative division algebra denoted by H, which extends the complex numbers. A quaternion q 2 H consists of four components: Where q r ; q x ; q y ; q z 2 R. 
All the properties of quaternions can be obtained using its fundamental property, as given by Hamilton: Using the above properties, the product of two quaternions q and p can be expressed by the Hamilton product: For the sake of simplicity, quaternions will be written here as 4 × 1 vectors (with the scalar q r as the first element): is the the imaginary/vector part of q. The Hamilton product between two quaternions in 4-vector form will be denoted by: Defining the conjugate q � ¼ q r À q " # and the absolute value as jqj ¼ ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi the inverse q −1 of q is given by: And for any quaternion q: If q is a unit quaternion, which means that |q| = 1 and q −1 = q � , it can be used to represent the rotation between two reference frames. Denoting v A and v B a vector v in frames A and B, respectively, and q ¼ q B A the unit quaternion corresponding to the rotation from A to B: The equivalent rotation matrix is given by: And the equivalent quaternion for a rotation of an angle θ around an axis e is given by: Formula development In the section, the formula for the conversion between a quaternion and any of the 6 proper Euler angle sequences is derived, and then an adaptation for the 6 remaining Tait-Bryan sequences is demonstrated. Singularities. There are two different singularities in these expressions. When θ 2 = 0, we have s 2 = 0 and θ − is undefined. When θ 2 = π, we have c 2 = 0 and θ + is undefined. In both cases, one degree of freedom is lost and we can argue that θ 1 (or alternatively, θ 3 ) loses its geometrical meaning. We can then either set θ 1 to zero, or keep it fixed in its latest value (for example, when updating an estimator, for the sake of continuity). Defining: Takingŷ 1 to be some constant (zero or otherwise), we can calculate: y 3 ¼ 2 atan2ðb; aÞ Àŷ 1 ; when y 2 ¼ 0 General formula for θ 1 and θ 3 in the absence of singularities. If θ 2 6 ¼ 0 and θ 2 6 ¼ π/ 2, multiplying z + and z − yields: On similar lines, multiplying z + and the conjugate of z − yields: The angles can then be obtained using: Or, more simply, from Eq 25: Or: It is worth noting that Eq 32 requires fewer operations than Eq 30: only 2 calls to atan2, one addition and one subtraction, but a final wrapping step may be required in order to either keep the angles either in (−π, π] or [0, 2π). General formulas for calculating θ 2 . From Eq 24, we know that: And we can use any of the following equivalent formulas obtained directly from the definition: Where the factor n 2 = a 2 + b 2 + c 2 + d 2 = |q| 2 can be ignored if the quaternion is already normalized. Using the properties of inverse trigonometric functions, we can also find the following formula, which avoids the need for a square root: Case 2: Tait-Bryan angles We now define: Where −π/2 < ϕ 2 < π/2. Again assuming that e, e 0 and e 00 are orthogonal unit vectors and e 00 = ε e × e 0 , where ε = (e × e 0 ) � e 00 = ±1, we define: We note that: Which gives: Where: Where y 0 2 ¼ y 2 þ p=2 and y 0 3 ¼ εy 3 , and: PLOS ONE θ 1 0 // For simplicity, we are settingŷ 1 ¼ 0 Many operations are required to convert a quaternion into a rotation matrix. 
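Before turning to the operation count of the matrix-based route, the proper-Euler part of the derivation above can be collected into a short sketch. This is illustrative Python written by us, not the reference implementation of the paper: the quaternion is assumed to be stored as [w, x, y, z] (scalar first, Hamilton convention), only the six proper sequences i-j-i are handled, and the singular cases are resolved by setting the free angle θ1 to zero.

```python
import numpy as np

def quat_to_proper_euler(q, i, j, intrinsic=True):
    """Proper Euler angles for the axis sequence i-j-i (axes numbered 1..3),
    extracted directly from a unit quaternion q = [w, x, y, z]."""
    k = 6 - i - j                                   # the remaining axis
    eps = (i - j) * (j - k) * (k - i) // 2          # sign of the permutation (i, j, k)
    w, v = q[0], q[1:]
    a, b, c, d = w, v[i - 1], v[j - 1], eps * v[k - 1]
    n2 = a * a + b * b + c * c + d * d
    theta2 = np.arccos(np.clip(2.0 * (a * a + b * b) / n2 - 1.0, -1.0, 1.0))
    theta_plus, theta_minus = np.arctan2(b, a), np.arctan2(d, c)
    if np.isclose(theta2, 0.0):                     # singular: only theta1 + theta3 is defined
        theta1, theta3 = 0.0, 2.0 * theta_plus
    elif np.isclose(theta2, np.pi):                 # singular: only theta1 - theta3 is defined
        theta1, theta3 = 0.0, -2.0 * theta_minus
    else:
        theta1, theta3 = theta_plus + theta_minus, theta_plus - theta_minus
    if not intrinsic:                               # extrinsic = intrinsic with angles 1 and 3 swapped
        theta1, theta3 = theta3, theta1
    wrap = lambda t: (t + np.pi) % (2.0 * np.pi) - np.pi
    return wrap(theta1), theta2, wrap(theta3)
```

The Tait-Bryan branch is left out of the sketch; per Case 2 above, it follows from the same machinery after the stated component substitution and the relations between the primed and unprimed angles (θ'2 = θ2 + π/2 and θ'3 = εθ3).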
Using the homogeneous formula from Eq 11, for example, even if special care is taken not to repeat any operations, at least 4² = 16 floating-point multiplications (all products between two different components of the quaternion, plus the square of each component), 4 × 3 = 12 multiplications by 2, and 3 × 3 + 6 = 15 additions/subtractions have to be performed. This conversion step alone is enough to make an algorithm based on [5] much slower than the proposed method; in addition, multiple matrix multiplications also have to be computed. By comparison, our algorithm replaces all the conversions and matrix multiplications by a simple permutation of the quaternion elements, and in the case of Tait-Bryan angles only 5 additional additions/subtractions and possibly a sign change are required. Results In this section, a performance comparison between our method and the Shuster method is presented. We adapted the SciPy library in order to compile the algorithm as described in Section 4. A real data set comprising the orientation of a spinning object, with 3284 data points, was used to compare the efficiency of the two algorithms. The full implementation and data set can be downloaded from [8]. First, we noted that both methods give the same results: summing the absolute values of the differences between the two methods over the whole data set gives an error of the order of 10⁻¹². The execution times required in our tests for each sequence (and their ratios) are presented in Table 1. From this test, it can clearly be seen that the method presented here is about 30 times faster. Conclusion The Euler angles are still a useful, intuitive 3D orientation parametrization. A fast method of conversion to/from any other set of parameters can therefore be of great interest for displaying or analyzing data, for instance. In this study, we therefore developed a general formula for this conversion which is concise, easy to implement, and easy to debug. Given that our method is about 30 times faster than the method proposed by [5], which requires an intermediate conversion into rotation matrices, we believe that it can be of great interest. This faster execution time also makes the method suitable for use in embedded real-time applications such as inertial measurement units (IMUs). We propose that this method could be adopted as the new standard for converting quaternions into Euler angles, and we are now planning to contribute to several scientific libraries accordingly. Moreover, a possible further development is to generalize this formula to the Davenport angles [9], a generalization of the Euler angles in which sets of distinct, non-orthogonal axes can be used.
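As a quick, self-contained check of the agreement reported above, the sketch from the previous section can be compared with SciPy's built-in conversion on random rotations. This is an illustrative harness of ours: it reuses the hypothetical quat_to_proper_euler helper defined earlier for the intrinsic z-y-z sequence, and elementwise differences near machine precision are the expected outcome.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

rot = R.random(1000, random_state=0)
ref = rot.as_euler("ZYZ")                         # SciPy's conversion for intrinsic z-y-z
q_wxyz = np.roll(rot.as_quat(), 1, axis=1)        # SciPy stores [x, y, z, w]; reorder to [w, x, y, z]
ours = np.array([quat_to_proper_euler(q, 3, 2) for q in q_wxyz])
diff = np.abs(((ours - ref + np.pi) % (2.0 * np.pi)) - np.pi)  # wrap-aware comparison
print(diff.max())                                 # expected to be of the order of 1e-12 or smaller
```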
3,165.4
2022-11-10T00:00:00.000
[ "Computer Science" ]
The Logic Fundamentals of Machine Consciousness: Theory of Tri-State For a long time, the system of scientific methodology has been composed of logic, empirical (falsification), qualitative, quantitative and deterministic, and corresponding thinking tools. However, under the background of complexity science, the category of methodology should be changed, that is, on the basis of traditional methodology, non-classical logic, hierarchy, stereotype (topological invariant) and uncertainty should be added. This is also the main idea behind the “Thoery of Tri-state” in the first part of this paper. The core idea in the theory of “Tri-state” is “Tri-state Logic” (“positive | negative | uncertain state”). The ontology of “Tri-state Logic” aims to reveal the meta space-time movement law of things transforming from one form to another, that is, the coupling of time and space in the development of things, and the orientation and evolution of the continuity of things. The mathematical basis of “Tri-state Logic” is knot theory and dynamics theory. The second part of this paper designs a machine-consciousness model framework based on the “Theory of Tri-state” (Tri-state Logic). Its research starting point is the perspective of cognitive dynamics (cognitive psychology + dynamics), which is very different from the research ideas proposed by Minsky's “The Emotion Machine”. At the same time, this paper also tries to answer Turing's questions from different space-time dimensions, and gives an experimental idea of “kindergarten game” by comparing Turing's “imitation game”. Introduction Generally, the "Theory of Tri-state" is different from the existing natural science theory in terms of what the physical world is and the existence of quantity and quality. It is concerned with the intrinsic problem of why the physical world is so and the possible and impossible evolution of the physical world, and the origin of complex life 3 . It can be called a multidisciplinary meta-research methodology. As this paper is concerned, it is more about the theory of cognitive science which studies the meta level, meta energy and meta information of cognitive ontology. Of course, after all, it is only a scientific hypothesis. More concretely, in chapter 2 of this paper, it is a meta-cognitive theory used to study machine intelligence. In other words, the "Theory of Tri-state" is the methodological basis for the construction of machine consciousness, machine thinking and machine behavior model. The first chapter of this paper expounds the methodology of "Theory of Tri-state", and the second chapter expounds the logical system construction of the machine consciousness model supported by "Theory of Tri-State"(excluding logical system of machine thinking and machine behavior). In short, in chapter 2 of this paper, the "Theory of Tri-state" is like a comprehensive discipline methodology of geophysical research from top down, its research object is the "Troposphere" in atmospheric physics, and this "Troposphere" can be compared to the carrier of "Machine Consciousness". The "Machine Consciousness" is just like the clouds formed in the troposphere at any time due to the complex climate, which may disappear without a trace. The "Machine Thinking and Behavior" is just like the process of lightning and cumulus clouds in the troposphere. Chapter 1 The Framework of "Theory of Tri-state" It constructed by basic concepts, principles, laws, ontology space-time diagram and Tri-state Logic. Firstly, it is the description of core concepts. 
In philosophy of science, the main idea in "Theory of Tri-state" is the relativity theory of time and space, i.e the universe is not a multidimensional world, it is only a nested energy matrix infinitely. This matrix human call it "Sapce-Time" through the time. The physical world is constructed by the interaction of time and space, time and space are integrated, rather than independent coexistence. Time and space are transformed by "info enzyme", "info enzyme" is the space-time catalyst (in physical chemistry). The intermediate state of the transformation is the existence form of meta consciousness / meta energy. From this fusion and extension forms the continuous evolution of everything. It should be pointed out that, in general theory of relativity, time is a one-dimensional continuum after the decomposition of time and space (3+1) is done, time is a dynamic variable. In quantum mechanics, time is an external parameter, not a kinetic operator. According to canonical quantum gravity theory 4 ,the quantum states do not evolve over time, that is the whole concept of time as a one-dimensional continuum does not exist at all, and time is replaced by the relationship between "partial observable measurement" . In the "Theory of Relative Space-Time" in this paper, "time" does not exist, the "time" ---is an infinitesimal spacing in a universe energy matrix (limit case), and space is topological. As the physical world is concerned, the three-state superposition topological state formed by the entanglement of time and space (Tri-state Logic) has become the origin and evolution of everything in the physical world. The meta-level physical form of Tri-state overlay transformation: expansion, contraction and equilibrium; the meta-level physical form of Tri-state overlay transformation: energy, material and information; the meta-level mathematical physical form of Tri-state overlay transformation: positive, negative and uncertain state; the ontology of all things in the world is reflected by the Tri-state overlay transformation. From the point of view of mathematical physics, the deep understanding of the known physical world to human beings has gone through three stages: Cartesian coordinate system invariant (distance invariant), Einstein's inertial system invariant (speed of light invariant) and Euler topological invariant (Euler Characteristic). Today, we are at the beginning of the third stage. The starting point of "Theory of Tri-state" is not to focus on the phenomenological level of things, but on the deconstruction of the meta-level of things. In other words, it is to explore the deterministic reasons that behind the uncertainty presented by the physical world from a new perspective. As human society enters the 21st century, the exploration of new phase, time crystal, dark energy, dark matter, black holes, the origin of celestial bodies, the origin of celestial life and the origin of the universe has also penetrated into the depth of space-time meta-level. Similarly, the study of human brain in life science has entered the level of bio-topological effect consciousness and meta-cognition. Therefore, it is necessary to carry out the research and construction of meta-level scientific methodology system. This is also an original intention of writing this paper.  Ontological Cognition Emergence, Uncertainty, Equilibrium, Continuity, Self-organization, Non-decomposition. Six Fundamental Laws of "Theory of Tri-state ":  Law of Causality Everything has its premise and reason. 
 Law of Duality Everything has a definite "positive" or "negative" state, at the same time, there is also an uncertain intermediate state.  Law of Holographic Everything has a definite space-time nesting state.  Law of Periodic Everything only has its own cycle frequency of self-energy.  Law of Emergence In the same space-time, a state of something may be triggered by a continuous process of activity at its own level, leading to another state from a higher level.  Law of Now In the same time and space, something may be in both one state and another at the same time, that is, an indeterminate state. From the perspective of ontology hierarchy, the space-time world is composed of physical layer, phenomenal layer, conceptual layer and abstract layer interlaced with each other; from the perspective of ontological cognitive thinking, the space-time world is composed of the interlaced emergence of the real world, the information world, the conscious world and the thinking world. From the perspective of quantum topology 5 , there are four categorical properties in the topology of the space-time world: 1. Aggregation Coefficient 2. Path Length 3. Folding Order 4. Spin Degree Based on the meta-level logic, the space-time world forms a cluster of structural networks and functional networks with spatial scaling and topological phase transition by self-organizing space-time interactions and transformations(specification fields). Whether it is microscopic particle spin, DNA knot, protein folding, macroscopic cluster turbulence, galaxy rotation; macroscopic cluster turbulence 6 , it is all topological in ontology. If use the language of physics, the phenomenon of the specification field in the space-time world is described by the concepts of phase, symmetry and conservation law, in which Yang-Mills theory reveals specification in-variance. However, in the language of mathematics (differential geometry), the specification field of the space-time world is a complex "manifold" (fibrous plexus), and the intrinsic feature of a manifold is its topological in-variance, that is, Euler Characteristic. Further, in the differential geometry of a fibrous plexus, all spaces are maps of differential manifolds, are differentiable, and have a Jacobian matrix of highest rank everywhere. From this, the integral equation 7 is derived: where C is a numerical factor, o  is a differential form, X represents a basic closed chain of the bottom manifold, X W  equal to the X Euler-Pangaray representation number (topological invariant). Not long ago, a mathematical conjecture derived from 1911 ---the internal square problem (or the Rectangular Peg Problem) has been conquered, which proved that "For every smooth Jordan curve γ and rectangle R in the Euclidean plane, we show that there exists a rectangle similar to R whose vertices lie on γ . The proof relies on Shevchishin's theorem that the Klein bottle does not admit a smooth Lagrangian embedding in C2." 8 The proof method is based on the topological invariant properties of the spatial rotation overlay of the Mobius band in topology. As we all known that, the elementary geometry has proved that any square has an outer circle and an inner tangent circle. Therefore, based on the proof of "Internal Square Problem", if the square and circle are nested infinitely, it can be proved that the meta-structure of an energy initial value is a closed-loop chain. 
Therefore, it can be inferred that the infinitesimal space-time of the universe is formed by infinitely nested energy spheres in a discrete way, which we call it: the Cosmic Meta-energy Matrix. The so-called evolution of the universe (expansion and contraction) is actually the transformation of space-time aggregation at an infinite level. As a result, in this paper, based on the above integral equation (1) , by dimensional analysis we use operator i instead of C to represent information(clustering coefficient of topology), operator ) (x X to represent meta-energy (strong -weak), and operator T replaces   iT  S (2) Space is equal to the product of time and information. In other words, from point of view of quantum physics, corresponding to the Einstein's mass energy equation, time is mass, space is energy, and the essence of mass energy transformation is the mutual transformation of time and space, and this can also be seen as a new cognition of the physical world, i.e., the arbitrary reciprocal scale factors between space and time: information (i). The core of "Theory of Tri-state"---Tri-state Logic Before discussing the Tri-state Logic, it is necessary to make a very simplified illustration of the logical thinking pattern of human development to this day (see Figure 3 below): Starting from the meta-point on the leftmost side, the four graphs represent naive logic, formal logic, dialectical logic and Tri-state Logic (new). In the mathematical way, naive logic is single valued (numerical value), formal logic is single multi-valued (algebraic), dialectical logic is multi-valued (function), and Tri-state Logic is multiple polymorphous (topology). From the point of view in mathematical physics, the development of naive logic to Tri-state Logic is a space-time conversion from simple discrete energy points and lines to complex continuous energy bodies. In the other words, the binary rules of the law of contradiction, and the law of excluded middle in traditional logic are incompatible with the law of Tri-state Logic. Based on the study of traditional monotonic logic, non-monotonic logic, multivalued logic and modal logic, and in combination with the scientific and philosophical view of the "Theory of Thi-state", we have developed a new form of meta logic, Tri-state Logic (see Figure 4 below): Its dominant property is the causality and symmetry of space-time, and the invisible property is the holography and superposition of space-time. : Cell, an infinite loop nested meta meta logical mode. From the point of view of life science, the complex evolution of cells is the entanglement of living DNA, that is, the writhing number of a curve in Euclidean 3-space, introduced by Calugareanu (1959-61) and named by Fuller (1971) 9 . From the mathematical point of view: In 2011, Peter Scholze proposed a mathematical concept called "perfectoid spaces" 10 , which combines topology, Galois theory, and p-scores, where the p-integer is modular based. For example, we classify integers by a higher power of 3, and 2 3 (9) integers modular to 3, 9 and 27 are stacked layer by layer like a tower, we can build a tower with an infinite number of layers, each three times as many as the one below it, and that pattern will continue, this also formed the "perfectoid spaces" proposed by Schultz. In this paper, the mathematical concept of "perfectoid spaces" can be used to describe the cycle nested superposition mode of "Tri-state Logic". 
From the physical point of view, Maurray Gell-Mann proposed the quark model ((the simplest three-state representation of the SU(3) group) in 1964, where that each baryon consists of three quarks (or anti-quarks), and each meson consists of two quarks (or anti-quarks), where gluon is the propagating particle with strong interaction between quarks, gluon field is also SU(3) group (symmetric), it has 8 generators (8th order Lie group), and gluon spin is 1. 11 From the evolution of basic particles to the process of Gell-Mann defining Quark to the unification of weak interactions and electromagnetic forces, a complete standard particle model of physics can be seen. In this paper, it can be the display image of "Ti-state Logic" at the level of matter element in the presentation layer. The ontological form of Tri-state Logic (see Figure 5 below) in is inspired by Hans Reichenbach's "Philosophic Foundations of Quantum Mechanics" 12 , the other two are by the ancient Chinese Yang Xiong's "The Book of Taixuan" 13 and Buddhism's three-branch logic. At that year, the emergence of Hans Reichenbach's Thee-Value Logic (also known as quantum logic) was also an attempt to describe and explain the logical basis behind quantum mechanics, but unfortunately, it did not stand. In contrast, the essence of Tri-state Logic is to change the traditional three value logic's "numerical" expression to "state" expression, from the original "true | false | uncertain value" to "positive | negative | uncertain state". It seems to be a transformation from "value" to "state", however, the essence is a meta-logical form closer to the real existence of the physical world. The Tri-state Logic ring is a cycle-by-cycle meta-space operation, which evolves from meta-state, dual state and neutral state. The neutral state is an overlapping state of three states, whose constitutive form is equivalent to the Borromean Rings originating from ancient Hinduism (the three rings do not interact with each other). In the field of philosophical world, the ancestor of Buddhism, Sakyamuni, said that there are three phase of cognition: the transformation, the karma and the truth. The ancient Chinese philosopher Laozi said, "Tao produces one; one produces two; two produces three; and three produces everything." 14 In a 1977 paper by Chenning Yang 15 , the internal relationship between magnetic monopole and ordinary plexus and extraordinary plexus is involved: "Why is electromagnetism without monopoles "trivial"? We cab gain some understanding by looking at a paper loop and a Moebius strip. If they are cut along the dotted lines, each would break into two pieces. Looking at the resultant pieces, we cannot differentiate between the two. The paper loop and the Moebius strip are different only in the way the resultant pieces are put together. For the latter, a twist of one of the resuitant pieces is necessary. The difference between a trivial and a nontrivial bundle resides only in the processes of joining: for the nontrivial bundle, a twist is needed in the joining process... If there is no monopole, S= 1, and the bundle is trivial. If there is a monopole, S = 1, and the bundle is nontrivial. (We may describe the nontrivial nature by saying that a twist of phase is necessary.)" The "S" value mentioned in this paper is derived from the Schrodinger equation solution: "The Schrodinger equation for a electron in the monopole filed is thus where and are, respectively, the wave functions in two regions. 
The fact that the two vector potentials in these two equations are different by a gradient tells us, by the well-known gauge principles, that and are related by a phase factor transformation From this we can extend to the topology of Tri-state Logic: From the theory of knots, we know that, Figure 6a is a trivial chain with a single direction chirality, no phase twist; Figure 6b is a nontrivial chain ring (Mobius Ring) based on a trivial chain ring, it has different chirality (black arrows pointing forward and backward) at the same time, that is, the phase is twisted. Corresponding to the positive and negative uncertain states of the Tri-state logic, the red triangular loop in the graph represents the uncertain states in the three-state logic loop, and the left and right black arrows represent the positive and negative states in the three-state logic loop. The uncertain state is an intermediate state, which exists in the torsional process interval of topological phase transition. In this paper, one point needs to be very clear: in the micro-world, the topology of Tri-state Logic does not mean that particles at the micro-particle level are topological, but that the space-time fields formed by micro-particles are topological. At the same time, from the perspective of ontology cognition (reality) of the quantum world, we boldly assume that Tri-state Logic is the "meta-logic" of the quantum world (quantum states have reality), which is the logic basis behind the superposition of quantum states, quantum entangled states, and quantum phase transitions. The "meta-states" of the three-state logic correspond to the "Quantum Form"; the "dual state" corresponds to the "entangled state"; and the "neutral state" corresponds to the "superimposed state". It should be pointed out that Harrigan and Spekkens provided a categorization of quantum ontological models in 2010 16 . Pussey et al. 17 proved that if a quantum system satisfies the ontological model of quantum mechanics and the assumption of independent preparation of quantum states, then a quantum state is real ---there is no intersecting compact set of the ontological distributions corresponding to any two non-orthogonal quantum states in 2012, and this conclusion is called the "PBR Theorem". However, from the scientific and philosophical point of view, PBR theorem only reveals the relationship between representation (quantum state) and reality (ontological state) in quantum mechanics, not the reality of quantum state. In order to understand the following description based on the Tri-state Logic mathematical model, it is necessary to briefly recapitulate the superposition principle of quantum state and the principle of entangled state: Principle of Quantum State Superposition Assuming that a quantum object has two definite possible states, 0 or 1, usually written as: |0>, |1>, because the quantum state (written as | ψ >) is uncertain, it is generally not in the |0 or 1 definite state, it can only be in the state of superposition of these two definite states according to some influence, expressed mathematically as: , where α and β are complex and satisfies Principle of Quantum Entangled State Assuming that the quantum object is two electrons with different spin directions --electron 1 and electron 2, its spin properties are mathematically expressed as: , as the tensor product of two quantum states, which is the entanglement of two electrons. 
Thus, by the way, it may also be possible to find some clues within the category of "Tri-state Logic" for the long-standing question of how the uncertain micro quantum world evolves into the definite macro classical world.

The basic principles of Tri-state Logic based on quantum thought:
- The implicit order or explicit behavior, and the subjective or objective cognition, of objects are entangled with each other.
- The space-time ontology is an overlapping state of energy, matter and information, and time and space can be transformed and deformed into each other.
- Subjective and objective world activities cannot be separated from the space-time context of quantum phase transitions.

The basic concepts of Tri-state Logic consist of the influence of the participants (independent events, non-independent events, neutral events), the hedging of space-time (strength, emergence, balance), and the scaling of space-time (endogenous, synchronous, topological). From the point of view of traditional mathematical logic, the "cell" of Tri-state Logic is neither a Boolean algebraic mode nor Frege's logical-function mode. In other words, the expression of Tri-state Logic is an abstract artificial language, but it is not composed of the traditional semantic and syntactic rules; it is a logical topological mode similar to the natural clover form (see Figure 7 below).

As is well known, according to the famous Li-Yorke theorem 18, "period three implies chaos" is an important theorem for characterizing deterministic chaos. Borrowing it for the concept of "Tri-state Logic", we can see that it is precisely a converse argument for the clover-like ontological structure of Tri-state Logic. There is another physical validation: in 1975, Faddeev proposed a stable soliton solution within the framework of the Skyrme-Faddeev model 19, whose topological structure can be described by the Hopf invariant (Hopf charge, abbreviated QH), so this kind of topological soliton is called a Hopfion. Three-dimensional real space endows Hopfions with various topological structures, which can form ring, chain and knot structures; the corresponding topological properties are described by the homotopy group π₃(S²), and the topological invariant (the Hopf charge QH) can be understood as a linking number. Bogolubsky 20 first proposed such a model in 1988: in a Heisenberg model on a cubic lattice, nearest-neighbor interactions over four layers are introduced, where J_ij is the nearest-neighbor interaction between different lattice sites. In this model a stable Hopfion can be obtained, which can be approximated as a damped magnet model. In 2017, Sutcliffe 21 used this damped magnet model to obtain a variety of stable magnetic Hopfion structures, including a trefoil-knot-like Hopfion with QH of 10, demonstrating its rich spin structure.

The initial evolution state of "Tri-state Logic" can be described by the following two basic equations.

Mathematical Expression 1
Derived from the torus-knot equation 22 in knot theory (the derivation is abbreviated), in which p and q are fixed integers and k is an arbitrary integer (Equation 4); Figure 8 is obtained from it.
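Equation (4) itself is not reproduced in the text; purely as a point of reference, one common parametrization of a (p, q) torus knot in knot theory (which may differ from the author's exact expression) is:

```latex
% Standard parametrization of the (p, q) torus knot on a torus of radii R > r,
% with p, q fixed coprime integers and t running over [0, 2\pi].
\begin{aligned}
x(t) &= \bigl(R + r\cos(q t)\bigr)\cos(p t),\\
y(t) &= \bigl(R + r\cos(q t)\bigr)\sin(p t),\\
z(t) &= r\sin(q t).
\end{aligned}
```

The arbitrary integer k of the text could then index discrete sample points t_k along this closed curve, though that reading is an assumption rather than something stated in the paper.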
Mathematical Expression 2
From knot theory, the topological invariant of a knot also represents energy 23. It is known that the energy number of the knot invariant of a closed loop is the number of crossings of the knot when projected onto a two-dimensional plane. (Note: the Chern number can be regarded as the integral of a certain exterior differential form over a manifold, and it is a topological invariant. In condensed matter physics, the total Berry curvature over the two-dimensional phase space of the quantum Hall effect 4 can be obtained as C = (1/2π)∫ F d²k, where C is an integer called the first Chern number, named after Shiing-Shen Chern.) The expression for "spatial" frequency in Fourier optical theory 24 is invoked here: if the above C in the energy expression of the knot invariant is connected into a curve, it is a waveform, in other words a frequency. As a result, the spatial frequency f and C are equivalent in the "spatial" sense. Thus the space-time ontology equation of "Tri-state Logic" is derived (the derivation is abbreviated). Mathematical Expression 1 is a description of spatial coupling and continuity; it is the constitutive equation of an object. Mathematical Expression 2 is a description of spatial causality and energy; it is the ontology equation of an object. In other words, the ontology equation can also be regarded as the mathematical expression of the logic of human conscious thinking (the meta-consciousness equation).

(Digression 1: The ontology equation based on Tri-state Logic can also be used to prove the Riemann Conjecture concisely. The Riemann Conjecture states that all the nontrivial zeros of the Riemann ζ function are located on the critical line Re(s) = 1/2. The logical path of the proof is as follows: it can be concluded that the energy series E tends to infinity and is distributed in a discrete form in a coaxially overlaid space, thus extending infinitely over the overlapping critical line; therefore all the nontrivial zeros lie on the critical line.)

(Note: why is there such an episode about the proof of the Riemann Conjecture in this paper? It is a tribute to Riemann, because of his view of the natural forces of the universe: natural forces are caused by the distortion of geometric structure. This coincides with the author's point of view, and the thought behind the above argument is likewise consistent with Riemann's.)

The ontology of Tri-state Logic aims to reveal the law of meta space-time movement by which things transform from one form into another (quantum phase transition), that is, the coupling of time and space in the development of things and the evolution of the direction and state of their continuity. It is universal to all kinds of things under the same constraints, that is, a unified meta-logic. At the same time, it is also a kind of meta space-time order, an a priori order of the world. Tri-state Logic is a space-time meta-mode that shows the will of the universe; it recognizes the current and future meta space-time state of things based on the evolution of their primitives. According to the basic principles of the "Theory of Tri-state", the internal structural attribute of Tri-state Logic is an interwoven network of multiple causality and holographic space-time. The core characteristics of Tri-state Logic are space-time, causality and uncertainty. Among them, space-time includes non-linearity, periodicity and symmetry; causality includes diversity, circularity and nesting; uncertainty includes polymorphism, emergence and superposition.
The meta-information inference mode of Tri-state Logic: according to the ontological regularities of the meta-levels (the Law of Causality, the Law of Duality, the Law of the Holographic, the Law of the Periodic, the Law of Emergence, the Law of the Moment), cognitive modeling is based on the four-in-one integration of the scale, topology, context and orientation of space-time. During reasoning, the concept of relative space-time is established, and hence multi-timescale, hierarchical, multi-loop, self-organizing and self-adapting behavior. The main forms of expression are analogy, induction, simulation, reflection, association and prediction. Going a step further, through the mathematical-physical model based on Tri-state Logic, the whole process of life activity in the life sciences can be simulated and reproduced effectively, and the laws of life topology and time-phase change, together with the forms of transformation among matter, energy and information, can be reconstructed. On this basis, artificial intelligence is used to describe the cognitive model of the human holographic life system, characterizing the state of life activity, mental state, health level, degree of disease, treatment effect and outcome with qualitative and quantitative positioning.

Summary
If we examine "Hume's Question" 25 from the perspective of cognitive science, we find that Hume's distinction between "is" and "ought" is actually the divide between "fact propositions" and "value propositions", and more essentially between the methodologies of "natural science" and "social science". The cognitive-science system before "Tri-state Logic" lacked a unified cognitive logical ontology, that is, it could not integrate the cognitive research of natural science and social science; the "Tri-state Logic" ontology covers both, providing quantitative causal reasoning as well as qualitative perceptual understanding. In other words, the core ideology of "Tri-state Logic" is embodied in the evolution of space-time meta-structure and situation.

Chapter 2 The Logic Fundamentals of "Machine Consciousness"
From the point of view of mathematical-physical computing, Tri-state Logic is a topologically computable logical framework system, which consists of linear temporal logic, nonlinear spatial logic and an information chain. The linear temporal logic is an extension of Amir Pnueli's 26 linear temporal logic with an added concept of an "indeterminate time zone" (the "uncertain state"); the nonlinear spatial logic is based on the geometric (knot) invariant (curvature and potential function) polynomials of the nonlinear dispersion equation of the Ricci soliton in soliton theory 27, with the contraction, stability and expansion modes of the Ricci soliton on a complete non-compact manifold corresponding to the Tri-state Logic ontology. At the same time, a non-parametric information-stream model with "nonlinear time series" characteristics is combined with these. In short, as far as machine intelligence is concerned, the positioning of Tri-state Logic is a description of the underlying abstract logical modal structure of a cognitive system (an intelligent machine with self-consciousness, thinking and behavior). It can be regarded as a universal logic, and this generality also makes it, at present, a basic supporting framework for artificial general intelligence (AGI). We say that the ontology of everything has its structure, but the underlying logic that supports that structure is often unclear.
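For readers who want a concrete handle on the "uncertain state" added to the classical connectives, the following sketch implements Kleene-style strong three-valued negation, conjunction and disjunction over the states positive, negative and uncertain; it is offered only as a minimal reference model, not as the formal content of Tri-state Logic itself.

```python
from enum import Enum

class State(Enum):
    POS = 1    # positive state
    UNC = 0    # uncertain state
    NEG = -1   # negative state

def neg(a: State) -> State:
    # Negation swaps positive and negative; the uncertain state is its own negation.
    return State(-a.value)

def conj(a: State, b: State) -> State:
    # Kleene strong conjunction: as negative as its most negative argument.
    return State(min(a.value, b.value))

def disj(a: State, b: State) -> State:
    # Kleene strong disjunction: as positive as its most positive argument.
    return State(max(a.value, b.value))

# The uncertain state propagates unless a definite state settles the result.
print(conj(State.POS, State.UNC))  # State.UNC
print(conj(State.NEG, State.UNC))  # State.NEG
print(disj(State.NEG, State.UNC))  # State.UNC
print(neg(State.UNC))              # State.UNC
```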
The Theory of Tri-state attempts to analyze and describe the underlying logic of the cognitive system as an ontology, and then to achieve the goal of constructing the cognitive system through cognitive modeling, mathematical models and system models at the upper level. It needs to be made particularly clear that the logical basis of "machine consciousness" in this paper refers to study at the level of meta-logic and meta-cognition, which belongs to abstract linguistic thought (the model) rather than to the implementation description of engineering technology (the algorithm). At present, the implementation of artificial general intelligence requires the construction of a new abstract logic layer, based on Tri-state Logic, within the system layer of the Turing machine, which we call the space-time hippocampus. This logical body is the force (endogenous: the internal drive) by which the machine generates self-consciousness. Its functions are described in mathematical terms as spatial scaling and topological phase transition (the topological excitation of knots); its physical mechanism is described in the language of this paper's relative space-time theory as the synchrony and diachrony of evolution. In the future, quantum computers will have a natural fit with this logic at the bottom level.

We believe that the machine brain is a physical (energy) system, a dynamic system with definite causality (what cognitive science calls a cognitive system and complexity science calls a complex system). Its operating mechanism can be simulated by mathematical and physical means, nonlinear differential equations and topological formulas. In the small world of the machine brain, the predictability of the overall behavior of the system depends on the initial premises, the information enzyme and the final conditions of its physical space-time operation. The object of study of machine consciousness in this paper is human-like consciousness, psychology and intelligence, which can be regarded as a cognitive-functionalist paradigm: constructing a new cognitive computing system through cognitive-dynamics methods, with cognitive modeling and simulation based on "Tri-state Logic". In other words, brain-function simulation (brain-inspired design) is grounded in the theoretical framework of "Tri-state Logic". It predicts the development of uncertain problems by expressing them as mathematical-physical processes of spatial scaling and topological phase transition.

According to the global neuronal workspace theory of the contemporary cognitive neuroscientist S. Dehaene 28, so-called "consciousness" is a mechanism for information sharing. The state mechanism in which an organism perceives an external object, its own body and behavior, or its inner feelings or thoughts, with explicit content, is called conscious access, or "awareness". It is a spirally rising cycle, not a simple repetition. In this paper, the concept of "conscious access" corresponds to the concept of "chain winding" in knot theory and to "helicity" in topological fluid mechanics. Dehaene also proposes three core functions that artificial intelligence needs in order to simulate consciousness successfully: a global-workspace-like information-sharing mechanism between programs, a brain-like learning mechanism within the program itself, and spontaneous behavior. The structural construction of the machine-consciousness device in this paper draws on Dehaene's thought to some extent.
In general, machines do not have biological drives and active mechanisms, but in this paper machine consciousness is a tightly coupled endogenous system within the machine brain. Its recurrence effects, emergence and uncertainty are determined by nonlinear dynamic forms within the limited physical constraints of the system. "Endogeneity" means a specific physical transformation caused by the ontology itself, with the ability to continuously transform any input object; it is not the imitation and accumulation of external existence, but the verification of internal cognition. Machine hardware is the carrier of machine consciousness, and machine consciousness is the carrier of machine thinking and behavior. In other words: the computer is a logical machine, and machine intelligence is based on machine logic.

It is well known that dynamics (including nonlinear dynamics, nonlinear optics, quantum optics, topological fluid dynamics, topological quantum field theory, and so on) studies how a physical system evolves from one state to another over time, how a particular state follows its trajectory, and the properties of these trajectories and their relationships. From the mathematical-physical point of view, a dynamic system (including a topological quantum system) can be considered an abstract topological geometric spatial structure (called state space in mathematics, phase space in physics, and associated with topological quantum phase transitions in topology), where each point in the space corresponds to a particular state of the system. The source of this continuous change of state is meta-energy exchange, that is, the spatial scaling and topological phase transition of the space-time energy field. It is emphasized here that this space-time topological scaling phase transition rests on the evolution of a new cognitive logic mode within the theoretical framework of machine consciousness in this paper, namely Tri-state Logic, which belongs to the category of cognitive dynamics (an interdisciplinary study of consciousness psychology and mathematical physics). At the same time, this is also the topological definition of machine consciousness in this paper.

The Logic Principle of "Machine Consciousness"
In short, the following account of the logical principle of "machine consciousness" can be regarded as a specific application of the "Tri-state Logic" model of the "Theory of Tri-state" in the field of machine intelligence. The origin of machine consciousness: based on the ontology of "Tri-state Logic", and drawing on the ideological frameworks of cognitive science, complexity science and consciousness science, the cognitive intelligence mechanism of the machine is constructed, that is, the machine's cognitive structure, its machine personality (consciousness, thinking, behavior). Continuing the "Theory of Tri-state" above: consciousness is a kind of field, a consciousness field, that is, a representational space-time field, and thinking is the N-class linking of information fluids within the consciousness field. This paper focuses only on the "machine consciousness" part of machine personality. Penrose 29 interprets consciousness as the result of the collapse of a quantum wave function in microtubules under gravitational forces; in some sense, this is what gives consciousness reality. We do not comment here on whether human consciousness involves quantum collapse.
However, Penrose's theory of consciousness contains the randomness, non-logical character and non-locality of quantum consciousness, and its introduction of quantum-computing logic gates corresponds to the "machine consciousness" of this paper to some extent. Therefore, the "meta-consciousness" logical model in this paper partially absorbs the views of Penrose's theory of consciousness, for which we would like to express our sincere gratitude to Professor Penrose. In addition, the study of condensed matter physics 30 shows that there is a deep relationship between symmetry and topology in topological phases: one kind of phase depends on the existence of symmetry, while the other has extraordinary topological characteristics even when no symmetry is present in the system. Furthermore, in passing from one quantum Hall state to another, the only change is the value of a topological invariant rather than the symmetry. At the same time, in topological quantum computation theory 31, the state space of non-Abelian quasi-particles is closely related to the topologically invariant knot polynomials of knot theory, which can be interpreted through Chern-Simons topological field theory and two-dimensional conformal field theory. This topologically protected degenerate space is the physical basis of topological quantum computing and can be used to store quantum information; at the same time, anyons can be braided to realize the universal logic-gate set required for quantum computing.

The difference between machines' consciousness is embedded in the "meta-consciousness" (the artificial consciousness, the machine-consciousness generator) of each machine. In other words, robots are different; they are not all carved from a single mold. The generation of a machine's meta-consciousness is not derived from equations based on mathematical axioms; rather, it rests on autonomous responses to internal and external causes and has an endogenous mechanism of self-emergence at the energy level. This makes the perceptual correlations of different machines different, and in turn leads to differences in the relevance of their behavior, that is, in what that relevance means to humans. Machine meta-consciousness has its own original characteristics: it is an untouched meta-state, and it is also the meta-point of same-frequency resonance of the energy field. Machine meta-consciousness can be regarded as the basic unit of human-like consciousness.

The basis of the Tri-state Logic model of machine consciousness. The Tri-state Logic model is the cognitive-logic basis behind machine personality. Its physical form is an endogenous space-time energy field. From the cognitive point of view of the conscious world, the "meta-state" shown on the left of the "Tri-state Logic Ontology Diagram" is the unconscious state; the "dual state" shown in the middle is the subconscious state; and the "neutral state" shown on the right is the conscious state. Its adaptability promotes the stable choice and directional choice of the space-time orientation state. Its symmetry forms the unity of the space-time state, and its asymmetry forms the plurality of the space-time state.

The situational concept model of machine consciousness (see Figure 9 below). The situation model is the concrete form of the Tri-state Logic model in a logic machine. Machine consciousness has three basic states: the degree of expansion, the degree of equilibrium, and the degree of contraction.
Among them, the degree of expansion refers to the extroversion of machine consciousness and its degree of outward antagonism; the degree of equilibrium refers to the stability of machine consciousness and its degree of balance toward the outside; the degree of contraction refers to the autistic, introverted degree of machine consciousness. All three are mirror images of human beings, and they are also directed, that is, they have the attributes of emptiness, purpose and direction. All of these are regulated by the dynamic topological structure of the situation model based on Tri-state Logic and by the self-organization and feedback of the stream of consciousness (SoC). From the point of view of the development of information theory, the information structure of consciousness is not a discrete collection of information in the traditional sense but a continuous stream of information, from beginning to end, boundless, from side to side. At the same time, it also has the chirality spoken of in physics and the directivity spoken of in cognitive psychology (cognitive psychology holds that psychological phenomena are intentional, that is, always pointing, in both the forward and backward directions). Generally, animal and plant consciousness refers to the force exerted by changes in the spatial environment and the passage of time; machine consciousness, by contrast, focuses on the formation of event perception, on internal and external factors, and on the state of space-time and of the recognized body. The initial form of machine consciousness is the unconscious state formed by the fusion of multiple cognitive causalities and the related energy fields.

"Machine Consciousness" Modelling 1
Firstly, the unified cognitive modeling thinking based on the Theory of Tri-state (Tri-state Logic / situation model) is, we believe, essentially the cognitive modeling of space-time (see Figure 10 below), because its connotation contains the following four points:
1. Objective Conformity: it can simulate the representations and processes of human thinking (meta-cognition) (law / logic);
2. Scaling Scale: it can simulate the time series of the real world;
3. Mass Granularity: it can simulate the behavioral correlations of human thinking;
4. Quantity Aggregation: it can simulate the trends of human group behavior.
The machine consciousness model can be regarded as the basic unit model of human-like cognition. That is, the concepts of "blocks", "perceptual objects" and "time gestalt" that serve as the cognitive basic units (cognitive basic variables) of Rational Choice Theory (RCT) 32 in cognitive science are transformed through the endogenous situation model to construct a unified human-like cognitive model. Gallagher notes in "Action and Interaction" that cognition cannot be explained by neuronal processes alone, and that an interdisciplinary approach constitutes a "Dynamical Gestalt" 33. Therefore, we present the following meta-model of "relative space-time" based on this concept of cognitive modeling (see Figure 11 below). "Relative space-time" includes scale, topology, context and orientation, and its kernel is the uncertainty and multivariate state of space-time. In other words, the task is to determine the cause, locate, and set (einstellung) the space-time situation at different space-time scales (coordinates). Secondly, from the point of view of complex systems, the characteristics of "relative space-time" are emergence, multiple time scales, hierarchy, multiple loops, uncertainty, nonlinearity, openness and topology.
This results in continuous changes of space-time, tight coupling, self-organization, adaptation, feedback effects, and so on. From this, the basic elements of mathematical modeling are naturally derived, that is, a cognitive computing model that accords with the above characteristics of the "relative space-time" meta-model. In this paper, however, we focus only on the mathematical modeling of "machine consciousness"; the mathematical modeling of machine thinking and machine behavior will be treated separately. The key elements of the mathematical modeling of "machine consciousness" are as follows:
- Construct the consciousness model on the endogenous Tri-state Logic / situation model: nonlinear, holographic, symmetric, overlapping, nested, hierarchical, and so on.
- Construct the consciousness model with cognitive causal effects, topological phase transitions, energy conversion, periodic loops, context, emergence, criticality and other mechanisms of state-information continuity, together with space-time information-stream storage modes.
- Define and describe the mathematical and physical methods of input and output, information processing, and state transformation and evolution within the consciousness model.
By analogy, time is algebra and space is geometry; time is linear and space is nonlinear; time is causality and space is energy. Causality and energy are the two major components of machine-consciousness modeling. Here causality is a condition, a period, a frequency and a time series; energy is an emergence, a state, an entanglement and an overlap. Machine consciousness is essentially a causality cycle and an energy change in "relative space-time". It is event- and model-driven, and its abstract conceptual expression is: space-time model * (causality stream + energy stream) = machine consciousness. In other words, the emergence of machine consciousness consists of two parts. First, the input stimulus stream induces linear and nonlinear responses in the dynamic consciousness model; then, when the consciousness model generates a positive-negative-neutral feedback loop and an output, that output is nonlinearly associated with the input stimulus source, which leads to the generation of a new feedback loop, forming a circularly nested endogenous interaction process, that is, machine consciousness. The first process follows the constitutive diagram of "Tri-state Logic", and the second process follows the ontological equation of "Tri-state Logic".

"Machine Consciousness" Modelling 2
Review: early Turing-machine simulation was based on the physical symbol system hypothesis (symbolism) for the information processing of knowledge: knowledge representation, knowledge reasoning and knowledge application, which was essentially a symbolic sequence-processing mechanism based on logic and rules. In other words, it is a mathematical-logic-based interpreter that performs the tasks it is given by interpreting the deterministic values of a series of inputs, and it is a passive executor. It gives a mathematical-logic interpretation of "what computation is". Since the 1970s, an artificial-intelligence research paradigm based on connectionism (a biomimetic structure: the neural network) has emerged, which takes the unit as analogous to the neuron and describes the cognitive process by the interrelation between units. It holds that the connection weights between input and output units can be continuously adjusted by learning without disrupting the overall information processing.
In other words, the mathematical idea behind the current deep-learning frameworks is essentially to combine differentiable computing units into a program and then adjust the program's parameters by gradient optimization so that it achieves a desired, clearly specified, known goal. From the point of view of mathematics, this belongs to the categories of combinatorics and computational mathematics. Traditional computer science (Simon, Minsky 1) constructs two features of the artificial-intelligence computing mode: representation and framed information processing. It interprets the question of "what information is": information is deconstructed into physical symbols as representations of text, images and thinking (abstract thinking), and the organization of information can be understood in terms of concepts, objects and events. Information processing uses a step-by-step procedural framework to aggregate and adjust information so as to achieve the desired results. However, this is only capable of identifying and solving deterministic problems (known objectives) in the real world, problems with known answers and classification choices over existing knowledge; it cannot deal with uncertainties (unknown objectives) or with dynamic, intrinsic, non-axiomatic logical problems. From the mathematical point of view, it simply represents various deterministic problems as convex optimization problems and asks how to solve them more efficiently. At the same time, Herbert Simon, in "The Sciences of the Artificial" 34, suggests that an intelligent system needs six functions: input, output, storage, copying, building symbol structures and conditional transfer. This hypothesis is valid at the level of perceptual intelligence, but not at the level of cognitive intelligence. The reason is simple: it has no intrinsic "emergence", which is the internal mechanism (force) that generates self-awareness. It is likewise a passive program-execution model, not a cognitive system with endogenous self-awareness and meta-cognitive functions (from a biological perspective, a life system). In other words, information systems address what the real world is; cognitive systems address why it is so.
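As a minimal point of reference for the "differentiable units adjusted by gradient optimization" characterization of current deep learning given above, the following toy sketch (assuming NumPy; the data and the single linear unit are invented purely for illustration) fits one unit to a fixed, known objective by plain gradient descent.

```python
import numpy as np

# Toy data: y = 2x + 1 plus noise; the "program" is one differentiable unit y = w*x + b.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 2.0 * x + 1.0 + 0.05 * rng.normal(size=x.size)

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    pred = w * x + b
    err = pred - y
    # Gradients of the mean squared error with respect to the parameters.
    grad_w = 2.0 * np.mean(err * x)
    grad_b = 2.0 * np.mean(err)
    w -= lr * grad_w          # parameters move toward the fixed, known objective
    b -= lr * grad_b

print(round(w, 2), round(b, 2))   # approximately 2.0 and 1.0
```

The point of the sketch is the one the text makes: the objective is given in advance, and the procedure only tunes parameters toward it.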
From the perspective of artificial-intelligence research paradigms, the study of machine consciousness in this paper belongs to the category of cognitive dynamics. It includes cognitive psychological models, dynamic models, topological models and so on. The mathematical basis of machine consciousness, that is, the mathematical scope of cognitive computing, includes fiber bundle theory, homotopy theory, algebraic topology, differential topology, dynamical systems, function theory (harmonic analysis), number theory and algebra (Lie groups), and so on. At the same time, it draws on a variety of mathematical-physics methods (quantum computing, fluid mechanics, Fourier optics, engineering mathematics, condensed matter physics, etc.). From the point of view of systems engineering, the machine-consciousness device involved in this paper is the kernel of the cognitive system; it is aimed at the machine's endogenous self-awareness and the mechanism of machine meta-cognition. The fundamental feature of the cognitive system is the evolution of the space-time situation based on cognitive computing and stream computing.

Cognitive computing attributes: the language package and the space-time pool. Stream computing attributes: unlike the data-processing mode of traditional information systems, it is an event-driven mode layered above the stream-data processing of information systems. The caching mechanism of the information stream uses a double-ring linked-list structure (intrinsic information and real-time information); the parsing mode of the information stream includes partitioning, grouping and segmentation; and the computing model of the information stream extends the boundary, sliding-window and attenuation models of stream data, going beyond clustering models to subordination, association and scale models. In cognitive systems, the information stream can also be called the event stream. From the biochemical point of view, the cognitive-computing structural attributes above resemble a biochemical signal: pheromones, also known as ectohormones, which have communication and guidance functions. The "information enzyme" (i) mentioned in the relative space-time theory of this paper (the space-time equation linking i, T and S) exists in the cognitive system in the form of the "language package".

Cognitive Algorithm Library and Framework of Cognitive System
In this paper, only the logical connotation of the cognitive algorithm library and framework is briefly mentioned; the specific algorithms will be described in another paper. In a nutshell, the cognitive algorithm library refers to the set of mathematical algorithms of the causality and energy models, which includes the emergence algorithm, context algorithm, hierarchical algorithm, loop algorithm, periodic algorithm, center-of-gravity algorithm, symmetry algorithm, causal-effect algorithm, energy sub-algorithm and pulse algorithm. The constitutive equations of the cognitive algorithms are nonlinear, and the framework of the algorithms must likewise be based on the "Tri-state Logic" model. The next step is to construct the modeling framework of the cognitive system: a space-time-oriented visual modeling and illustration language based on a space-time conceptual model, similar to the object-oriented Unified Modeling Language (UML).
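As an illustration of the sliding-window, attenuation-weighted event-stream caching described above, the following sketch keeps a fixed-size ring buffer of time-stamped events and aggregates the window with an exponential decay; the class and parameter names are placeholders chosen for this sketch rather than terms defined in the paper.

```python
import collections
import math
import time
from typing import Optional

class EventWindow:
    """Ring-buffer cache of time-stamped events with exponential attenuation."""

    def __init__(self, maxlen: int = 64, half_life: float = 5.0):
        self.events = collections.deque(maxlen=maxlen)  # old events fall off the ring
        self.half_life = half_life                       # seconds until a weight halves

    def push(self, value: float, stamp: Optional[float] = None) -> None:
        self.events.append((stamp if stamp is not None else time.time(), value))

    def aggregate(self, now: Optional[float] = None) -> float:
        """Attenuated sum over the current window (recent events count more)."""
        now = now if now is not None else time.time()
        decay = math.log(2.0) / self.half_life
        return sum(v * math.exp(-decay * (now - t)) for t, v in self.events)

# Usage: push events as they arrive, read the decayed aggregate at any moment.
w = EventWindow(maxlen=4, half_life=2.0)
for i, v in enumerate([1.0, 1.0, 1.0]):
    w.push(v, stamp=float(i))
print(round(w.aggregate(now=3.0), 3))   # older events contribute less than newer ones
```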
"Machine Consciousness" Modelling 3
Suppose a "kindergarten" thought experiment: a machine-consciousness device (a humanoid) has the ability to recognize the children and their friends playing around it, and can independently decide to hug or to avoid them. During the experiment, the real children cannot tell which one among them is the robot. The purpose of the "kindergarten" thought experiment is to verify the existence of machine consciousness. From the functional point of view of the machine-consciousness device, input to output is a closed loop with cyclic feedback of information.

Through direct and indirect stimulus inputs, together with the labeling mechanism of continuous time stamps and space stamps, combined with the surge of the meta-consciousness generator, and then through communication and interaction with the causality, energy, memory, common-sense and emotion containers, an information output of synesthetic correlation finally emerges. Among these, the central scheduling model is responsible for the matching and mobilization of exclusive emotions, the wake-up stimulation of specific space-time memories, and so on. It includes the consciousness circuit breaker, the focusing and scaling mechanism, and the synergy-enhancing mechanism. The meta-consciousness generator (see Figure 15 above) is a container (constructor) that builds meta-consciousness states on the basis of the "Tri-state Logic" ontology equation and the situation model. In its operating mechanism it is similar to the function of the anterior insular region of the human forebrain 35: the anterior insula plays a gating role in the process by which sensory information enters consciousness. There is a graded functional hierarchy from single-modality sensory processing (such as vision, hearing and touch) to multi-modal information integration (such as abstract thinking, higher cognition and decision-making), and the anterior insula is the middle hub of this hierarchy. On the one hand, it receives single-modality sensory information, filters it and prioritizes it; on the other hand, it regulates the switching between two important brain networks and allocates attention and cognitive resources for conscious processing. The meta-consciousness state it produces is a digital-analog state intrinsic to "Tri-state Logic". Its intrinsic model is shown below (see Figure 16).

From condensed matter physics we know that a topological phase has the characteristic that when a wave (for example, an electronic wave function) moves around a topologically nontrivial path, it acquires a phase upon completing the closed loop rather than returning to its initial state, and that its local dynamical excitation, that is, the emergence at the system boundary, is stable even in the presence of defects. In other words, the polarization and transition characteristics of a topological phase are topological in nature, and both have band structures with eigenvalues and eigenvectors. The "stability of the excitation" is a topological invariant that characterizes the band structure of the topology.

A. Hierarchical Model of the Meta-Consciousness Generator
From the point of view of topology (knot theory), meta-consciousness modeling is a hierarchical model with topological properties. The degree of complexity of the chains in a knot represents a hierarchical change in energy. The modeling process is an internal self-deconstruction of the topological structure and topological relationships of the initial space, out of which a higher level of topological morphology, topological structure, topological relationships and topological properties emerges. In other words, first of all, the initial input information at the perceptual level is topologically transformed and integrated to form the initial meta-consciousness state of the meta-consciousness generator: that is, a primitive trivial knot (meta-energy) is formed.
In the second step, using this trivial link ring as the base state, the topological form and topological structure of subsequent input information are matched against it; through this process an initial conceptual form is generated, that is, a composite topological structure (a first-order equivalent topological invariant). The second process deepens the concept: from the original conceptual form, a second-order equivalent topological invariant is produced by extracting and synthesizing the state features (energy enhancement) at this second level, which yields a second-order topological relationship. The meta-consciousness generator decomposes this second-order topological relationship to form a relatively stable sequential structure, the causal state diagram, and then gradually produces other forms of composite relationship from the causal states, that is, causal effects. The causal state diagram describes a causally invariant structure of the input information; this invariance is generated by the invariance of the topological relationships arising from the set of equivalence classes of the information. The composite states produced within the causal state diagram describe the invariance of composite topological relationships, that is, the invariance of second-order equivalence relationships and of logical relationships. After these two processes, in the third process a higher-level meta-consciousness concept (a third-order equivalent topological invariant) emerges from the meta-consciousness generator, forming a new higher-order topological energy body. Through the transformation of the circulation-loop mechanism and the situation of continuous energy activation, the generator's own intuitive consciousness feedback (positive-negative-neutral) to the information is formed, that is, emotion. Going further up, one enters a higher level of rational thinking with certain abstract concepts (omitted here). The operating-principle diagram of the meta-consciousness generator is shown below (see Figure 17).

Consciousness Field Generator
Using the language of differential geometry and quantum fluid mechanics, a fiber bundle is established on a mathematical manifold, and the section space of the fiber bundle is what we call the field of consciousness. (Note: the equation of motion describing momentum conservation of a viscous incompressible fluid, the Navier-Stokes equation for short 36, is introduced at the same time; it is a nonlinear partial differential equation. Ideally, the N-S equation can be simplified to the Euler equation of ideal flow.) The variables of momentum accumulation (in the x, y, z directions) and of flow (the x, y, z components) are supplied as external inputs 37. This creates a meta-consciousness state (constructor) based on the different "meta meta-consciousness generators" (the "situation models": expansion, contraction, equilibrium). In other words, the field of consciousness is a group of different vortex knots formed in the base manifold (the consciousness fluid). According to quantum fluid mechanics there are large and small vortices, and many small vortices can be nested inside a large vortex. The same is true of the consciousness field, where a large consciousness fluid is nested with many small consciousness fluids. The field of consciousness is denoted by the corresponding section space, whose index is the dimension of the corresponding structure group.
In order to connect with quantum mechanics and further with topological fluid mechanics, and to conform to the conventions of ordinary electromagnetism, the field ψ is simply a complex scalar, ψ = ψ(x₁, ⋯, x_n) ∈ ℂ (10). This ψ is the wave function of quantum mechanics in the Schrödinger equation; under the Gross-Pitaevskii equation (GPE) it is the condensate wave function of an electrically neutral quantum fluid (such as superfluid 4He), that is, the order parameter; and under the Ginzburg-Landau equation (GLE) it is the condensate wave function of a charged quantum fluid (a superconductor). Next, we use the fluid-mechanical formalism of quantum mechanics to construct the velocity field of quantum mechanics or of quantum fluids, in order to connect with the theory of topological fluid mechanics: u = u(x₁, ⋯, x_n) (11), where u is the velocity field, a vector field.

Fluid-mechanics representation of quantum mechanics: the electrically neutral quantum superfluid satisfies the GPE, which is a nonlinear Schrödinger equation; this leads to the Madelung formalism. Writing ψ = √ρ e^{iθ}, the original GPE is transformed into a fluid continuity equation plus an equation of motion similar to the Euler or Navier-Stokes equation, where u = (ħ/m)∇θ is the velocity field and P is the quantum stress. The singularity of the wave function appears at the positions where the density ρ = 0, that is, where the distribution of the matter field vanishes; mathematically it is expressed as a generalized function, the Dirac δ function. The locus of ρ = 0 is a singular line, which corresponds to the position of a vortex line. If these vortex lines are closed curves, fluid vortex knots are formed; depending on the way of closure, the knots formed can have different topologies.

In the theory of topological fluid mechanics, the core is the concept of helicity, H = ∫_Ω u · ω d³x (18), where Ω is the flow domain and ∂Ω its boundary, and where the vorticity ω = ∇ × u satisfies the condition of not flowing out of the boundary, ω · n = 0, with n the normal direction of ∂Ω. Liu and Ricca hold that the helicity is a diffeomorphism invariant, directly related to the topological numbers of fluid knots, and that it is the most important topological invariant 38. In other words, it is the Chern-Simons action whose structure group is an Abelian group; that is, the idea of Chern-Simons topological quantum field theory is used to construct fluid knot polynomials based on the helicity. As topological excitations, knots have relatively strong robustness; however, they are not absolutely stable: with the irreversible dissipation of energy, they will degenerate from complexity to simplicity, until they become trivial and disappear. Liu, Ricca and Li pointed out that, in the absence of external interference, a complex knotted system will degrade and dissipate spontaneously through cascade degradation 39; the degradation process may follow different paths, among which the shortest has the greatest probability of occurring.
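As a numerical illustration of the helicity integral H = ∫ u · ω d³x introduced above, the following sketch (assuming NumPy; the grid resolution and the ABC test flow are choices made only for this example) evaluates H for a periodic Arnold-Beltrami-Childress flow, a Beltrami field for which ∇ × u = u, so the helicity is manifestly positive.

```python
import numpy as np

# Periodic grid on [0, 2*pi)^3 (resolution chosen for this sketch).
n = 32
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# ABC (Arnold-Beltrami-Childress) flow: a Beltrami field with curl(u) = u.
A, B, C = 1.0, 1.0, 1.0
u = np.array([A * np.sin(Z) + C * np.cos(Y),
              B * np.sin(X) + A * np.cos(Z),
              C * np.sin(Y) + B * np.cos(X)])

def curl(v, d):
    """Finite-difference curl of a vector field sampled on a uniform grid."""
    dvz_dy, dvy_dz = np.gradient(v[2], d, axis=1), np.gradient(v[1], d, axis=2)
    dvx_dz, dvz_dx = np.gradient(v[0], d, axis=2), np.gradient(v[2], d, axis=0)
    dvy_dx, dvx_dy = np.gradient(v[1], d, axis=0), np.gradient(v[0], d, axis=1)
    return np.array([dvz_dy - dvy_dz, dvx_dz - dvz_dx, dvy_dx - dvx_dy])

omega = curl(u, dx)
helicity = np.sum(u * omega) * dx**3   # H = integral of u . omega over the box
print(helicity)                        # close to 3 * (2*pi)^3 for A = B = C = 1
```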
At the same time, from textbooks in the field of turbulence, we learn that large vortices are the source of energy and small vortices are responsible for energy dissipation. The large-scale vortices obtain energy from the outside and pass it to the small-scale vortices; the small-scale vortices act like energy-consuming machines, dissipating all the turbulent kinetic energy into heat. The inertia of the fluid acts like a transmission machine, continuously transferring the energy of the large-scale vortices to the small-scale vortices. In short, our aim is to label and identify knots with sufficient topological invariants, and then to implement knot coding, knot resolution, knot maintenance and knot reconnection within the framework of computer algorithms.

Meta Meta-Consciousness Generator
The initial state of meta meta-consciousness is a continuous minimum-energy (ring) state (a trivial knot); specifically, the situation model based on "Tri-state Logic" docks seamlessly with the solution of the output equation of the Field of Consciousness Generator (FCG), a nonlinear vibration equation. Its equilibrium points can be divided into two categories, stable and unstable. The difference lies not in the state of the equilibrium point itself, but in whether, when slightly displaced from it, the system tends to move back to the equilibrium point, keeps moving near it, or moves further and further away. Accordingly, equilibrium points are classified as asymptotically stable, merely stable, or unstable, the first two also being called stable equilibrium points 40. The situation model corresponding to "Tri-state Logic" is the dynamic state of expansion, contraction and equilibrium.

Solution of the Ontology Equation of Tri-state Logic
Meta meta-consciousness is the changing trend of the frequency and amplitude of the vibration during continuous, autonomous machine perception (external-internal / direct-indirect stimuli). It consists of a time dimension (dominant frequency) and a space dimension (dominant amplitude). In the time dimension, the presence of a frequency denotes the logical "1" state and its absence the "0" state; in the space dimension, the presence of an amplitude denotes the logical "1" state and its absence the "0" state. In the implementation of the algorithm, this is a conversion from real space to frequency space, and its core is the Fourier transform. The classification and location information streams of "human and animal" input from the outside are converted into the logical "0" and "1" states respectively, together with the crossing numbers of prime knots, and the prime knots corresponding to those crossing numbers are then reclassified. Next ("human" being one information stream and "animal" another), through the arrangement and combination of N sets of 8-bit quantum error-correction codes outputting "0" and "1" respectively, the Tri-state Logic ontology equation, combined with the solution of the output equation of the meta-energy generator, is used to derive a continuous meta-consciousness (two-way) binary situation flow. Based on the long-established braid group (braid matrix) and correlation models of knot theory, as well as the theory of topological quantum computing, we can formally regard knot quantum computing as an extension of quantum error-correcting codes 41 analogous to conventional quantum computing 42, rather than merely using quantum error-correcting codes to correct errors.
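As a minimal reference for the "real space to frequency space" conversion mentioned above, the following sketch (assuming NumPy; the sampling rate and test signal are arbitrary choices) uses the discrete Fourier transform to read off the dominant frequency of a sampled signal, the presence of which would mark the logical "1" state in the time dimension.

```python
import numpy as np

fs = 100.0                               # sampling rate in Hz (chosen for this sketch)
t = np.arange(0, 1.0, 1.0 / fs)          # one second of samples in "real space"
signal = np.sin(2 * np.pi * 7.0 * t)     # a 7 Hz oscillation

spectrum = np.fft.rfft(signal)           # conversion to frequency space
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)

dominant = freqs[np.argmax(np.abs(spectrum))]
print(dominant)        # 7.0 -> a frequency is present: logical "1"
print(dominant > 0)    # True
```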
The left diagram (see Figure 18 above) is the eigen-diagram of the "Tri-state Qubit"; it contains three "Tri-state Quantum Bits" which, at any moment, are each in one of the three uncertain states and are located at equidistant points on the circumference. These three entangled, dual "physical" quantum bits are encoded into one "logical" qubit corresponding to a space-time point (the brown point) at the center of the circle, which is a quantum fixed point. The right diagram is the ontology diagram of the "Tri-state Quantum Bits"; it contains two groups of three "Tri-state Quantum Bits" representing the determined possible states "0" and "1" respectively, with "0" being the midpoint of the line between the two "1"s, and "1" being the junction of the two lines. The space-time point in the middle (the green point) is the duality of the states "0" and "1". Clearly, the space-time points of the left and right diagrams coincide, that is, the center points are consistent (symmetric). Borrowing the conceptual symbols of photon polarization, two new kinds of graphical symbols are introduced, |¦ and _ _: the former represents the "0" ("absent") and "1" ("present") of the frequency, and the latter the "0" ("absent") and "1" ("present") of the wavelength.

Knot-Quantum Computation
Quantum computing tells us that 1 qubit has 2¹ states, 2 qubits have 2² states, 3 qubits have 2³ states, and n qubits have 2ⁿ states. In other words, one register of 3 quantum bits (made up of 3 atoms) can represent 8 basis states (|000>, |001>, |010>, |011>, |100>, |101>, |110>, |111>). The consciousness logic gate in the Meta-Consciousness Generator borrows the mode of the CCNOT (Controlled-Controlled-NOT) gate, which operates on 3 quantum bits in quantum computing. Its behavior is as follows: if the first two quantum bits are both |1>, a NOT gate analogous to the classical one is applied to the third quantum bit; otherwise, nothing is done. The first two quantum bits are the operators, and the third is the observed bit. A logic-gate group is composed of n such logic gates, and in this way the machine stream of consciousness is generated. Each quantum state is encoded, and the correspondence between the polarization states and the message sequences transmitted over the quantum channel is established accordingly.

Causality Model (see Figure 19 below)
The causal information stream at the input includes:
- Time: real or non-real moment (time stamp: interval limit).
- Space: topological phase transition (space stamp: nonlinear polarizability).
- Path: topological sort (bearing stamp: linear sequence).
- Situation: event situation (event stamp: rejection-attraction-steady state).
Causality is divided into linear causality and nonlinear causality. Linear causality is judged by probability (the timing of associated events), that is, by topological sorting; at the same time, Dehaene's global neuronal workspace theory 28 is used to classify the types of self-consciousness based on the results of the probability calculation. Nonlinear causality forms causal effects through intuition (the coupling of unrelated events), that is, through topological phase transition. The parallel evolution of the two modes produces causal emergence functions (separate, approximate, neutral).
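As a small illustration of treating linear causality as a topological ordering of events, as described above, the following sketch runs Kahn's topological sort over a directed acyclic graph of hypothetical events; the event names and dependencies are invented for this example.

```python
from collections import defaultdict, deque

# Hypothetical causal dependencies: an edge (a, b) means event a precedes event b.
edges = [("stimulus", "perception"), ("perception", "memory_lookup"),
         ("perception", "emotion"), ("memory_lookup", "response"),
         ("emotion", "response")]

graph, indegree = defaultdict(list), defaultdict(int)
for a, b in edges:
    graph[a].append(b)
    indegree[b] += 1
    indegree.setdefault(a, 0)

# Kahn's algorithm: repeatedly emit events whose causes have all been emitted.
queue = deque(sorted(e for e, d in indegree.items() if d == 0))
order = []
while queue:
    event = queue.popleft()
    order.append(event)
    for nxt in graph[event]:
        indegree[nxt] -= 1
        if indegree[nxt] == 0:
            queue.append(nxt)

# A valid causal order, e.g. ['stimulus', 'perception', 'memory_lookup', 'emotion', 'response']
print(order)
```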
From the point of view of energy aggregation and transformation, conscious energy is a multi-level, continuous spiral loop; the center point of the ring is the focal point and the center of gravity of the energy, representing the main excitation point of conscious energy. To expand a little, consciousness energy, thinking energy and behavior energy are three nested concentric circles. From the static hierarchical point of view, the consciousness energy circle is the innermost layer, followed by the thinking energy circle and the behavior energy circle; the center point of the circles is the center of gravity of the energy circles. From knot theory, the greater the number of windings of a knot, the higher the energy level. The central circle of the consciousness energy circle is the operation of the situation model. Within it, the degree of expansion of the situation model represents the degree of aggression and extroversion; the degree of contraction represents the degree of autism and introversion; the degree of equilibrium represents the degree of stability and friendliness. The inner ring of the consciousness energy circle represents the "energy level"; the four outer rings represent the four relevance attributes, namely voice, pragmatics, semantics and context, which simultaneously carry the corresponding time stamp, space stamp, location stamp and event stamp. Here we also need to pass from the classical picture of quantum mechanics to the quantum picture represented by the wave function and the energy spectrum; the bridge between them is the quantization condition, that is, the numerical characterization of the energy spectrum in a given coordinate system, which is also independent of scale. From the mathematical point of view, the space-time ontology expression of the consciousness energy level is based on the ontology equation of Tri-state Logic: the inputs of the consciousness energy level are the four attribute variables above, and the output is the corresponding energy level.

Emotion Model (opposites, similarities, attention)
The emotion model (see Figure 21 above) refers to the book "Theories of Emotion" 44 by the American psychologist R. Plutchik; its shape is similar to an inverted cone, and its kernel is the causality-energy model. The eight sectors of the innermost ring on the cross-section of the cone represent the eight basic emotions; the eight sectors of the middle ring represent 24 kinds of complex emotions; and the eight sectors of the outermost ring represent the associated attention emotions. Adjacent faces of the eight sectors have a certain degree of similarity. The Arabic numerals 1|-1, 2|-2, 3|-3, 4|-4 represent opposite emotional indices. Emotion is a kind of energy and also a kind of frequency. The distance from the bottom tip to the top surface of the cone represents the degree of emotional intensity from weak to strong, which is determined by the energy level of the energy model. The emotional orientation of the inner and outer rings of the transverse cross-section of the cone is determined by the emergence function of the causality model (see Figure 22 below). The emotion model has an associated motivation model built in.

Space-Time Memory Model (events, text, pictures, audio and video)
Based on the knot-quantum computing model described above, the eigenbody of a stream-of-consciousness code is a set of knots with a particular topology. In the space-time memory model, while the emerging knot set is stored, the topological feature of the knot set, namely its topological number or topological invariant, is also stored synchronously as an index. The space-time memory model (see Figure 23 below) exists in the form of a memory map. The structure of the memory map refers to the twelve-tone equal temperament of music 45.
An octave is divided into twelve equal parts, and each semitone step corresponds to a frequency ratio equal to the twelfth root of two (2^(1/12)). The space-time event stamps and situation information in the memory map are stored, in the form of a three-dimensional code, in the time memory pool and the space memory pool respectively, and are combined into a complete event-plus-context memory stream code. The stream-of-consciousness code combines the DNA identification method of molecular biology 46 with a three-level structural mode (the helices of the two DNA strands cross and arrange into an ordered strand with complex topology, that is, a knot-like property). The three-level structure represents the space-time state of everything: the past, the present and the not-yet-realized.
Level One Structure: a single stream-of-consciousness code in a simple sequential chain. (Time | Space)
Level Two Structure: two stream-of-consciousness codes in a simple sequential chain. (Time + Space)
Level Three Structure: two stream-of-consciousness codes in a complex topological sequential chain. (Folding Time and Space)
The storage of the stream-of-consciousness codes of the above three-level structure is completed by De Bruijn sequences. Mathematical expression: B(4, 3), sequence length 64. [Note on the De Bruijn sequence pattern: B(k, n) is a cyclic sequence over an alphabet of k symbols in which every possible string of length n occurs exactly once as a (ring-form) subsequence; the length of the sequence is k to the power n.] The core part is a DNA Moebius-loop model based on the energy model, in which the memory space states of the past and the present are distinguished by the left- or right-handed chirality of the Moebius ring (cross storage).

Common Sense Model (parochialism, conservatism, extremeness | equivalence, similarity, repetitiveness)
The common sense model (see Figure 24 below) exists in the form of a common-sense map. The structure of the common-sense map refers to the natural cellular structure, which is stable and hierarchical. The natural attributes of the common-sense map are divided into three dimensions: personal, social and scientific. The information attributes of the common-sense map are the space stamp, orientation stamp, event stamp and time stamp. Among them, the personal attributes contain a role-transition state: the self, the alter ego and the id. This is a nested role state, that is, a role-transition state corresponding to the relevant context. Its computational mathematical model is based on the response-surface method used in materials science 47: by modeling the natural frequencies of the cellular topology, the topology model variables are established, a second-order natural-frequency table is built for the frequency-response characteristics of the structure, and the variable-coefficient matrix is established. At the same time, the model is optimized in a multi-objective, multi-level manner by a genetic algorithm, in which the "empirical" element exists as a growth function. The output of the common sense model is a set of conformity functions.
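As a check on the B(4, 3) figure quoted for the space-time memory model above, the following sketch generates a De Bruijn sequence with a standard Lyndon-word style algorithm (a generic implementation, not the paper's own storage scheme) and confirms that its length is 4³ = 64.

```python
def de_bruijn(k: int, n: int) -> list[int]:
    """Generate a De Bruijn sequence B(k, n) over the alphabet {0, ..., k-1}."""
    a = [0] * (k * n)
    sequence = []

    def db(t: int, p: int) -> None:
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return sequence

seq = de_bruijn(4, 3)
print(len(seq))   # 64 = 4**3, matching the B(4, 3) length quoted in the text
```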
Concluding Remarks
 The 20th century was the era of disciplinary differentiation and professional development, and the era of scientific reductionism and determinism. The computational and perceptual intelligence of the resulting artificial intelligence likewise evolved on the basis of the random theory of probability and statistics.
 The 21st century will be an era of interdisciplinary integration and multicultural integration, and the era of logical system theory based on the first principles of complex cognitive structure, which may be called the post-phenomenological era 48. The cognitive intelligence of artificial intelligence will also develop on the basis of the meta-level Theory of Tri-state Logic.
 According to the notion of "paradigm" in Kuhn's "The Structure of Scientific Revolutions" 49, we are creating a new "paradigm of cognitive scientific computing" and a meta-cognitive logic system.
16,826.6
2021-07-15T00:00:00.000
[ "Philosophy", "Computer Science" ]
miR-377-3p-Mediated EGR1 Downregulation Promotes B[a]P-Induced Lung Tumorigenesis by Wnt/Beta-Catenin Transduction Polycyclic aromatic hydrocarbons (PAHs), particularly benzo[a]pyrene (B[a]P), found in cigarette smoke and air pollution, are important carcinogens. Nevertheless, the early molecular events and related regulatory effects of B[a]P-mediated cell transformation and tumor initiation remain unclear. This study found that EGR1 was significantly downregulated during human bronchial epithelial cell transformation and mouse lung carcinogenesis upon exposure to B[a]P and its active form BPDE, respectively. In contrast, overexpression of EGR1 inhibited the BPDE-induced cell malignant transformation. Moreover, miR-377-3p was strongly enhanced by BPDE/B[a]P exposure and crucial for the inhibition of EGR1 expression by targeting the 3'UTR of EGR1. MiR-377-3p antagomir reversed the effect of EGR1 downregulation in cell malignant transformation and tumor initiation models. Furthermore, the B[a]P-induced molecular changes were evaluated by IHC in clinical lung cancer tissues and examined with a clinical database. Mechanistically, EGR1 inhibition was also involved in the regulation of Wnt/β-catenin transduction, promoting lung tumorigenesis following B[a]P/BPDE exposure. Taken together, the results demonstrated that benzo[a]pyrene exposure might induce lung tumorigenesis through miR-377-3p-mediated reduction of EGR1 expression, suggesting an important role of EGR1 in PAH-induced lung carcinogenesis.

INTRODUCTION Lung cancer has the highest morbidity and mortality worldwide. Late diagnosis and poor prognosis are the main causes of cancer-related death (1,2), and smoking is a common risk factor. Yet, over the years, the increased non-smoking-related risk associated with ambient air pollution has been frequently reported (3). Polycyclic aromatic hydrocarbons (PAHs) are widespread environmental pollutants that have been associated with carcinogenicity (in gas or particle phase) (4). The most widely studied PAH is benzo[a]pyrene (B[a]P), which is frequently chosen as a surrogate for evaluating the carcinogenic PAHs (5). B[a]P is a human group 1 carcinogen capable of initiating and promoting lung tumorigenesis (6). BPDE is the main biologically active metabolite of B[a]P that can form DNA adducts at guanine N2, thus exerting its carcinogenic effect (7). In cell-based models, B[a]P or its metabolite BPDE induces cell malignant transformation, while in mouse models it can induce lung tumors. Recent studies have shown that B[a]P-induced tumorigenesis involves DNA methylation, oxidative stress, cell cycle regulation, inflammation, apoptosis, and other biological processes (7)(8)(9). Yet, the exact molecular mechanism behind this remains unclear. Transient activation and regulation of immediate-early genes are considered primary cellular responses to an external signal in cancer development (10). Early growth response 1 (EGR1) is an immediate-early gene that can be directly activated by growth factors, hypoxia, ischemia, tissue injury, and apoptotic signals in different cells (11). Different roles of EGR1 have been observed in different tumors. EGR1 can have double-edged effects in tumor development. For example, EGR1 has an oncogenic function in prostate cancer by promoting cell proliferation and survival, but it can also act as a tumor suppressor in various cancers such as glioma, lung, and bladder cancer by directly upregulating PTEN, P53, and fibronectin (12)(13)(14)(15)(16).
MicroRNAs (miRNAs), endogenous short non-coding RNAs, have important functions in many developmental systems (17). miRNAs regulate gene expression in multicellular organisms by affecting both the stability and the translation of mRNAs. They can target the 3'-UTR of mRNA transcripts via complementary sequences and repress gene expression at the post-transcriptional level (18). Their deregulation has been closely related to cancer initiation and progression (19). miR-377-3p is a novel tumor-regulatory miRNA whose biological functions are largely unknown. MiR-377-3p has been shown to possess tumor-inhibiting effects in clear cell renal cell carcinoma and hepatocellular carcinoma (20,21). On the contrary, previous studies have shown that miR-377 promotes proliferation and the EMT process in colon cancer, while a low level of miR-377 was associated with a good prognosis of periampullary adenocarcinoma (22,23). Moreover, recent studies demonstrated that miRNAs are also involved in B[a]P-induced carcinogenicity (24,25). However, the potential contribution of miRNAs to environmental carcinogen-induced lung tumorigenesis is still not clear. In the present study, we found that EGR1 expression was strongly reduced in the malignant transformation of human lung bronchial epithelial cells and in lung tumorigenicity following exposure to B[a]P and its active metabolite BPDE. Moreover, miR-377-3p-mediated EGR1 downregulation facilitates cell malignant transformation and tumor formation by regulating the Wnt/β-catenin pathway, suggesting an important role of the miR-377-3p/EGR1 axis in the malignant transformation of lung tumorigenesis induced by environmental carcinogens.

Patient Samples A total of 114 non-small-cell lung cancer (NSCLC) clinical samples from the Second Affiliated Hospital of Zhejiang University were used in this study. The study was approved by the ethics committee of the hospital. The clinical characteristics of these samples are shown in Table 1. The cancer tissues were formalin-fixed and paraffin-embedded for immunohistochemistry (IHC).

Cells and Reagents The normal human bronchial epithelial cell line BEAS-2B (Cell Bank of the Chinese Academy of Science, Xiangya, China) and 293T cells (ATCC, Manassas, VA, USA) were cultured in DMEM (Gibco, Grand Island, NY, USA) supplemented with 10% FBS (Gibco), streptomycin (100 µg/mL), and penicillin (100 U/mL) in a humidified atmosphere containing 5% CO2/95% air at 37°C. The authenticity of the cell lines used in this study was verified by STR profiling. BPDE was purchased from the National Cancer Institute Chemical Carcinogen Reference Standard Repository (Kansas City, MO, USA), dissolved in DMSO, and stored at -80°C.

Cell Transformation Assays Cells were exposed to 0.2 µM or 0.5 µM BPDE for 2 hours in serum-free medium. The treatment medium was then removed, and cells were allowed to recover in fresh medium at 37°C. BPDE exposure was repeated once a week for 12 weeks. After 12 weeks of treatment, the malignant phenotype was analyzed; DMSO was used as the solvent control.

QRT-PCR Total RNA was extracted from cell lines or tumor and normal tissue samples with TRIzol reagent (Invitrogen, Carlsbad, CA, USA). For gene expression, RNA was reverse transcribed using a Prime-Script RT reagent Kit (TaKaRa). QRT-PCR was carried out with SYBR Premix Ex Taq (TaKaRa). For miRNA expression, RNA was reverse transcribed using SYBR Premix Ex Taq II (TliRnaseH Plus) (TaKaRa, Dalian, China). QRT-PCR was performed using a Mir-X miRNA First-Strand Synthesis kit (Clontech, Madison, WI, USA). Experiments were performed in triplicate, and the values were normalized to GAPDH or RNU6B using the 2^(-ΔΔCt) method for gene and miRNA expression analysis, respectively.
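The relative expression values mentioned above are obtained with the 2^(-ΔΔCt) method. The minimal sketch below illustrates that calculation for one target gene normalized to GAPDH (or RNU6B for a miRNA); the triplicate Ct numbers are invented purely for illustration.

```python
import numpy as np

def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^(-ddCt) method.

    dCt = Ct(target) - Ct(reference), computed per condition;
    ddCt = dCt(treated) - dCt(control); fold change = 2 ** (-ddCt).
    """
    dct_treated = np.mean(ct_target_treated) - np.mean(ct_ref_treated)
    dct_control = np.mean(ct_target_control) - np.mean(ct_ref_control)
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# Hypothetical triplicate Ct values (target vs. GAPDH, treated vs. solvent control)
fold = ddct_fold_change([27.1, 27.3, 27.0], [18.2, 18.1, 18.3],
                        [24.9, 25.0, 25.1], [18.0, 18.2, 18.1])
print(f"relative expression: {fold:.2f}")   # < 1 indicates downregulation
```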
Western Blot Membranes were incubated with the following diluted primary antibodies: rabbit monoclonal anti-human EGR1 (ab194357, Abcam, Hangzhou, China), and mouse monoclonal anti-human GAPDH (sc-47724) and anti-human β-catenin (sc-7963), both purchased from Santa Cruz Biotechnology (Santa Cruz, CA, USA). The membranes were then incubated with IRDye 800CW- or IRDye 680-conjugated secondary antibodies (LI-COR Biosciences, Lincoln, NE, USA) and detected with an Odyssey infrared imaging system.

Animal Models A/J mice (4 weeks) and Balb/c nude mice (4 weeks) were obtained from the Model Animal Research Center, Nanjing, China and SLAC Laboratory Animal, Shanghai, China, respectively. All animals were housed at a temperature of 22 ± 1°C, relative humidity of 50 ± 1%, and a 12/12 h light/dark cycle. All animal studies (including the euthanasia procedure) were performed in compliance with Zhejiang University institutional animal care regulations and guidelines and conducted according to AAALAC and IACUC guidelines. A/J mice (4 weeks) were randomly divided into two groups (12 mice/group). The B[a]P group was intraperitoneally injected with B[a]P (25 mg/kg, in tricaprylin solvent) (Sigma), and the control group was intraperitoneally injected with the tricaprylin solvent alone. B[a]P treatment was given once weekly for 8 weeks, with the control group treated on the same schedule. After a 4-month recovery period following treatment, mice were sacrificed, and the lung tissues were obtained and histologically examined. Balb/c nude mice (4 weeks) were subcutaneously injected with 5 × 10^6 transformed cells in a 100 µl volume mixed with Matrigel (1:1). Three days after injection, miR-377-3p antagomir (5 nmol/mouse) or scramble control was administered by intratumor injection twice a week. The long diameter (a) and short diameter (b) of the tumors were measured, after which the volume (V) was calculated using the formula V = 1/2 × a × b². Mice were sacrificed, and the tumor tissues were obtained and weighed.

Soft Agar Assay The cells (1,000 cells/well) were suspended in culture medium containing 0.4% agarose (Sigma, St Louis, MO, USA) and seeded onto a base layer of 0.7% agar in 12-well plates. After 2 weeks, colonies were stained with crystal violet and photographed. Colonies ≥ 0.05 mm in diameter were counted.

Scratch Test Cells (1 × 10^5 cells/ml) were plated in 6-well plates. The monolayer was scratched with a 10 µl sterile pipette tip. The cells were gently rinsed twice with PBS to remove floating cells and incubated in 2 ml of serum-free medium at 37°C in a 5% CO2 atmosphere. Images of the scratches were taken with an inverted microscope at 0, 24, and 48 hours of incubation. ImageJ software was used to analyze the percentage of wound closure.

Transwell Assay We performed a cell migration assay with 8 µm pores in 24-well transwell plates (Costar, Cambridge, MA, USA). Briefly, 400 µl of complete DMEM medium was added below the chambers, whereas cells (2 × 10^4) were added above the chambers in serum-free medium. After 48 hours of incubation at 37°C, the migrated cells were fixed with 4% paraformaldehyde and stained with 0.5% crystal violet. Then, the filter membrane was examined and photographed under a microscope.
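The xenograft volume formula quoted in the Animal Models section above (V = 1/2 × a × b²) can be applied directly to caliper readings; the measurements in this short sketch are hypothetical.

```python
def tumor_volume(long_diameter_mm: float, short_diameter_mm: float) -> float:
    """Xenograft tumor volume V = 1/2 * a * b^2, in mm^3."""
    return 0.5 * long_diameter_mm * short_diameter_mm ** 2

# hypothetical caliper readings (a, b) in mm for one tumor at three time points
measurements = [(4.0, 3.2), (6.5, 5.1), (9.8, 7.4)]
volumes = [tumor_volume(a, b) for a, b in measurements]
print([round(v, 1) for v in volumes])   # growth curve in mm^3
```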
Immunohistochemistry IHC was performed using an Envision Detection System (DAKO, Carpinteria, CA) according to the instructions of the manufacturer. Rabbit monoclonal anti-mouse Ki67 (ab194357) was purchased from Abcam; rabbit polyclonal anti-mouse EGR1 (sc-110) was purchased from Santa Cruz Biotechnology. The IHC staining results were assessed and confirmed by two independent investigators blinded to the clinical data.

Cell Transfection For lentiviral-mediated transfection, 293T cells were co-transfected with the lentiviral and packaging vectors. After 72 h, supernatants were collected and centrifuged at 1,000 × g for 15 min at 4°C to pellet debris. Before infection, the lentiviruses were recovered and re-suspended in fresh medium with 6 µg/ml of polybrene. Stable cells with EGR1 knockdown or EGR1 overexpression were selected following transduction with 0.5 mg/ml of puromycin for 2 weeks. The efficiency of EGR1 knockdown or EGR1 overexpression was examined by Western blot.

Dual-Luciferase Reporter Assay The full-length and mutated miR-377-3p recognition elements of the EGR1 3'UTR were synthesized and cloned into a pGL3-Basic vector (Promega, Madison, WI, USA). After seeding the cells for 24 h, the mimic or inhibitor of miR-377-3p (GenePharma) was co-transfected with either the wild-type or the mutant pGL3-EGR1-3'UTR construct into BEAS-2B and 293T cells. The Dual-Luciferase Reporter Assay System (Promega) was used to measure relative luciferase activity.

Immunofluorescence The BPDE-transformed cells were plated in culture. After overexpression of EGR1, the cells were fixed for 15 min in 4% formaldehyde solution. The cells were then washed with PBS and treated with 0.1% Triton X-100 in PBS for 10 min. After permeabilization, the cells were blocked for 1 h in antibody blocking buffer (10% normal goat serum, 1% BSA in PBS), washed with PBS, and incubated with anti-human β-catenin primary antibody. The presented IF staining pictures are the overlaid images of β-catenin staining in green fluorescence with nuclear 4',6-diamidino-2-phenylindole (DAPI) staining in blue fluorescence. The IF staining images were taken and overlaid using the Nikon NIS-Elements software.

Statistical Analysis The two-tailed Student's t-test and one-way analysis of variance were used for statistical data analysis. Data are expressed as the mean ± standard deviation (SD) of three separate experiments. P ≤ 0.05 was considered statistically significant.
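The statistical tests described above can be reproduced with SciPy: a two-tailed Student's t-test for two groups and a one-way ANOVA for more than two. This is a minimal sketch; the colony counts are invented example values, not data from the study.

```python
import numpy as np
from scipy import stats

# hypothetical colony counts from three independent experiments
control = np.array([12, 15, 13])
bpde_02 = np.array([34, 31, 36])     # 0.2 uM BPDE
bpde_05 = np.array([52, 49, 55])     # 0.5 uM BPDE

# two-tailed Student's t-test (control vs. one treatment)
t_stat, p_ttest = stats.ttest_ind(control, bpde_02)

# one-way ANOVA across all groups
f_stat, p_anova = stats.f_oneway(control, bpde_02, bpde_05)

print(f"t-test p = {p_ttest:.4f}, ANOVA p = {p_anova:.4f}")
# p <= 0.05 would be reported as statistically significant
```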
BPDE/B[a]P Downregulates the Expression of EGR1 In Vitro and In Vivo B[a]P and its ultimate carcinogenic metabolite, BPDE, are strong lung carcinogens found in tobacco smoke and air pollution (26). However, the molecular mechanisms underlying PAH-induced lung tumorigenesis, particularly in the early stage, remain unclear. To identify the critical genes involved in this process, human lung epithelial cells and A/J mice were exposed to BPDE and B[a]P, respectively. Malignant transformation of BEAS-2B cells was identified upon 12 weeks of BPDE exposure (Figures 1 and S1). Figure 1A shows a schematic map of the strategy used to generate the BPDE-induced malignant transformation of BEAS-2B cells. Cell proliferation and soft agar assays revealed that BPDE treatment enhanced the reproductive capacity of the cells and their anchorage-independent growth capability, respectively (Figures 1B, C). We also observed that cell migration was enhanced upon BPDE treatment (Figures S1A, B). A xenograft assay further confirmed the malignant phenotype of BPDE-induced BEAS-2B cells (Figure 1D). In addition, we confirmed the above tumorigenic effects with the BPDE-induced HBE malignant transformation cell model by malignant phenotype analysis (data not shown). To investigate the genes implicated in the BPDE-induced malignant transformation process, we performed RNA-sequencing analysis. Our results showed that EGR1 was the most obviously downregulated gene in the transformed cells (Figure S1C). The downregulation of EGR1 expression was confirmed in both the BEAS-2B and the HBE BPDE-induced cell transformation models (Figures 1E, F). Moreover, the EGR1 protein content was also reduced in different lung cancer cell lines compared with normal cells (Figure 1G). To further evaluate the effect of B[a]P on EGR1 expression in vivo, we established a B[a]P-treated A/J mouse model (Figure S2A). Most mice treated with B[a]P developed primary lung tumors within 6 months, as observed by PET-CT detection and histopathological analysis (Figures 2A and S2B, C). Our results also showed that EGR1 mRNA expression and protein level were decreased in the lung tumor tissues compared with the adjacent normal tissues (Figures 2B, C). Ki67 was extensively assessed and reported as a predictive proliferative marker of cancer cells. Moreover, the downregulation of EGR1 was observed not only in adenocarcinoma but also in B[a]P-treated mouse adenomas (Figure 2D), indicating that EGR1 reduction could be an early event in B[a]P-induced tumorigenesis. To further determine whether EGR1 downregulation was involved in human lung carcinoma development, we expanded our study by investigating the expression of EGR1 in clinical cancer tissues. In eight pairs of fresh cancer and adjacent normal tissues from clinical NSCLC patients, we found reduced EGR1 expression in the cancer tissues (Figure S3A). The TCGA (The Cancer Genome Atlas) database and two other datasets supported in Lung Cancer Explorer confirmed that EGR1 was downregulated in NSCLC patient tissues compared with normal tissues (Figures 2E and S3B, C). Collectively, the results indicated that the inhibition of EGR1 was involved in cell malignant transformation and mouse lung tumorigenesis induced by BPDE/B[a]P exposure. The EGR1 reduction was also observed in clinical cancer tissues. These data suggested that EGR1 could have a tumor-suppressive role in the lung cancer process.

EGR1 Reduction Mediates BPDE-Induced Malignant Transformation To investigate the potential role of EGR1 downregulation in lung tumorigenic effects upon BPDE exposure, we established stable EGR1 overexpression models in BPDE-induced transformed cells with lenti-EGR1 lentivirus (Figure S4A). The ectopic expression of EGR1 led to reduced malignancy in BPDE-induced transformed cells (Figure 3). Moreover, EGR1 overexpression reduced cell migration ability (Figures 3A-C) and xenograft tumor growth (Figures 3D, E). The suppressive effect of EGR1 on cell malignant phenotypes was further confirmed by EGR1 knockdown. Introduction of EGR1 shRNAs through lentiviral vectors resulted in increased malignancy of BEAS-2B cells (Figures S4C-E). Moreover, the rescue of EGR1 also reversed the effect of EGR1 knockdown in promoting cell transformation (Figures S4F, G).
The knockdown efficiency of EGR1 is shown in Figure S4B. Our results suggested that EGR1 downregulation was critical for promoting BPDE-induced cell malignant transformation.

MiR-377-3p Targets EGR1 and Induces Its Inhibition Following BPDE Exposure Notably, we found that the expression of EGR1 was inhibited during BPDE-induced cell malignant transformation (data not shown). To investigate the molecular mechanism underlying EGR1 reduction upon BPDE treatment, we first evaluated the EGR1 promoter DNA methylation level by bisulfite sequencing PCR. The DNA methylation level of the EGR1 promoter sequence did not change after BPDE exposure (Figure S3D). Over the past decade, it has been widely reported that miRNAs regulate gene expression by recognizing the 3'UTR sequence. Using microRNA databases and target prediction tools (miRanda, PicTar, and TargetScan), we predicted the potential microRNAs that could target EGR1 and regulate its mRNA transcription. qRT-PCR revealed that miR-377-3p levels were markedly increased in BPDE-treated cells (Figure 4A). Transient transfection with mimics and inhibitor of miR-377-3p showed that miR-377-3p regulates EGR1 expression (Figures 4B, C). To further identify the effect of miR-377-3p on EGR1 expression regulation, we constructed luciferase reporters containing the wild-type regulatory sequence with or without mutation of the EGR1 binding site (Figure 4D). The results showed that the miR-377-3p mimic reduced the reporter activity of the full-length EGR1 3'UTR-containing luciferase construct, and the inhibitor of miR-377-3p augmented the reporter activity in BEAS-2B and 293T cells. The effect of miR-377-3p on the reporter activity was abrogated with the mutant-type EGR1 3'UTR-containing luciferase construct (Figures 4E, F). These results indicated that miR-377-3p mediated the downregulation of EGR1 in BPDE-induced malignant transformed cells by directly targeting its 3'UTR sequence.

MiR-377-3p Antagomir Rescued the Effect of EGR1 Downregulation in Cell Malignant Transformation and Lung Carcinogenesis To determine whether the inhibition of miR-377-3p allows the re-expression of EGR1 and reduces the malignancy of BPDE-induced transformed cells, we transfected the cells with miR-377-3p antagomir. Our results revealed that the antagomir of miR-377-3p reduced the malignancy phenotypes of the transformed cells induced by BPDE exposure (Figures 5A-D). Moreover, miR-377-3p upregulation was identified in mouse lung tumor tissues induced by B[a]P, concomitantly with EGR1 downregulation (Figure 5E). Furthermore, correlation analysis showed a negative correlation between EGR1 and miR-377-3p in mouse lung tumor tissues (Figure 5F). Consistent with our findings, TCGA database analysis revealed an increase of miR-377-3p and a decrease of EGR1 in human lung adenocarcinoma tissues (Figures 5G, H). In addition, the expression of EGR1 and miR-377-3p in fresh lung cancer tissues also showed a negative correlation (Figure S5A). To confirm the tumor-repressive effect of EGR1 in NSCLC, we performed IHC staining to evaluate the clinical relevance of EGR1 expression. Our results showed high EGR1 immunoreactivity in the nuclei of adjacent normal cells compared with cancer cells. In 114 paired cases, EGR1 was significantly inhibited in tumor tissues (Figures 5I, J and S5B). EGR1 expression was negatively associated with tumor invasion, lymph node status, histological grade, and TNM stage (Table 1).
Our results suggested that EGR1 functions as an onco-suppressor, and that the inhibition of EGR1 was associated with tumor aggressiveness in lung cancer. Taken together, our results suggest that the upregulation of miR-377-3p inhibits EGR1 transcription, which is implicated in BPDE/B[a]P-induced cell malignant transformation and lung tumorigenesis.

EGR1 Inhibition Is Involved in the Regulation of Wnt/β-Catenin Transduction in PAH-Induced Tumorigenesis EGR1 is an important transcription factor regulating the cell cycle, differentiation, apoptosis, and stress. To identify the potential EGR1-downstream genes involved in BPDE/B[a]P-induced tumorigenesis, we performed RNA-sequencing after knocking down EGR1 expression. As expected, Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis indicated that the Wnt/β-catenin pathway is one of the most significantly altered gene sets in EGR1-knockdown cells, and gene set enrichment analysis (GSEA) revealed that a large fraction of Wnt/β-catenin downstream genes displayed significant alterations (Figures 6A, B). Moreover, we also observed the upregulation of β-catenin in BPDE-induced malignant transformed cells and in mouse primary lung cancer tissue (Figures 6C-E). In transient transfection experiments, EGR1 overexpression led to a reduction of CTNNB1 gene expression, while EGR1 knockdown upregulated CTNNB1 gene expression (Figure 6F). Also, the rescue of EGR1 expression abrogated the upregulation and nuclear localization of β-catenin induced by BPDE exposure (Figures 6G, H). These data suggested that the Wnt/β-catenin pathway is a potential downstream signal in EGR1-mediated cell malignant transformation. Furthermore, we examined the most altered genes in the RNA-sequencing data obtained by knocking down EGR1 in the transformed cells. Our results revealed that ATF3 and ANKRD1 were downregulated in malignant cells and in mouse primary lung cancer tissues induced by PAHs (Figures S5C, D). Ectopic expression of EGR1 resulted in the upregulation of ATF3 and ANKRD1, whereas siRNA against EGR1 reduced the expression of ATF3 and ANKRD1 (Figure S5E). In addition, the rescue of EGR1 expression in BPDE-induced transformed cells abrogated the inhibition of ATF3 and ANKRD1 expression induced by BPDE (Figure S5F). In summary, our data indicated that the downregulation of EGR1 could alter downstream cell signals and the expression of its target genes, contributing to cell malignant transformation and lung carcinogenesis.

DISCUSSION B[a]P can directly induce lung carcinogenesis by inducing DNA damage and activating signaling pathways (26)(27)(28). This study further investigated the early events and the molecular mechanisms of gene dysregulation that lead to cell malignant transformation and lung tumorigenesis following B[a]P/BPDE exposure. We discovered that B[a]P/BPDE treatment led to miR-377-3p induction, which targeted the EGR1 3'UTR and inhibited its expression, subsequently resulting in the activation of Wnt/β-catenin signaling and the promotion of cell malignant transformation, thus further contributing to lung tumorigenesis. Consequently, EGR1 could be considered a potential target for B[a]P-initiated lung carcinogenic actions. As a transcription factor, EGR1 has a crucial role in human cancers. EGR1 has been attracting increasing research attention due to its tumor-suppressing role in the occurrence and development of tumors.
The expression of EGR1 decreases or even disappears in a variety of human malignancies, and its expression level is associated with tumor sensitivity to chemotherapy (29). EGR1 depletion has been associated with tumor anti-apoptotic and invasion events, whereas its overexpression may depress tumorigenicity and metastasis in different cancer cells, including lung cancer (30). Mechanistically, EGR1 can directly transactivate P53 and PTEN, which are implicated in the inhibition of lung tumor cell proliferation (31,32). It can also suppress the EMT transition and cell migration in lung cancer by regulating TGF-β activity (33). Recent studies have shown that EGR1 can directly and negatively regulate cell growth in different epithelial tumor cell lines (34). It can also regulate KRT18 expression to inhibit the malignancy of human NSCLC cells (35). Our data showed that EGR1 was strongly decreased in the early stage of malignant cell transformation upon BPDE exposure. The inhibition of EGR1 promoted the progression of BPDE-induced tumorigenicity. Moreover, the downregulation of EGR1 was also confirmed in B[a]P-induced lung tumors in vivo. The results indicated that EGR1 has a tumor-repressive effect in cell malignant transformation and lung tumorigenesis upon B[a]P/BPDE treatment. DNA methylation and miRNA dysregulation are important molecular mechanisms of gene expression control, critical for epigenetic regulation in tumor formation and development by negatively regulating downstream target genes (36,37). Recent studies reported that miR-301b, miR-191, and miR-146a could target EGR1 mRNA and inhibit its expression, thus contributing to oncogenesis (16,38,39). In this study, EGR1 was persistently decreased after BPDE exposure, but without variation of its promoter DNA methylation level. miRNA screening analysis demonstrated that miR-377-3p is a new regulator of EGR1 that acts by directly binding to its 3'UTR. miR-377-3p was significantly increased in BPDE-induced malignant transformed cells, as well as in the lung tumor tissues of B[a]P-treated A/J mice. Antagonizing miR-377-3p reversed the effect of EGR1 in cell malignant transformation, supporting the critical role of miR-377-3p in regulating EGR1 expression to promote cell transformation and tumor formation. Recent studies reported that miR-377 displays an ambiguous role in different cancers. miR-377-3p can drive malignancy characteristics by upregulating GSK-3β expression and activating the NF-κB pathway in CRC cells (22). It can also target pro-oncogenic genes, such as E2F3, VEGF, and CDK6, or negatively regulate Wnt/β-catenin signaling to suppress the proliferation of cancer cells (40)(41)(42)(43). However, the dual effect of miR-377 in tumor inhibition and promotion needs to be further explored. Clinical studies reported that the depletion of EGR1 sensitizes ovarian tumors to cisplatin chemotherapy (44). Low levels of EGR1, associated with the expression of PTEN, can predict poor outcomes after surgical resection of NSCLC (45). In this study, clinical tissue analysis showed downregulation of EGR1 expression in cancer tissues compared with normal tissue. The repression of EGR1 was associated with local invasion depth, lymph node status, and TNM stage. It was also negatively associated with histological grade (Table 1).
Our results confirmed that the deactivation of EGR1 was associated with cancer aggressiveness. Furthermore, it has been reported that EGR1 can be increased by chemotherapy and negatively regulates the Wnt/β-catenin signaling pathway in CML cells (46). In this study, we observed an enrichment of Wnt/β-catenin downstream genes after EGR1 knockdown. In addition, β-catenin, the key effector of canonical Wnt signaling, was activated in malignant transformed cells and lung cancer tissues following EGR1 inhibition. The rescue of EGR1 expression reversed the upregulation and nuclear staining of β-catenin after BPDE exposure. The Wnt/β-catenin pathway is a cell signaling pathway that promotes cancer initiation and development. It has an important role in crucial cellular processes, including cell fate determination, embryonic development, homeostasis, motility, polarity, and stem cell renewal (47). It has also been reported that the activation of canonical Wnt/β-catenin signaling is critical for the initiation and progression of NSCLC (48). In patient-derived xenograft models of lung cancer, the activation of Wnt/β-catenin signaling and nuclear β-catenin staining was associated with a poor prognosis in patients with lung cancer (49). Previous studies also reported that the Wnt/β-catenin pathway contributes to the induction of EMT by transactivating several EMT-related transcription factors, such as Snail, Slug, Twist, ZEB1, and ZEB2, in lung adenocarcinoma (50). Moreover, we also observed that ANKRD1 and ATF3, as target genes of EGR1, were significantly downregulated in malignant transformed cells and mouse lung cancer tissues. ATF3, a highly conserved transcription factor, has been described as a principal target of EGR1 and discussed as both a tumor suppressor and a promoter (51)(52)(53). A recent study also reported that ATF3 and EGR1 are involved at the beginning of the inflammatory processes related to cancer (54). ANKRD1 is a tumor-suppressive downstream gene of the Hippo pathway, downregulated in different human cancers (55,56). A previous study demonstrated that ANKRD1 could be inhibited by an lncRNA, resulting in the promotion of pancreatic cancer proliferation and metastasis (57). Therefore, EGR1 could regulate its downstream signals and target genes, thereby having a tumor-suppressive role in human lung cancer. In summary, our current study demonstrated the regulatory mechanism of EGR1 inhibition induced by miR-377-3p activation following exposure to the environmental carcinogen B[a]P/BPDE. We also discovered that EGR1 has a repressive effect on lung tumorigenesis by regulating the Wnt/β-catenin signaling pathway (Figure 6I). Our findings provide a novel molecular regulatory mechanism through which the miR-377-3p/EGR1 axis is implicated in cell malignant transformation and tumorigenesis induced by PAHs.

DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.

ETHICS STATEMENT The studies involving human participants were reviewed and approved by the ethics committee of the Second Affiliated Hospital of Zhejiang University. The patients/participants provided their written informed consent to participate in this study. The animal study was reviewed and approved by the Committee on the Use of Animals of Zhejiang University.
6,391
2021-08-23T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
Computational fluid dynamic simulations for dispersion of nanoparticles in a magnetohydrodynamic liquid: a Galerkin finite element method This investigation studies the effects of the thermo-physical properties of four types of nano-metallic particles on the thermo-physical behavior of a radiative fluid in the presence of buoyant forces and Joule heating (ohmic dissipation). The Galerkin finite element algorithm is used to perform the computations, and the simulated results are displayed in order to analyze the behavior of the velocity and temperature of copper, silver, titanium dioxide and aluminum oxide nanofluids. All the simulations are performed with η_max = 6 and computational tolerance 10⁻⁶ for 200 elemental discretizations. Due to the dispersion of nano-sized particles in the base fluid, an increase in the thermal conduction is noticed. This study also predicts future improvements in thermal systems. Due to the interaction of the magnetic field and the fluid flow, electrical energy converts into heat. This is undesirable in many thermal systems. Therefore, control of Joule heating in the design of thermal systems is necessary. However, this dissipation of heat may be desirable in some biological fluid flows. An increase in energy losses is noted as the magnetic intensity is increased.

Introduction Technologists and engineers have a major concern in enhancing the efficiency of thermal systems, such as hydronic heating and cooling in buildings, heating and cooling processes of transportation in the petro-chemical industry, and pulp and textile manufacturing. 1 Several methods to enhance the efficiency of thermal systems have been used for this purpose. These methods include active methods and passive methods. 2 As mentioned in ref. 2, active methods involve external agents like a mechanical input or a magnetic field, whereas passive methods include treated surfaces, inserted extended surfaces, boiling, condensation, twisted tape, wire coils, 2 etc. Combinations of active and passive methods are called compound methods. Although the above-mentioned methods are very effective and have been used for the enhancement of heat transfer, recent advancements in technology have opened the doors to new techniques and methods. One of these methods is the dispersion of nano-metallic particles in a pure liquid. This inclusion of particles increases the thermal conductivity of the resulting mixture; consequently, the rate of heat transfer is enhanced. Several theoretical studies on this technique have been published. For example, Masuda et al. 3 confirmed that the dispersion of ultrafine particles in the base fluid increases its ability to conduct heat compared with the pure fluid. Although this work reconfirms the enhancement of heat transfer due to the inclusion of nanoparticles in liquids, the analysis is carried out in a limiting sense, i.e. Joule heating, thermal radiation, buoyancy effects and heat generation are not considered. The work by Buongiorno 4 introduced empirical models for the thermophysical properties of nanofluids and formulated the mathematical relationships between the physical properties of the solid particles, the pure fluid and their mixtures, creating a potential for theoretical studies on the transport of heat by a liquid coolant containing metal particles of very small size; however, this work does not consider Joule heating and buoyancy effects. Transfer of heat in nanofluids over a stretching surface was studied by Khan and Pop.
5 They investigated thermophoretic and Brownian motion effects in the flow of nanofluids. However, this work does not consider heat generation and Joule heating effects simultaneously. Nadeem et al. 6 analyzed the effects of Brownian motion and thermophoresis in the flow of a Maxwell fluid; in fact, this work considers thermophoresis and Brownian motion rather than the inclusion of nano-particles. Das et al. 7 numerically investigated the effects of different types of nano-particles on the entropy generation of MHD flow over a surface with convective heat boundary conditions. Although this work considers more than one effect simultaneously, it does not consider Joule heating, thermal radiation, buoyancy and heat generation effects simultaneously. The effect of a space-dependent magnetic field on free convection flow of an Fe3O4-water nanofluid was studied by Sheikholeslami and Rashidi. 8 It is important to mention that only the dispersion of Fe3O4 nanoparticles in water is considered there, i.e. Cu, Ag, Al2O3 and TiO2 are not considered. In another study, Rashidi et al. 9 investigated the effect of nano-particles on the thermal conductivity of the base fluid through a Lie group approach, but heat generation, Joule heating and buoyancy force are not taken into account. Nadeem and Saleem 10 studied mixed convection flow of a nanofluid over a rotating cone in the presence of a magnetic field. Nawaz and Hayat 11 studied heat transfer characteristics in an axisymmetric flow of nanofluid over a radially stretching surface. Nawaz and Zubair 12 analyzed the effects of different types of nano-particles in the flow of blood over a surface moving with space-dependent velocity. This work considers only two types of nano-particles (Cu and Ag); moreover, the convective-type boundary condition and the entropy generation are not considered in that study. 12 Ahmed et al. 13 studied the effects of the shape of nanoparticles on mixed convection flow over a disk rotating with time-dependent angular velocity. However, this work does not consider Joule heating, heat generation and buoyancy effects simultaneously. These effects will be considered in the present work. There are various models (empirical formulae) which describe correlations between the viscosities of the base fluid and metallic nano-particles and the effective viscosities of nanofluids. These models include the Einstein model, 14 the Brinkman model, 15 the Batchelor model, 16 the Graham model, 17 the model adopted by Wang et al., 18 and the model of Masoumi et al. 19 The Einstein model is valid for very low volume fraction (volume fraction < 0.002) and does not consider Brownian motion of nano-particles. The Brinkman model is a modified form of the Einstein model valid for moderate volume fraction, whereas the Batchelor model is a modification of the Einstein model that considers Brownian motion of nano-particles. 20 The model used by Wang et al. 18 expresses the effective viscosity as a quadratic function of volume fraction. The model used by Masoumi et al. 19 involves Brownian motion effects. It is also important to note that the models discussed in ref. [14][15][16][17][18][19] give correlations for effective viscosities. Studies on nanofluids show that the dispersion of nanoparticles also impacts thermal conductivity; therefore, different correlations for the effective thermal conductivity have been proposed. A detailed review of analytical models of effective thermal conductivity is given in ref. 20. It is noted that studies 14-20 do not consider a model for the effective electrical conductivity of the nanofluid. As the present work considers magnetohydrodynamic flow of a nanofluid, a model for the effective electrical conductivity is unavoidable. The correlations for effective electrical conductivity, effective thermal conductivity and effective viscosity used here are those adopted by Das et al. 7 That model has the dual character of describing both the effective viscosity and the effective thermal conductivity, together with an analytical model for the effective electrical conductivity; in it, ρ, k, σ, φ and c_p denote, respectively, density, thermal conductivity, electrical conductivity, volume fraction and specific heat, and the subscripts f, nf and s stand for fluid, nanofluid and solid (nano)particles, respectively.
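The volume-fraction-based correlations referred to above can be illustrated with the forms most commonly used for this class of problems: Brinkman viscosity, linear mixture rules for density and heat capacity, and Maxwell-Garnett-type relations for thermal and electrical conductivity. This is a sketch of those standard forms, not necessarily the exact expressions adopted by Das et al.; the water and copper property values are typical textbook numbers used only for illustration.

```python
def nanofluid_properties(phi, fluid, solid):
    """Effective nanofluid properties as functions of volume fraction phi.

    Brinkman model for viscosity, linear mixture rules for density and
    heat capacity, Maxwell-Garnett-type relations for thermal and
    electrical conductivity (standard forms; details may differ from the
    correlations used in the paper).
    """
    rho = (1 - phi) * fluid["rho"] + phi * solid["rho"]
    rho_cp = (1 - phi) * fluid["rho"] * fluid["cp"] + phi * solid["rho"] * solid["cp"]
    mu = fluid["mu"] / (1 - phi) ** 2.5
    kf, ks = fluid["k"], solid["k"]
    k = kf * (ks + 2 * kf - 2 * phi * (kf - ks)) / (ks + 2 * kf + phi * (kf - ks))
    sr = solid["sigma"] / fluid["sigma"]
    sigma = fluid["sigma"] * (1 + 3 * (sr - 1) * phi / ((sr + 2) - (sr - 1) * phi))
    return {"rho": rho, "cp": rho_cp / rho, "mu": mu, "k": k, "sigma": sigma}

# typical water and copper property values (illustrative only)
water = {"rho": 997.1, "cp": 4179.0, "mu": 8.9e-4, "k": 0.613, "sigma": 0.05}
copper = {"rho": 8933.0, "cp": 385.0, "k": 400.0, "sigma": 5.96e7}

print(nanofluid_properties(0.05, water, copper))   # 5% Cu-water nanofluid
```

The increase of the effective thermal conductivity with φ produced by these relations is the mechanism behind the heat transfer enhancement discussed throughout the paper.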
Minimization of the entropy generation in a thermal system is a major concern, since wasted energy produces disorder. Therefore, the control of the entropy generation during heat transfer has been investigated extensively in the last few years. Bejan 21 was the first to work on the minimization of the entropy generation. After his work, several studies have been published; here some recent investigations are described. For instance, Bhatti et al. 22 investigated the effects of a magnetic field on the entropy generation of nonlinear transport of heat and mass in boundary layer flow. A numerical investigation of the entropy generation during heat transfer in cavity flow was carried out by Armaghani et al. 23 Vincenzo et al. 24 analyzed the entropy generation due to temperature differences and viscous/friction losses in the flow. The aim of this work is threefold: first, to study heat transfer enhancement in nanofluids in the presence of an applied magnetic field, buoyancy force, thermal radiation and heat generation/absorption, using the correlation for effective electrical conductivity together with the volume-fraction-based correlations for effective thermal conductivity and effective viscosity; second, to investigate the effects of the dispersion of nano-particles on entropy generation; and third, to implement the finite element method for two-dimensional hydrothermal flow in the presence of buoyancy force and electromagnetic radiation.

Physical situation We consider the enhancement of heat transfer in water through four types of nano-particles (Cu, Ag, Al2O3 and TiO2) in an incompressible flow of an electrically conducting fluid over a vertical stretching sheet with space- and time-dependent velocity U_w(x,t) = ax/(1 - ct). A constant magnetic field [0, B₀, 0] is applied along the y-axis, normal to the sheet. The variation of the temperature of the sheet is due to the variation of the hot fluid occupying the half space y < 0. The temperature of the hot fluid below the sheet varies as T_w(x,t) = T_∞ + ax/(1 - ct)², where T_∞ is the ambient temperature and a and c are constants. There is no applied electric field, and the effects of polarization and induced magnetic field are negligible. The thermo-physical properties (viscosity, density, thermal conductivity, specific heat, etc.) are constant. Heat is transported to the nanofluid (occupying the half space y > 0) by convection from the hot fluid (occupying the half space y < 0) at temperature T_w(x,t) = T_∞ + ax/(1 - ct)². The buoyant force under the Boussinesq approximation is significant.
Governing boundary layer equations Applying the boundary layer approximation to the full two-dimensional conservation laws, one obtains the continuity, momentum and energy boundary layer equations, which involve the heat generation/absorption coefficient, the temperature T of the fluid, and the radiative heat flux q, the latter defined via the Rosseland approximation in terms of σ*, the Stefan-Boltzmann constant, and k*, the mean absorption coefficient. The associated initial and boundary conditions follow from the physical situation described above.

Dimensional analysis In view of the importance of the results obtained from the dimensionless form of the conservation equations, similarity transformations are introduced, with similarity variable η = y[a/(ν_f(1 - ct))]^(1/2), where ψ(x,y) is the stream function, f(η) and θ(η) are the dimensionless stream function and temperature, and η is the independent similarity variable. The continuity equation (3) is identically satisfied, and eqns (4) and (5) together with the conditions reduce to ordinary differential equations governed by the Hartmann number, the Grashof number, the unsteadiness parameter, the radiation parameter, the heat generation/absorption parameter, the Prandtl number, the Eckert number and the Biot number. The prescribed wall temperature case can be recovered as Bi → ∞. Also note that φ = 0 corresponds to a pure fluid without dispersed nano-particles (the case of Butt and Ali 25); for Gr = 0, Nr = 0 and Ec = 0, the problem reduces to the case of Das et al. 7 with heat generation/absorption. The case M² = 0, λ = 0, φ = 0 and Bi → ∞ has also been considered by Abolbashari et al. 26 and Das et al. 7. The numerical values of the thermo-physical properties used in this study are listed in Table 1.

Galerkin finite element formulation Following studies 12,27-29, weighted residual approximations (WRA) are written for the system defined in eqns (9)-(12), with f' = h, and the dependent variables are approximated in terms of unknown nodal values.

Computation of the stiffness matrix Using the Galerkin finite element scheme, the elements of the stiffness matrix are calculated, where h̄_j and f̄_j denote the nodal values from the previous iteration.

Results and discussion The Galerkin finite element algorithm is implemented to study the effects of the thermo-physical properties of nano-sized metallic particles on unsteady two-dimensional flows in the presence of buoyant force, thermal radiation and Joule heating. The non-linear stiffness matrix is linearized using Picard's linearization scheme, and the system of algebraic equations is solved iteratively with tolerance 10⁻⁵. Several numerical experiments were performed to determine η_max, and a grid-independence study was also carried out. Through these experiments we noted that the computed results converge with tolerance 10⁻⁵ when η_max = 6 and the domain [0, η_max] is discretized into 200 elements.
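To make the Galerkin assembly and Picard linearization procedure described above concrete, the sketch below solves a simple nonlinear two-point boundary value problem, -u'' + u u' = 1 on [0, η_max] with u(0) = u(η_max) = 0, using linear elements and a frozen (Picard) coefficient. It only illustrates the assemble-and-iterate structure; it is not the coupled momentum/energy system of the paper, and the model problem and tolerances are assumptions for illustration.

```python
import numpy as np

def solve_picard_fem(eta_max=6.0, n_elem=200, tol=1e-5, max_iter=50):
    """Galerkin FEM with linear elements and Picard linearization for
    -u'' + u*u' = 1 on [0, eta_max], u(0) = u(eta_max) = 0 (illustrative only)."""
    n_nodes = n_elem + 1
    h = eta_max / n_elem
    u = np.zeros(n_nodes)                       # initial guess

    for _ in range(max_iter):
        K = np.zeros((n_nodes, n_nodes))
        F = np.zeros(n_nodes)
        for e in range(n_elem):
            i, j = e, e + 1
            c_e = 0.5 * (u[i] + u[j])           # Picard: freeze the nonlinear coefficient
            Ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]]) \
                 + c_e * np.array([[-0.5, 0.5], [-0.5, 0.5]])
            Fe = 0.5 * h * np.array([1.0, 1.0])
            K[np.ix_([i, j], [i, j])] += Ke     # assemble element stiffness
            F[[i, j]] += Fe                     # assemble element load
        # Dirichlet boundary conditions u(0) = u(eta_max) = 0
        for bc in (0, n_nodes - 1):
            K[bc, :] = 0.0
            K[bc, bc] = 1.0
            F[bc] = 0.0
        u_new = np.linalg.solve(K, F)
        if np.max(np.abs(u_new - u)) < tol:     # convergence of the Picard loop
            return u_new
        u = u_new
    return u

profile = solve_picard_fem()
print(profile[:5])    # values near the wall
```

In the actual problem the same loop structure applies, but the unknowns are the nodal values of f, h = f' and θ, and the frozen coefficients come from the previous iterate of those fields.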
Fig. 1-4 display the effects of the Eckert number Ec on the dimensionless velocity of Cu, Ag, Al2O3 and TiO2 nanofluids when Gr > 0. These figures demonstrate that the dimensionless velocity f' increases as Ec is increased. Ec is the ratio of kinetic energy to enthalpy, so an increase in Ec corresponds to an increase in kinetic energy; this raises the temperature of the fluid. The rise in temperature causes density differences, which increase the magnitude of the buoyancy force. Moreover, Ec is the coefficient of the Joule heating term in the energy equation (11), so an increase in Ec also corresponds to an increase in temperature. The Grashof number (Gr) is the ratio of the buoyancy force to the inertial force; it takes positive values for downward flow, so the flow is accelerated by the gravitational force and a significant increase in the velocity is observed. For evidence, Fig. 5-8 are displayed: by increasing the Grashof number (Gr), a significant increase in velocity can be observed. In a qualitative sense, the buoyancy force has similar effects on the flow of the Cu nanofluid and the TiO2 nanofluid. It is also observed that the momentum boundary layer thickness increases when the Grashof number (Gr) is increased. During the numerical simulations and experiments, it is also noted that the motion of the nanofluid decelerates when Gr is varied through negative values. Gr is negative when the flow is vertically upward and is opposed by gravity. For this opposing case, the fluid motion slows down and a significant reduction in the momentum boundary layer thickness is observed for Gr < 0. The fluid under discussion emits thermal radiation in the form of electromagnetic waves. The emission of electromagnetic waves from the fluid region carries heat energy away, which results in a significant decrease in the temperature. In order to examine the effects of thermal radiation on the temperature of the four types of nanofluids, simulations were carried out and are recorded in Fig. 9-12. It is found from the simulations displayed in Fig. 9-12 that the motion of the nanofluids slows down due to a reduction in the buoyancy force. This is because the temperature decreases when Gr > 0; hence it is concluded that the emission of thermal radiation from the nanofluid causes a decrease in its temperature. This decrease in temperature reduces the density difference, and hence the favorable buoyancy force becomes weak. Consequently, the fluid motion slows down. It is also observed that the momentum boundary layer thickness is decreased by thermal radiation when gravity is assisting; the opposite behavior is observed for an opposing gravitational force. The effects of the different nanoparticles on the motion of the nanofluids are simulated and displayed in Fig. 13. This figure shows that the velocity of the Cu nanofluid is smaller (in magnitude) than the velocities of the Ag, Al2O3 and TiO2 nanofluids.

Temperature profiles The effects of the dispersion of nanoparticles (Cu, Ag, Al2O3 and TiO2), buoyancy force and thermal radiation on the transport of heat in the flow of the nanofluid are simulated, and the results are displayed in Fig. 14-26. Fig. 14 depicts that the temperature of the Al2O3 nanofluid is high compared with the temperature of the Cu, Ag and TiO2 nanofluids, with the Cu nanofluid showing the opposite behavior. The effects of the Joule heating phenomenon on the temperature of the four types of nanofluids are displayed in Fig. 15-18. These figures show that the temperature of the nanofluids increases as the Eckert number Ec is increased. This increase in temperature (for all four types of nano-particles) is due to the fact that Ec and M appear as coefficients of the Joule heating term in the dimensionless form of the energy equation. An increase in Ec means that the effect of Joule heating becomes stronger, corresponding to the generation of more heat due to ohmic dissipation in the fluid. Consequently, this heat is added to the fluid and the temperature rises. A comparative study of Fig. 15-18 also shows that the highest amount of heat dissipates in the TiO2 nanofluid.
As already mentioned, three modes of heat transfer (convection, conduction and thermal radiation) are considered, and both opposing and favorable buoyant forces are examined. In the case of a positive buoyant force (Gr > 0), the flow experiences a favorable force, so the convection phenomenon becomes significant and the process of carrying heat from the hot wall to the fluid speeds up; hence the temperature of the fluid rises. This is completely in agreement with the physics of the fluid flow (see Fig. 19-22). The four types of nanoparticles are dispersed in a fluid which is capable of radiating heat in the form of electromagnetic waves as heat passes through it. Here the effect of this radiative nature is examined through the radiation parameter Nr. An increase in the radiation parameter Nr represents the situation in which more electromagnetic waves carry heat energy away from the fluid. That is why the temperature of the nanofluid (for all four types) decreases with an increase in the radiation parameter Nr, as shown in Fig. 23-26.

Entropy analysis The entropy generation due to the temperature gradient, viscous dissipation and Joule heating is defined accordingly; using the similarity transformations given in eqn (8), one obtains the dimensionless form of the entropy generation in terms of the entropy generation number, the Reynolds number, the Brinkman number and the dimensionless temperature difference parameter.

Entropy generation profiles The behavior of the dimensionless entropy generation under variation of the Eckert number Ec, the Grashof number Gr, the unsteadiness parameter, the Hartmann number M and the Biot number Bi is displayed in Fig. 27-31. Fig. 27 shows that the rate of entropy generation increases when Ec is increased. Therefore, it is advisable to use a fluid exhibiting less dissipation in order to avoid losses of heat energy in a magneto-thermal system; this recommendation holds for both nano and pure fluids. Despite the advantage of a magnetic field in controlling the momentum boundary layer thickness, it is not recommended to use an electrically conducting fluid when the reduction of heat energy losses is of high concern. The effect of the buoyancy force on the entropy generation is also simulated, and the results are graphed in Fig. 28. This figure shows that a favorable buoyancy force causes an increase in the energy losses. These losses can be controlled by introducing an opposing buoyancy force, i.e. considering downward flow on the vertical sheet. Fig. 28 also demonstrates that the losses of heat energy are larger for the nanofluid than for the pure fluid. The entropy generation in steady and unsteady flow of both nanofluids and regular fluids is represented in Fig. 29; it is noted that the entropy generation is higher in steady flow than in unsteady flow. The effect of Joule heating on the entropy generation is displayed in Fig. 30. This figure shows a significant increase in the entropy generation due to the dissipation caused by the external magnetic field. This behavior is the same for both pure and nanofluids. Therefore, it is advised not to use an electrically conducting fluid; alternatively, the magnetic intensity should be adjusted so that the losses of heat energy are minimized. This holds for both nano and regular fluids. The entropy generation in nano-magnetohydrodynamic flow is high compared with nano-hydrodynamic flow (see Fig. 30).
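For reference, the dimensional entropy generation rate referred to in the Entropy analysis subsection above is, in the form commonly used for this type of MHD boundary layer problem (and not necessarily term-for-term identical to the expression in the paper, especially when a radiative contribution is folded into the thermal term), the sum of a heat transfer irreversibility, a fluid friction irreversibility and a Joule dissipation term:

```latex
S_{\mathrm{gen}} \;=\;
\underbrace{\frac{k_{nf}}{T_\infty^{2}}\left(\frac{\partial T}{\partial y}\right)^{2}}_{\text{heat transfer}}
\;+\;
\underbrace{\frac{\mu_{nf}}{T_\infty}\left(\frac{\partial u}{\partial y}\right)^{2}}_{\text{viscous dissipation}}
\;+\;
\underbrace{\frac{\sigma_{nf} B_0^{2} u^{2}}{T_\infty}}_{\text{Joule heating}}
```

Non-dimensionalizing this expression with the similarity variables yields the entropy generation number in terms of the Reynolds number, the Brinkman number and the dimensionless temperature difference, which is the quantity plotted in Fig. 27-31.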
The effect of the convection boundary condition on the entropy generation is displayed in Fig. 31; it is noted from this figure that the Biot number (the dimensionless number associated with convective boundary conduction) has a significant effect on the entropy generation.

Conclusion In this paper, the effects of four types of nano-particles (Cu, Ag, Al2O3 and TiO2) on the transport of heat in unsteady two-dimensional boundary layer flow of a radiative fluid over a convectively heated surface in the presence of Joule heating, heat absorption/generation and buoyant force are investigated. It is observed that the dispersion of nano-particles in the pure fluid increases the thermal conductivity of the resulting mixture, which may play a vital role in thermal systems. For a favorable buoyant force, the velocity of the mixture (nano-particles and radiative fluid) increases, which causes an increase in the thermal and momentum boundary layer thicknesses. However, in the case of an opposing buoyant force, the reverse behavior of the momentum and thermal boundary layer thicknesses is observed. The magnetic field intensity and the ohmic dissipation are directly proportional to each other; hence an increase in the intensity of the magnetic field converts more electrical energy into heat (through the ohmic dissipation process). It is also observed that an increase in the intensity of the magnetic field retards the flow and reduces the momentum boundary layer thickness. Therefore, an external magnetic field may be applied to control the flow and the momentum boundary layer thickness. However, it should be kept in mind that an increase in the imposed external magnetic field has the opposite effect on the thermal boundary layer thickness due to the Joule heating mechanism. It is also important to mention that the momentum boundary layer thickness for hydrodynamic flow is higher than that of the magnetohydrodynamic flow, whereas the thermal boundary layer thickness of hydrodynamic flow is less than that of the magnetohydrodynamic flow. During the numerical computations, it is observed that the velocity of the TiO2 nanofluid is higher than the velocities of the Al2O3, Ag and Cu nanofluids. Due to the interaction of the magnetic field and the fluid flow, electrical energy converts into heat. This may be undesirable in many thermal systems; therefore, control of Joule heating in the design of thermal systems is necessary. However, this dissipation of heat may be desirable in some biological fluid flows. Moreover, an increase in the intensity of the magnetic field causes an increase in the entropy generation. A positive buoyancy force enhances the entropy generation, whereas an opposing buoyancy force reduces the energy losses. Energy losses in steady flow are higher than in unsteady flow. The key observations are listed below: The buoyant force is responsible for the influence of thermal radiation on the flow of the nanofluid. It is observed that if the buoyant force is not considered, thermal radiation has no effect on the flow and hence on the momentum boundary layer thickness. As the buoyant force is significant in vertical flows, it is recommended that a horizontal arrangement of the physical model (sheet) be adopted if no impact of thermal radiation on the flow of the nanofluid is desired. The magnetic field decelerates the fluid motion due to the hindrance caused by the Lorentz force; therefore, it is recommended to apply an external magnetic field perpendicular to the plane of the sheet if the momentum boundary layer thickness is to be controlled.
A convectively heated surface causes more entropy generation; therefore, it is recommended not to use convectively heated surfaces in thermal systems. Imposition of an external magnetic field increases the entropy generation and is responsible for large energy losses; therefore, thermal systems work efficiently, without losses of energy, if no external magnetic field is imposed.

Conflicts of interest There are no conflicts to declare.

Acknowledgements The authors acknowledge financial support under NRPU-vide 5855/Federal/NRPU/R&D/HEC/2016. The authors are also thankful to the referees for their useful comments regarding an earlier version of this manuscript.
5,565
2018-11-14T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
A novel method for in silico assessment of Methionine oxidation risk in monoclonal antibodies: Improvement over the 2-shell model

Over the past decade, therapeutic monoclonal antibodies (mAbs) have established their role as valuable agents in the treatment of various diseases ranging from cancers to infectious, cardiovascular and autoimmune diseases. Reactive groups of the amino acids within these proteins make them susceptible to many kinds of chemical modifications during manufacturing, storage and in vivo circulation. Among these reactions, the oxidation of methionine residues to their sulfoxide form is a commonly observed chemical modification in mAbs. When the oxidized methionine is in the complementarity-determining region (CDR), this modification can affect antigen binding and thus abrogate biological activity. For these reasons, it is essential to identify oxidation liabilities during the antibody discovery and development phases. Here, we present an in silico method, based on protein modeling and molecular dynamics simulations, to predict the oxidation-liable residues in the variable region of therapeutic antibodies. Previous studies have used the 2-shell water coordination number descriptor (WCN) to identify methionine residues susceptible to oxidation. Although the WCN descriptor successfully predicted oxidation liabilities when the residue was solvent exposed, the method was much less accurate for partially buried methionine residues. Consequently, we introduce a new descriptor, WCN-OH, that improves the accuracy of prediction of methionine oxidation susceptibility by extending the theoretical framework of the water coordination number to incorporate the effects of polar amino acid side chains in close proximity to the methionine of interest.

Introduction

Since the first approval of a monoclonal antibody (mAb) for therapeutic use in 1986 [1], the development of mAb-based therapeutics has gained increasing interest in the pharmaceutical industry. To date, more than 70 mAbs have been approved and many more are currently in discovery and clinical phases [2]. A mAb consists of two fragment antigen-binding (Fab) regions and one fragment crystallizable (Fc) region. The Fab fragments contain the variable regions (Fv) that are responsible for antigen-binding specificity through the complementarity-determining region (CDR). The Fc fragment contains the constant regions and is responsible for the mAb function, via interactions with Fc receptors, and in vivo disposition, via interactions with the neonatal Fc receptor (FcRn), which may extend serum half-life [3,4]. During production and purification of a therapeutic mAb and manufacture of clinical supplies, the candidate molecule must go through a series of potentially stressful unit operations (e.g. antibody generation in culture media, purification, formulation, and storage) before dosing in patients [5,6]. Given the intrinsic flexibility and the dynamic nature of the antibody structure in solution, and the presence of many functional groups in the amino acid side chains, the occurrence of chemical modifications is much higher for mAbs than for small-molecule drugs. Since these modifications may negatively impact the intended biological functions of the antibody, it is efficient and cost effective to predict the chemical modification propensity of potentially liable sites as early as possible during the development process [7][8][9][10].
Among the 20 natural amino acids in mAbs, methionine, cysteine, tryptophan, tyrosine and histidine residues are theoretically susceptible to oxidation. The sulfur in methionine, present as a thioether (R-S-R′), has a low oxidation potential, and therefore a large number of oxidizing species can oxidize this residue [11]. The oxidation reaction of methionine can occur via two distinct mechanisms, depending on the oxidant species: oxidants such as HOCl, H2O2, and singlet oxygen directly oxidize methionine (Met) to the methionine sulfoxide form (MetO) via a formal oxygen transfer in a two-electron oxidation; radicals such as HO• and metal ions such as Fe(III) and Cu(II), in contrast, oxidize methionine in a one-electron oxidation [11]. The oxidation of methionine to the methionine sulfoxide form has been widely reported for many types of proteins [12][13][14][15]. Although methionine oxidation has been reported to be reversible in vivo through the activity of methionine sulfoxide reductase (MsR), which catalyzes the thioredoxin-dependent reduction of MetO back to Met, it is not yet clear whether, in the context of therapeutic mAbs, an oxidized methionine can be readily reduced back to Met [11,[16][17][18]. In light of these observations, in order to develop an effective and safe therapeutic mAb, it is highly desirable to predict and understand the role of potential chemical liabilities, such as methionine oxidation sites, during the molecular profiling that occurs in late discovery or early development of the candidate. Oxidation of Met residues in mAbs has been reported as a result of exposure to oxidizing agents [19], photo-irradiation [20], or simply during storage [21]. Depending on the site of the modification, this oxidation has been shown to alter a mAb's conformational structure [22] and stability [23], reduce binding to Protein A and Protein G [24,25], cause loss of antigen binding [26], reduce binding to FcRn [27] and shorten in vivo half-life [28]. Given the potential impact of methionine oxidation on drug development, early identification of this liability provides the opportunity to reengineer the susceptible site during the late discovery or early pre-clinical development phases. Conversely, if a mAb candidate with an oxidizable methionine cannot be reengineered and is advanced to the development phase, early identification allows for timely establishment of a risk management strategy for this critical quality attribute (CQA). The in silico prediction of potential liabilities is an essential component of developability screening, which also relies on a variety of stress conditions and extensive analytical characterization to provide a developability risk assessment of the lead candidate. Therefore, there is significant interest in developing new descriptive variables that may be used with current modeling software to provide in silico prediction of chemical liabilities in the antibody sequence. In this work, we have used two experimental datasets to validate the in silico predictions of methionine oxidation propensities and to reach the conclusions drawn in the manuscript. The first dataset, referred to in the manuscript as the "clinical stage therapeutic (CST) antibodies dataset", was derived from existing data, which are openly available at https://doi.org/10.1080/19420862.2017.1290753. This is the largest dataset used in this work, counting 14 antibodies and a total of 46 methionines.
The second dataset, referred to in the manuscript as the "internal dataset", is derived from antibodies and ADCs currently in development at AbbVie. Therefore, due to its proprietary nature, the supporting data cannot be made openly available. This is the smallest dataset used in this work, counting 7 antibodies and 2 ADCs for a total of 26 methionines. We want to highlight that the conclusions drawn from both datasets are in total agreement and that the "internal dataset" serves as a further validation of the methods described in the manuscript. Moreover, in the supporting information S2 Table and S3 Table, we have disclosed more details regarding the calculations derived from the molecular dynamics simulations for both datasets. Several advancements that predict methionine oxidation in proteins have been reported previously. Based on the observation that methionine residues on the surface of a protein are oxidized at a higher rate than buried ones [29][30][31][32][33][34], it has been hypothesized that the oxidation reaction is largely governed by the solvent exposure of the methionine side chain. For this reason, the solvent-accessible surface area (SASA) of the methionine side chain from a predicted antibody structure is a commonly used parameter to predict the oxidation propensity of methionine residues in mAbs [35,36]. However, Chu et al. showed that SASA fails to explain the oxidation rates of partially buried methionine residues in granulocyte colony-stimulating factor (G-CSF) and α-1 antitrypsin [37]. Specifically, the authors showed that if 2-3 water molecules are present around buried methionine sites, the reaction barrier is similar to that of free methionine. The solvent, in fact, might still access spatially restricted residues via thermal fluctuations. Thus, Chu et al. proposed a "water-mediated" mechanism and developed the 2-shell water coordination number (WCN) as a parameter. Interestingly, WCN correlates better with experimentally measured oxidation rates [37,38]. More recently, Aledo et al. reported a predictive method, based on published proteomic data, for methionine residues oxidized in vivo in response to oxidative signals [39]. Their study identified the three most relevant features that contribute to methionine oxidation: (i) the solvent accessible area of the methionine residue, (ii) the number of residues between the analyzed methionine and the next methionine found towards the N-terminus and (iii) the spatial distance between the sulfur atom in the methionine and the closest aromatic residue [39]. Lastly, Sankar et al. developed a quantitative and highly predictive in silico methionine oxidation model for screening early candidates using machine learning and features calculated from the primary sequence, from the mAb structure obtained by homology modeling, and from coarse-grained elastic network models [40]. Despite the success of the models described above, they remain inconsistent with oxidation data regarding partially buried methionines. For instance, Yang et al. observed a good correlation between the solvent-accessible surface area (SASA) of the side chain of methionine residues and the measured oxidation events in the corresponding segments of the mAb [36]. An exception to this observation, however, was represented by a subset of the 121 clinical stage mAbs containing a buried methionine residue in the H3 loop of the CDR (SASA <11%, Fig 1).
Yang et al. found that for the 22 antibodies that contain such a feature, the factors affecting the oxidation of methionine at this position are not entirely captured by the solvent accessibility of the methionine side chain. In this study, we present our efforts to develop a more accurate model for methionine oxidation, regardless of location within the protein, using both publicly available and experimentally generated data.

Homology modeling and molecular dynamics simulations

This study considered 7 mAbs and 2 antibody-drug conjugates (ADCs) provided by AbbVie, as well as 14 mAbs advanced to clinical stages by other companies. The 14 clinical stage antibodies were: Abituzumab, Dinutuximab, Duligotuzumab, Eldelumab, Fletikumab, Golimumab, Imgatuzumab, Lintuzumab, Lirilumab, Natalizumab, Ofatumumab, Tocilizumab, Tovetumab and Vesencumab. The three-dimensional structures of the variable fragment (Fv) of all antibodies in the study were modeled using the automated protocol implemented in the BioLuminate package, Schrödinger suite version 2019-2 [41][42][43] (Schrödinger, LLC, New York, NY). Briefly, the python script build_antibody.py, provided within the BioLuminate package, was used with default options to generate the homology models of the Fv regions of the antibodies of interest. Additionally, 10 models of the Fv region of Vesencumab were obtained with the "advanced loop model" option within the antibody prediction tool in Maestro (Schrödinger suite version 2019-2), employing ab initio structure prediction with Prime [44,45]. Sampling of the methionine side-chain conformations in each of the protein structures was achieved using a molecular dynamics (MD) approach. The Fv models described above were solvated in an orthorhombic box with SPC water, and MD simulations were carried out using DESMOND [46], Schrödinger suite version 2019-2 (Schrödinger LLC, New York, NY), in explicit solvent with periodic boundary conditions. The OPLS3 force field [47] was used for each protein. The system was initially relaxed with restraints on the solute, allowing the water to equilibrate freely, followed by extensive simulation of the entire system without any restraints. Production trajectories were 5 ns long, collected with a 50-ps recording interval at 300 K using a standard protocol. The same MD simulation protocol was applied to the crystal structures of the Fab regions of Abituzumab (PDB ID: 4O02), Ofatumumab (PDB ID: 3GIZ) and Vesencumab (PDB ID: 2QQN). MD simulations were performed on the iForge cluster at the National Center for Supercomputing Applications (NCSA), University of Illinois at Urbana-Champaign, on nodes with Nvidia V100 GPUs.

Analysis of trajectories

The analysis of the structural properties of the three-dimensional models of the Fv region of the 23 antibodies (7 mAbs and 2 ADCs provided by AbbVie, 14 clinical stage antibodies) and the trajectories described in the section above was performed with python scripts using the Schrödinger Python API.
Specifically, the solvent-accessible surface area (SASA) of the side chain of each methionine residue in the Fv region was computed (static SASA or sSASA); the time-averaged value of the SASA of each methionine side chain was calculated from the MD trajectories (dynamic SASA or dSASA); the 2-shell water coordination number (WCN), defined as the average number of water molecules within a radius of 6 Å from the sulfur atom in the side chain, was computed for each methionine residue in the Fv region of the 23 antibodies; similarly, the average number of hydroxyl groups (#OH) in the side chains of tyrosine, threonine and serine residues within a radius of 6 Å from the sulfur atom of each methionine was calculated from the trajectories. SASAs are reported as relative values, with the maximum allowed area as described by Tien et al. [48].

Oxidation stress

Oxidized samples of mAb1-7 were generated by diluting to 2 mg/ml in phosphate-buffered saline (PBS), pH 7.4. Samples were subsequently subjected to tert-butyl hydroperoxide (tBHP, Alfa Aesar, Haverhill, Massachusetts) at a final concentration of 0.1% for an incubation time of 24 hours at 21˚C in the dark. The oxidation reaction was subsequently quenched by the addition of free methionine in solution at a 10 mM final concentration. Samples were stored at -80˚C before further treatment or analysis. Details for the preparation and analysis of ADC1-2 and for replicates of mAb2, mAb4 and mAb7 (S1 Table) are provided in the Supporting Information. It is worth noting that the oxidation condition used to generate the data described here differs from the one reported for the dataset by Yang et al. [36] in the nature of the oxidizing agent. Both hydrogen peroxide and tBHP oxidize methionine residues by a nucleophilic substitution reaction [52]. H2O2 can react with oxidation-vulnerable amino acids, especially methionines, to generate irreversible modifications such as methionine sulfoxides; tBHP, a tertiary-butyl analog of H2O2, oxidizes predominantly surface-exposed methionine residues, and is therefore used to probe the effect of oxidation of exposed methionines on protein structure and stability [52]. It has been reported, however, that especially for methionines in the Fab of an antibody, forced oxidation studies carried out with H2O2 or tBHP produce similar oxidation levels and lead to the identification of the same methionines as oxidation prone [19].

Peptide mapping and oxidation quantification

Each oxidized sample was diluted to 0.5 mg/ml in denaturing buffer, 8 M guanidine-HCl (GuHCl), pH 8.5 (VWR, Radnor, Pennsylvania), to a final concentration of 6 M GuHCl and reduced in the presence of 5 mM dithiothreitol for 20 min at 37˚C. After alkylation in 17 mM iodoacetic acid (Alfa Aesar, Haverhill, Massachusetts) for 20 min at 37˚C, samples were buffer exchanged into digestion buffer (10 mM TRIS, pH 8) using 0.5 mL Zeba spin desalting columns, 7K MWCO. Exchanged samples were digested for 2 hours at 37˚C using a 1:10 enzyme:protein Trypsin/LysC mix (Promega, Madison, USA). After incubation, samples were acidified using 5% TFA (Sigma Aldrich, St. Louis, Missouri) to quench the proteolysis reaction and frozen for further analysis. Peptides generated during proteolysis were separated and analyzed on a Q Exactive™ Plus mass spectrometer (Thermo Fisher Scientific, Waltham, Massachusetts) using an ACQUITY Peptide BEH 300 C18 column (300 Å pore size, 1.7 μm particle size, 2.1 mm diameter and 150 mm length; Waters, Milford, MA, USA).
The solvents used for chromatographic separation were 0.1% formic acid in MS-grade water (mobile phase A) and 0.1% formic acid in acetonitrile (mobile phase B). Eluted peptides were sprayed by a HESI source with the spray voltage set at 3.5 kV, capillary temperature 300˚C, aux gas heater temperature 430˚C, and sheath gas and auxiliary gas flow rates of 50 and 15, respectively. The full MS scan was set at microscan 1, resolution 70000, AGC target 3e6, maximum IT 50 ms and scan range 200 to 2000 m/z. The dd-MS2 was set at microscan 1, resolution 17500, AGC target 1e5, maximum IT 150 ms and an isolation window of 2.0 m/z in a top-5 method. Quantification of the methionine oxidation was performed with Thermo Scientific Biopharma Finder 3.1.

Prediction of methionine oxidation propensity in clinical stage therapeutic (CST) antibodies: Homology modeling

Availability of reliable and accurate experimental data is a crucial step for the development and validation of any predictive algorithm. First, we considered the results presented by Yang et al. [36] in their study of methionine oxidation in monoclonal antibodies in the context of forced oxidation by hydrogen peroxide. Briefly, Yang et al. employed a high-throughput liquid chromatography-mass spectrometry-based method to identify oxidation events in three distinct segments of an antibody resulting from enzymatic cleavage: the light chain, the Fab portion of the heavy chain, and the Fc. The method was applied to 121 clinical stage mAbs, and for each segment of these molecules the fraction of the native (non-oxidized) species was reported together with the fraction of oxidized products. This approach led to ambiguous assignment of the oxidation events to a specific residue for segments containing more than one methionine. To better evaluate the accuracy of the predictions based on these structural features, among the 22 mAbs identified by Yang et al., we selected the 14 that displayed either 0% or 100% non-oxidized species. Partially oxidized cases were omitted since it is impossible to determine whether the measured oxidation signal for a segment results from only one methionine or from the combination of low-level oxidation of multiple methionine residues. In order to develop a more accurate predictive method for methionine oxidation propensity in monoclonal antibodies, we built the three-dimensional structures of the Fv of these 14 mAbs containing a buried methionine in the CDR-H3 loop using homology modeling. Based on previously described predictive methods [35,38], from these structures we calculated the static SASA (sSASA) for methionine side chains and, using MD simulations, the time-averaged SASA (dSASA) and water coordination number for methionine side chains (S2 Table). We then compared the calculated features with the experimental data described by Yang et al. Finally, we used the calculated structural features described above to predict the oxidation propensity of each methionine in the Fv portion of the heavy chain for the 14 mAbs. We defined threshold values for each feature to obtain a binary descriptor, with values of 0 and 1 corresponding to non-oxidation-prone and oxidation-prone methionine residues, respectively. This binary descriptor allowed for counting the oxidation events in the Fab portion of the heavy chain (Table 1). In the case of the sSASA and dSASA, we labeled a methionine as oxidation prone when its relative SASA was greater than 15%.
Previous studies report similar cut-off thresholds, 8-11%, with variability due to the use of different values for the maximum possible SASA of the methionine side chain [36]. For the water coordination number (WCN), we defined a methionine to be oxidation prone if at least 6 water molecules are within 6 Å of the sulfur atom. These values were chosen to maximize the agreement between the predicted values and the experimental results in this specific dataset. For each mAb, we counted the total number of oxidation-prone methionine residues, comparing the results with the experimental observations. We found that the predictions based on sSASA, dSASA and WCN provided similar results for the 14 mAbs in the dataset, with values of sensitivity (true positive rate, TPR) and specificity (true negative rate, TNR) within one standard deviation. These methods correctly predict all the negative oxidation events. However, they performed poorly in the prediction of positive oxidation events on this dataset, with a TPR of about 0.5. To improve upon the aforementioned methods, we expanded the concept of the water coordination number within the theoretical framework developed by Chu et al. [38]. In their work, Chu et al. computationally characterized the oxidation reaction of the sulfur atom in the methionine side chain by hydrogen peroxide. The authors identified the limiting step in the oxidation reaction as the charge separation that occurs between the two oxygen atoms in the hydrogen peroxide molecule. In the model, this charge separation is favored by the network of hydrogen bonds that is formed between the hydrogen peroxide and the surrounding water molecules. As noted by the authors, however, the polar side chains of other amino acids surrounding the methionine could form such hydrogen bonds when few or no water molecules are available. In light of these observations, we introduce here a new parameter, WCN-OH, that takes into account both water molecules and polar side chains containing a hydroxyl group (threonine, serine and tyrosine) within 6 Å of the sulfur atom in the methionine (Fig 2). This parameter is calculated from the MD trajectories by computing both the WCN and the number of hydroxyl groups (#OH) in neighboring side chains at each timestep and averaging them. The number of hydroxyl groups showed better agreement with the experimental results than other hydrogen bond donors/acceptors (i.e. amide groups). This finding can be explained by the difference in electronegativity between oxygen and nitrogen and the resulting different partial charges in the hydroxyl and amide groups. The parameter WCN-OH is used to label oxidation-prone methionine residues when one of the following conditions is satisfied (Table 1): (1) WCN is greater than 6, or (2) WCN is greater than 0.1 and #OH is greater than 1.5. Condition 1 is equivalent to the WCN method described above; condition 2 is based on the rationale that the methionine side chain must be accessible to a transient water molecule and that more than one hydroxyl group from neighboring side chains must be present within 6 Å of the sulfur atom. As for the other methods, the threshold values were chosen to maximize the agreement between the predicted values and the experimental results in this dataset.
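To make the labeling rule concrete, the following is a minimal NumPy sketch of how the WCN and #OH averages and the WCN-OH label could be computed from per-frame atom coordinates. It is an illustration rather than the Schrödinger Python API scripts used in this work: the layout of frames (one tuple of sulfur, water-oxygen and hydroxyl-oxygen coordinates per recorded frame) is a hypothetical convenience, while the 6 Å cutoff and the thresholds in conditions 1 and 2 are taken from the text above.

import numpy as np

CUTOFF = 6.0  # angstrom radius around the methionine sulfur, as in the text

def coordination_count(sulfur_xyz, donor_xyz, cutoff=CUTOFF):
    # Count donor atoms (water oxygens, or Ser/Thr/Tyr hydroxyl oxygens)
    # within `cutoff` of one methionine sulfur atom, for a single frame.
    donor_xyz = np.atleast_2d(np.asarray(donor_xyz, dtype=float))
    if donor_xyz.size == 0:
        return 0
    distances = np.linalg.norm(donor_xyz - np.asarray(sulfur_xyz, dtype=float), axis=1)
    return int(np.sum(distances < cutoff))

def wcn_oh_descriptors(frames):
    # Average WCN and #OH over a trajectory. `frames` yields one tuple
    # (sulfur_xyz, water_oxygen_xyz, hydroxyl_oxygen_xyz) per recorded frame.
    wcn_values, oh_values = [], []
    for sulfur_xyz, water_xyz, hydroxyl_xyz in frames:
        wcn_values.append(coordination_count(sulfur_xyz, water_xyz))
        oh_values.append(coordination_count(sulfur_xyz, hydroxyl_xyz))
    return float(np.mean(wcn_values)), float(np.mean(oh_values))

def is_oxidation_prone(wcn, n_oh):
    # Condition 1: WCN > 6; condition 2: WCN > 0.1 and #OH > 1.5.
    return wcn > 6.0 or (wcn > 0.1 and n_oh > 1.5)

In a real workflow, the water oxygens and the hydroxyl oxygens of threonine, serine and tyrosine side chains would be selected from the solvated trajectory before the distance counting, and the per-methionine averages would then be thresholded as above.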
With the threshold values described in conditions 1 and 2, the prediction method based on this new parameter provided good agreement with the experimental results (Table 1) and a significant improvement in prediction sensitivity (Fig 3) compared with the other methods described above. With this new method, only one oxidation event out of the 46 is wrongly assigned, and the resulting sensitivity is therefore 0.88±0.05.

Prediction of methionine oxidation propensity in CST antibodies: High-resolution experimental structures and advanced H3 loop modeling

In the previous section, we showed that both the WCN and the exposed surface of the side chain can be used to predict the oxidation propensity of methionine residues in mAbs. Unfortunately, these features were not able to account for all the factors that can lead to the oxidation reaction. As a result, we surmised that the poor correlation between WCN, sSASA and dSASA and the experimentally determined methionine oxidation may be a consequence of estimating the three-dimensional structures of the mAbs by homology modeling. To test this hypothesis, we considered three CST antibodies among the 22 with at least one methionine in the CDR-H3 loop for which an X-ray crystal structure is available in the RCSB Protein Data Bank: Abituzumab (PDB ID: 4O02), Ofatumumab (PDB ID: 3GIZ) and Vesencumab (PDB ID: 2QQN). For these structures we calculated the sSASA of the eleven methionine residues in the heavy chain of the Fab fragment and, using molecular dynamics simulations, the time-averaged values of the water coordination number and dSASA. Additionally, we computed the WCN-OH parameter as described in the previous section. The results, shown in Table 2, illustrate that the methods based on sSASA, dSASA and WCN correctly predict the experimentally observed oxidation of only one methionine residue, Vesencumab H:100B, and are unable to predict the second oxidation event for Vesencumab. The method based on WCN-OH, however, correctly predicts both oxidation events for Vesencumab, H:100B and H:100F (Fig 4), in agreement with the experimental results. Finally, we tested the hypothesis that ab initio calculation of the CDR-H3 loop structure might generate alternative conformations in which the experimentally observed oxidized methionines are significantly solvent-exposed. For this purpose, we identified Vesencumab as the candidate for further analysis. This mAb contains four methionines in the Fab portion of the heavy chain, two of which reside in the H3 loop (Fig 4), and two oxidation events are observed. We generated 10 models of the CDR-H3 loop using Prime [44,45] and calculated the static SASA of the side chain for the four methionines. The results, shown in Table 3, demonstrate that, as in the crystal structure, three of the four methionines (all except H:100B) have a SASA of less than 4 Å². These findings confirm that factors other than the degree of exposure to the solvent must be considered to improve the accuracy of the prediction of methionine oxidation propensity.

Test case: Internal dataset

We demonstrated in the previous sections that the solvent accessibility of a methionine side chain, whether assessed through the WCN or the SASA, is necessary but not sufficient to determine its oxidation propensity.
We illustrated, in fact, that the chemical environment of the methionine moiety, in particular the presence of proximal side chains containing a hydroxyl group, is a factor that affects the oxidation propensity of the residue. To further validate the predictive methods described above, we performed forced oxidation studies on seven IgG1 antibodies and two IgG1 ADCs provided by AbbVie and assessed the degree of oxidation of each methionine in the Fv region using peptide mapping. Consequently, we obtained experimental results that provide more detailed information for each residue (compared with the CST dataset described in the previous sections) and allow a better estimation of the prediction accuracy of the different methods. As shown in Table 4 and S1 Fig, we observed that six of the twenty-six methionines considered in our experiments showed an oxidation level ≥5%, the established minimum oxidation level. For each methionine, we calculated the sSASA, dSASA, WCN and WCN-OH parameters from the three-dimensional structures and the MD trajectories, as described above for the CST antibodies dataset (S3 Table). The predictions of the oxidation propensity based on the sSASA (Table 4) resulted in the incorrect assignment of two negative events and one positive event, as illustrated in the confusion matrix in Fig 5. Accordingly, this method was, for this dataset, the least accurate, with a sensitivity of 0.86±0.05 and a specificity of 0.90±0.02. The predictions based on the dSASA and the WCN provided similar results, correctly assigning all twenty negative events and five of the six positive events. The resulting sensitivity and specificity were 0.84±0.06 and 1.00±0.00 for dSASA and 0.83±0.03 and 1.00±0.00 for WCN, respectively. With the WCN-OH method, all six positive events and all twenty negative events were correctly assigned, with both the sensitivity and the specificity equal to 1. Although all methods performed better on this set of antibodies than on the CST set, only the WCN-OH method correctly predicted all positive and negative events. The likely cause of the better performance of all methods is the fraction of buried methionines in the two datasets: in the CST set, 91% of the methionines have a relative dSASA <15%, whereas in the proprietary dataset (n = 9) the fraction of buried methionines is 81%. In particular, the class of methionines for which the subset of CST antibodies was selected (CDR-H3 loop) is underrepresented in the internal dataset, with only 2 elements. However, it is worth noting that for the internal dataset the sSASA, dSASA and WCN methods incorrectly identified a methionine in the CDR-H3 loop as being non-oxidized. The chemical modification of such a residue might result in loss of antigen binding and possibly drug potency. Furthermore, the availability in this dataset of oxidation levels for each methionine in the Fv allowed for assessment of quantitative or semiquantitative prediction based on the accessibility of the methionine side chain. We observed that WCN, sSASA and dSASA showed some degree of semiquantitative agreement with the experimental data, with values of R², correlation and Spearman correlation coefficients in the range of 0.54-0.58, 0.74-0.76 and 0.59-0.67, respectively (S1 Fig).
These methods, although ineffective in predicting absolute values of the oxidation rate, might still be used to rank the relative oxidation of different methionines in an antibody.

Discussion

Methionine oxidation is a common chemical modification that occurs in therapeutic antibodies. It can significantly reduce the serum circulation half-life of the antibody and, if the methionine is located in the vicinity of the CDR, it can decrease the binding affinity of the antibody for the epitope [25,34]. For this reason, identification of oxidation sites during the early stages of antibody discovery provides an engineering opportunity to remove the liable site. The information gained could also guide the subsequent formulation and production processes toward high stability and drug potency if a candidate with an oxidation liability moves forward into development. To date, several methods have been developed by different groups to predict in silico the oxidation propensity of methionine residues in proteins. Interestingly, none of them identified a consensus sequence or motif that shows any correlation with the experimentally observed oxidation of the residues. Such motifs have been identified, for example, for other chemical modifications in proteins, such as deamidation [53]. Instead, accurate prediction of methionine oxidation propensity requires knowledge of the chemical environment of the sulfur atom in the methionine side chain [38]. Specifically, Chu et al. highlighted the crucial role of the water molecules surrounding the methionine side chain in stabilizing the transition state that represents the limiting step in the reaction [38]. From their seminal work, several methods to assess the oxidation propensity of methionine in proteins arose, mainly based on the solvent exposure of the methionine side chain or on the water coordination number. However, as reported by Yang et al., these methods fail to predict the oxidation propensity of at least one class of methionines in antibodies, specifically those situated at the end of the H3 loop in the CDR [36]. Despite these methionines having a side-chain relative SASA <11%, oxidation at this site occurs frequently (Yang et al. reported oxidation in 7 out of 22 antibodies with a methionine at this position) [36]. In view of this, a deeper understanding of the mechanism underlying this reaction and a structure-based analysis of the antibodies are required to obtain reliable prediction of the oxidation liabilities. In this work, we aimed to exploit the theoretical framework of the water coordination number to expand its prediction capability. In this framework, water molecules form hydrogen bonds with hydrogen peroxide and thereby stabilize the charge separation between the two oxygens that drives the oxidation reaction. For this reason, predictive methods based on this feature associate a high propensity to be oxidized with a large number of water molecules, usually 6 or more, within approximately 6 Å of the sulfur atom. An exception to this model are the methionine residues at the end of the H3 loop in the CDR identified by Yang et al. [36]. In the method presented here, WCN-OH, we consider not only the water molecules surrounding the methionine side chain, but also the side chains of other residues containing hydroxyl groups.
The rationale behind this new method lies in the observation, discussed in the work of Chu et al., that polar side chains in proximity to the methionine sulfur atom could play a role in stabilizing the transition state of the oxidation reaction by means of hydrogen bonds with the oxidizing species [38]. We showed that, compared with other methods, WCN-OH represents a significant improvement when applied to the Yang et al. dataset of mAbs containing a partially buried methionine near the end of the H3 loop in the CDR (14 mAbs, 46 methionines). We further validated the WCN-OH method on an internal dataset of proprietary molecules (7 mAbs and 2 ADCs, 26 methionines in total). On this dataset, WCN-OH correctly predicted the oxidation propensity of all 26 methionines. Interestingly, WCN-OH was the only method able to predict the oxidation of a methionine in the H3 loop of the CDR, whose modification represents a potential liability during drug development because of the risk of reduced binding affinity. We determined the time-averaged values of WCN and SASA from 5 ns MD trajectories. Such trajectories can be collected in less than one hour on a GPU-accelerated compute node, allowing the screening of a large number of candidates during the antibody-discovery process. The oxidation reaction of methionine residues in antibodies, however, occurs several orders of magnitude more slowly. For example, Agrawal et al. studied methionine-oxidation kinetics in the presence of 0.1% H2O2 in the Fv and Fc; the measured pseudo-first-order rate constants were 1.33 h⁻¹ and 0.25 h⁻¹, respectively [54]. Thus, extending the simulations may improve performance, but any added benefit in accuracy needs to outweigh the added runtime burden. Moreover, side-chain dynamics and local conformational changes occur on a fast time scale (ps), which ns-long simulations accurately capture [35]. Lastly, we report that the predictions from 5 ns trajectories effectively identify oxidation-liable methionines. The results presented here show that WCN-OH represents an improvement over current algorithms for predicting methionine oxidation propensity. In particular, WCN-OH has been shown to be better at predicting the oxidation of methionine residues partially buried within the three-dimensional structure of the mAb. This new method not only improves prediction accuracy, but also provides additional atomic-level insight into the methionine oxidation mechanism. We anticipate that prospective application of the WCN-OH method to more extensive datasets will further validate its improved accuracy and applicability to antibody-drug development.
7,768.6
2022-12-29T00:00:00.000
[ "Chemistry", "Medicine" ]
Induction of immunogenic cell death of tumors by newly synthesized heterocyclic quinone derivative

Many cancer types are serious diseases causing mortality, and new therapeutics with improved efficacy and safety are required. Immuno-(cell)-therapy is considered one of the promising therapeutic strategies for curing intractable cancer. In this study, we tested R2016, a newly developed heterocyclic quinone derivative, for induction of immunogenic tumor cell death and as a possible novel immunochemotherapeutic. We studied the anti-cancer effects of R2016 against LLC, a lung cancer cell line, and B16F10, a melanoma cell line. LLC (non-immunogenic) and B16F10 (immunogenic) cells were killed by R2016 in a dose-dependent manner. R2016 reduced the viability of both LLC and B16F10 tumor cells by inducing apoptosis and necrosis, while it demonstrated no cytotoxicity against normal splenocytes. Expression of immunogenic death markers, including calreticulin (CRT) and heat shock proteins (HSPs), on the cell surface of R2016-treated tumor cells was increased, along with the induction of their genes. Increased CRT expression correlated with dendritic cell (DC) uptake of dying tumor cells: the proportion of CRT+CD11c+ cells was increased in the R2016-treated group. The gene transcription of Calr3, Hspb1, and Tnfaip6, which are related to the immunogenicity induction of dead cells, was up-regulated in the R2016-treated tumor cells. On the other hand, ANGPT1, FGF7, and URGCP gene levels were down-regulated by R2016 treatment. These data suggest that R2016 induces immunogenic tumor cell death and point to R2016 as an effective anti-tumor immunochemotherapeutic modality.

Introduction

Cancer is a serious malady, and in its malignant form, it leads to inevitable death depending on its type and stage of discovery. In many cases, the present anti-cancer therapies of surgical operation, chemotherapy, and radiotherapy are not adequately therapeutic, and these methods also produce serious side-effects such as toxicity to normal cells and tissues [1]. To eliminate a tumor completely, inducing tumor-specific immunity is considered an effective therapeutic strategy [2]. Immunogenic death of tumor cells induced by certain chemotherapeutics, such as anthracyclines, may thus be an effective therapeutic strategy [3,4]. This immunogenic cell death is characterized by the early cell-surface exposure of the chaperone proteins CRT and HSPs and by the late cell-apoptosis marker high mobility group box 1 (HMGB1), which affect dendritic cell (DC) maturation and the uptake and presentation of tumor antigens by DCs [5][6][7][8][9]. As such, inducing immunogenic tumor cell death may enhance the effectiveness of DC-based antitumor therapies.
Naturally occurring quinones, which are widely found in plants, animals, fungi and bacteria, possess various potent biological activities, including anti-fungal and anti-tumoral activities [10][11][12][13][14]. The cytotoxic effects of these quinones are primarily attributed to DNA intercalation [15]. A variety of analogues of heterocyclic quinone have been designed and synthesized. R2016 (3-(4-chlorophenylamino)-6-hydroxy-9-methyl-9H-carbazole-1,4-dione) (Fig 1) is a newly designed and synthesized heterocyclic quinone compound, originally devised as an anti-fungal agent [16]. No studies verifying immunogenic death induction by R2016 as an anti-tumor entity have been reported. In this study, the potential of R2016 as an immunogenic cell death inducer was tested, together with the related molecular changes in the target cells. These data may provide the scientific rationale for the development of R2016 as a new immuno-chemotherapeutic displaying enhanced anti-tumor potency.

Animals

Pathogen-free female C57BL/6 mice, 5-6 weeks old, were purchased from Orient Bio (Seongnam, South Korea). The mice were provided with water and food ad libitum and housed under a 12 h light/12 h dark cycle in the animal care facility of the Animal Resource Center at the Asan Institute for Life Science and Technology (Asan Medical Center, Seoul, South Korea). Animal care was performed according to the Institute for Laboratory Animal Research (ILAR) guidelines. The mice were acclimated for at least one week before any experiments were conducted. The animal research was approved by the animal research ethics committee of Asan Medical Center, Seoul, Korea (AMC IACUC; approval #2015-02-185).

Reagents

R2016 was synthesized and supplied by Dr. Chung-Kyu Ryu (Ewha Womans University, Seoul, Korea). Doxorubicin hydrochloride was purchased from Sigma-Aldrich (St. Louis, MO, USA). Dulbecco's modified Eagle's medium (DMEM) and gentamicin were obtained from GIBCO Laboratories (Grand Island, NY, USA), and fetal bovine serum (FBS) was from HyClone Laboratories (Logan, UT, USA). Annexin V/PI and the antibodies for flow cytometric phenotyping were purchased from eBioscience (San Diego, CA, USA); these included the fluorescence-labeled monoclonal Abs against calreticulin (CRT), HSP60, HSP70, and HSP90. ELISA kits for cytokines including TGF-β1, IL-10, and IL-12 were also purchased from eBioscience.

Cell lines

C57BL/6-syngeneic Lewis lung carcinoma (LLC) and B16F10 (melanoma) cell lines were purchased from the American Type Culture Collection (ATCC) (Rockville, MD, USA). All cell lines were maintained in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% heat-inactivated fetal bovine serum (FBS) and 10 mg/ml gentamicin at 37˚C in a 5% CO2 atmosphere.

Measurement of the viability of spleen cells

Spleen cells were obtained from C57BL/6 mice. Briefly, spleens were disrupted mechanically and treated with hypotonic lysis buffer to remove red blood cells. The spleen cells were seeded at a concentration of 1×10⁵ cells/200 μl/well in 96-well culture plates for the cell viability assay. The cells were cultured in the presence of R2016 (0.1, 0.5, 1, 1.5, 2, 2.5, 3 μg/ml) for 48 or 72 h in 96-well plates at 37˚C in a 5% CO2 incubator. At the end of each incubation period, the wells were treated with 20 μl/well of Cell Counting Kit-8 solution (Dojindo Laboratories) for the last 3 h, and the optical density of the wells was measured at 450 nm using a microplate reader (Bio-Tek).
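As a side note, the CCK-8 readout is typically converted to percent viability as the blank-corrected absorbance of treated wells relative to untreated control wells. The paper does not spell the formula out, so the sketch below reflects standard practice rather than the authors' exact calculation, and the OD450 values shown are hypothetical.

import numpy as np

def percent_viability(od_treated, od_control, od_blank):
    # Conventional CCK-8 estimate: blank-corrected OD450 of treated wells
    # expressed as a percentage of the blank-corrected untreated control.
    od_treated = np.asarray(od_treated, dtype=float)
    return 100.0 * (od_treated - od_blank) / (od_control - od_blank)

# Hypothetical triplicate OD450 readings at one R2016 dose
print(percent_viability([0.82, 0.79, 0.85], od_control=1.20, od_blank=0.10))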
Flow cytometric analysis

Phenotype observation. The phenotype and immunogenicity-inducing ability of R2016-treated cells (LLC, B16F10) were analyzed by direct immunofluorescence staining of cell-surface antigens using fluorescein isothiocyanate (FITC)- or phycoerythrin (PE)-conjugated antibodies against HSP60, HSP70, HSP90, and CRT. Single cells were incubated with fluorescence-labeled surface antibodies in PBS with 0.1% sodium azide and 1% FBS (PBS-CS) for 40 min at 4˚C. No more than 1 h after antibody labeling, cells in 300 μl PBS-CS were analyzed with a CytoFLEX (Beckman-Coulter, Pasadena, CA, USA). Cultured BM-DC characterization was performed after R2016 treatment by staining with FITC- or PE-conjugated mAbs against Clec9A, CD197, CD11c, CD8a, CD80, MHC-I, CD86, and MHC-II. Data were analyzed using FlowJo software (FlowJo LLC, Ashland, OR, USA).

Apoptosis detection. Cell death was assessed by Annexin V-allophycocyanin (APC) staining. R2016-treated tumor cells were collected, washed and re-suspended in an incubation buffer containing Annexin V-APC antibodies. The samples were kept in the dark after the addition of the staining antibodies and incubated for 15 minutes before the addition of 0.1% propidium iodide (PI). Labeled cells were analyzed on a CytoFLEX (Beckman-Coulter, Pasadena, CA, USA), and the data were analyzed using FlowJo software.

DC uptake of tumor cells

R2016-treated tumor cells were labeled with CRT-FITC and then co-cultured with CD11c-PE-labeled DCs for 6 h at 37˚C in a humidified CO2 incubator. The cells were then washed twice. Flow cytometric observation was performed on a CytoFLEX (Beckman-Coulter), and the data were analyzed with FlowJo software. FITC and PE double-positive cells were considered DCs that had taken up tumor cells.

Signal protein phosphorylation

The R2016-treated tumor cells were washed once with Stain Buffer (BD Pharmingen, Franklin Lakes, NJ, USA) and centrifuged to pellet the cells. They were then incubated on ice for 30 min. After washing and centrifugation at 250 × g for 10 minutes, the supernatants were removed. The cells were resuspended in Stain Buffer at 1×10⁷/ml and aliquoted at 100 μl per flow tube for PE anti-mouse pSTAT antibody staining (BD Biosciences), then analyzed by flow cytometry.

Microarray protocol

Total RNA was isolated, labeled, and prepared for hybridization to an 11K mouse oligonucleotide microarray gene chip (Macrogen Inc., Seoul, Korea) following the manufacturer's instructions. Hybridization was conducted overnight using 15 μg of labeled RNA product, after which the arrays were scanned using Affymetrix scanners. The gene expression profiles of the cells were created using the Affymetrix system (Beyond Bioinformatics ISTECH AATC, Gyeonggi, South Korea) in conjunction with the Mouse Genome 430A 2.0 Array, which contains approximately 54,675 probes. Pre-treatment was conducted using GCOS global scaling in GenPlex software (Istech Corp., Goyang-si, Gyunggi-do, Korea). Differences in the distribution of data were confirmed by comparing an MA plot of the control array to a plot of the experimental array. Data were considered significant when gene expression changed by at least twofold at three consecutive time-points compared with the expression of the control (0 h). Increased gene expression also had to include at least one present call (Affymetrix algorithm), or both control points needed to be present when gene expression increased or decreased.
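As a rough illustration of the twofold significance rule described above, a small pandas sketch follows. The time-point column names and the example intensities are hypothetical (the rule itself, at least a twofold change at three consecutive time points relative to the 0 h control, is from the text), and the Affymetrix present-call criterion is not implemented here.

import numpy as np
import pandas as pd

def twofold_filter(expr, control_col="0h", time_cols=("6h", "12h", "18h"), n_consecutive=3):
    # Flag probes whose expression changes at least twofold, relative to the
    # 0 h control, at n_consecutive consecutive time points.
    log2fc = np.log2(expr[list(time_cols)].div(expr[control_col], axis=0))
    changed = (log2fc.abs() >= 1.0).astype(int)   # |log2 FC| >= 1 is a twofold change
    runs = changed.T.rolling(n_consecutive).sum() # count consecutive passes along time
    return runs.max() >= n_consecutive            # boolean Series, one entry per probe

# Hypothetical normalized intensities for two probes
expr = pd.DataFrame(
    {"0h": [100, 100], "6h": [210, 120], "12h": [260, 90], "18h": [300, 400]},
    index=["probeA", "probeB"],
)
print(twofold_filter(expr))  # probeA passes (three consecutive >= 2x), probeB does not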
RT-PCR

Total RNA was prepared from the R2016-treated tumor cells using the RNeasy mini kit (Qiagen Inc., Germantown, MD, USA). The RNA was then transcribed into complementary deoxyribonucleic acid (cDNA) according to the amfiRivert cDNA synthesis master mix kit instructions (Gendepot, Barker, TX, USA). Next, the cDNA was used per reaction with TAKARA EX Taq PCR master mix. The RT-PCR reactions were performed on a PTC-100 Peltier Thermal Cycler instrument (MJ Research/Bio-Rad, Hercules, CA, USA). The PCR was a three-stage reaction: initial denaturation at 95˚C for 15 minutes, followed by 40 cycles of denaturation at 95˚C for 60 seconds, annealing at 60˚C for 60 seconds, and extension at 72˚C for 60 seconds. Mouse glyceraldehyde-3-phosphate dehydrogenase (GAPDH) acted as an internal reference. Image analysis was performed with Multi Gauge v3.0. Based on the gene sequences published in the GenBank database, Primer-BLAST software in PubMed was used to design the primers, as presented in S1 Table.

Statistical analysis

Experiments with the same protocol were repeated at least 3 times. Data were expressed as mean±standard error (SE). Statistical significance was determined by ANOVA, followed by Tukey's range test. P-values of less than 0.05 or 0.001 indicated statistical significance.

Surface expression of immunogenicity-inducing molecules on the tumor cells treated with R2016. As immunogenic cell death signals, tumor cell-surface expression of calreticulin (CRT) and heat shock proteins (HSP60, HSP70, and HSP90) was measured after R2016 treatment. Flow cytometry results show that at 1.5 μg/ml (4.25 μM), approximately the IC50 dose of R2016, there was a rapid translocation of CRT and heat shock proteins onto the surface of LLC and B16F10 cells 18 h after the treatment (Fig 4A and 4B). After R2016 treatment, CRT and heat shock protein expression increased more on the LLC cells than on the B16F10 cells. CRT and HSP90 were induced more strongly by R2016 than by doxorubicin for both tumor cell types (Fig 4A and 4B).

Secretion of HMGB1, a nuclear cytokine. As another immunogenic cell death signal, the level of HMGB1, a nuclear cytokine that mediates various immune responses [17], was measured in the supernatant of R2016-treated tumor cells (Fig 5A and 5B). The HMGB1 released from the R2016-treated cells was significantly increased in a dose-dependent manner compared with that released from the untreated cells. This observed release of HMGB1 was an additional indicator that R2016 induced immunogenic cell death in these tumor cells.

Cytokine secretion analysis with ELISA. The effect of R2016 on the secretion of TGF-β1, IL-10, and IL-12 from the tumor cells was investigated (Fig 6A and 6B). In R2016-treated tumor cell lines, secretion of TGF-β1, an immune-suppressive cytokine, was reduced in a dose-dependent manner. The effect of R2016 on IL-10 secretion was, however, not significant. Normally, it is unusual to detect secretion of IL-12, a Th1 response inducer, from the tumor cells studied, and in this study also, only a negligible amount (<10 pg/ml) of IL-12 was observed from the untreated LLC or B16F10 cells (Fig 6A and 6B). Interestingly, however, tumor cells treated with R2016 secreted IL-12 at a detectable level (>30 pg/ml). The data indicate that R2016 treatment alters the tumor cell microenvironment in a manner favorable to anti-tumor immune responses.
Uptake of CRT-expressing tumor cells by DCs. CRT is known as an "eat-me" signal, inducing DC uptake of cells with CRT surface expression. It was hypothesized that R2016 would induce the immunogenic cell death of LLC and B16F10 cells with CRT expression. To confirm that R2016 induced immunogenic cell death, DCs (CD11c+) and R2016-treated tumor cells (CRT+) were co-cultured. DC uptake of tumor cells was determined by measuring CD11c+CRT+ double-positive cells by flow cytometry. DCs took up the R2016-treated tumor cells more than the untreated control tumor cells (0.88% vs. 35.2% for LLC and 1.53% vs. 39.7% for B16F10; untreated vs. R2016-treated tumor cells, respectively) (Fig 7A and 7B).

Molecular signal alteration induced in the R2016-treated tumor cells

Signal Transducer and Activator of Transcription (STAT) signal activation. STAT intracellular signaling has been demonstrated to be significant in cancer development, treatment and prognosis. Reports indicate that constitutive STAT3 activation is associated with anti-apoptotic as well as proliferative effects in various human cancers. Not surprisingly, poor prognosis and promotion of oncogenesis have been reported for constitutive activation of STAT3 [18][19][20]. For R2016-treated tumor cells, a dose-dependent reduction of phosphorylated STAT3 was observed in both LLC and B16F10 cells (Fig 8A and 8B). A constitutive presence of activated (phosphorylated) STAT5 has also been reported in cancer cells [21], suggesting a role in cancer formation and malignant transformation. Upon R2016 treatment, the levels of phosphorylated STAT5 were reduced in LLC and B16F10 cells (Fig 8A and 8B). Unlike the STAT3 and STAT5 signals, which are related to cell proliferation and death, another STAT molecule, STAT1, has a role in type I and type II IFN signaling. R2016 did not affect STAT1 phosphorylation (Fig 8A and 8B).

RT-PCR analysis

To confirm the genetic modulation induced by R2016, RT-PCR was performed for differentially expressed genes observed in the microarray data (S2 and S3 Tables). In the R2016-treated B16F10 cells, the expression of IL10RB, TGFB1, and TLR6 was reduced significantly, but not in the LLC cells (Fig 9A and 9B). Meanwhile, TLR4 expression was reduced significantly by R2016 in both tumor cell types (4× vs. 2× reduction relative to control in LLC vs. B16F10 cells, respectively). The expression of RFX, CD274, IL12RB1, CASP8, and CRT tended to increase in both LLC and B16F10 cells with R2016 treatment. Among the up-regulated genes, the induction of CRT expression was the most significant (3× vs. 2× induction relative to control in LLC vs. B16F10 cells, respectively) (Fig 9A and 9B).

Discussion

Heterocyclic quinone compounds have demonstrated potent antifungal and other biologic activities. R2016, a newly synthesized heterocyclic quinone derivative developed by Dr.
Chung-Kyu Ryu as an anti-fungal agent, proved in cytotoxicity tests to have the ability to kill tumor cells without harming the immune cells used. These observations led us to pursue R2016 as a new candidate anti-tumor agent and to assay its effects with regard to inducing immunogenic cell death in mouse lung cancer LLC and melanoma B16F10 cells. Both the LLC and B16F10 cells were killed by R2016 at over 0.5 μg/ml in a dose-dependent manner (Fig 2). Melanoma cells were more sensitive to R2016-induced cytotoxicity than the lung cancer cells. Immunogenic cell death is known to be initiated by the induction of apoptosis. Unlike doxorubicin, which induced necrosis, R2016-induced tumor cell death was observed to result from the induction of both necrosis and late apoptosis. The R2016-induced apoptotic cell death also increased with the dose of the compound (Fig 3A and 3B). These data suggested the possibility that R2016, a heterocyclic quinone compound, is a new type of immunogenic cell death inducer, similar to the anthracycline derivatives, which are known immunogenic cell death inducers and are also used as chemotherapeutics.

To confirm the immunogenic cell death induced by R2016, several immunological assays beyond direct cytotoxicity and apoptosis assays were performed. The enhanced immunogenicity of R2016-killed tumor cells was strongly linked to the induction of CRT and HSPs on the surface of the dying tumor cells and the extracellular release of HMGB1. This translocation of CRT or HSPs onto the surface of dying cells is assumed to be a mechanism underlying the increased immunogenicity of the apoptotic cells, as this study revealed for the treated LLC and B16F10 cells during their R2016-induced apoptotic cell death (Fig 4A and 4B). Furthermore, HMGB1, another immunogenic molecule, was released at the time of cell death induction by R2016 (Fig 5A and 5B). High mobility group box 1 (HMGB1) protein, also known as high-mobility group protein 1 (HMG-1) and amphoterin, is a nuclear cytokine that can form complexes with ligands to enhance the immune response [22]. The above data supported the premise that R2016 could induce immunogenic cell death of the tumor cells.

Altered levels of immunomodulatory cytokines relating to R2016 treatment of cells, as part of their roles in the induction of immunogenic tumor cell death, were also investigated. The tumor microenvironment is controlled, in part, through the cytokine milieu modulated by tumor cells. The immune-suppressive cytokines TGF-β1 and IL-10 are representative factors forming a tumor-favorable environment. In R2016-treated tumor cells, the secretion of both TGF-β1 and IL-10 was suppressed, suggesting a contribution to the immunogenic anti-tumor effect of R2016. In addition, the production of IL-12, a Th1 response-inducing immune-stimulatory cytokine, was significantly induced by R2016 treatment of the tumor cells. These observations pointed to R2016 treatment modulating the tumor microenvironment in favor of an anti-tumor effect.
One marker of immunogenic cell death is CRT, which is known as an "eat me" signal expressed on the surface of dying tumor cells, leading to the recruitment of DCs to engulf the tumor cells and inducing tumor-specific immunity. Induction of CRT expression was observed in R2016-treated LLC and B16F10 tumor cells (Fig 4). To confirm the role of CRT as a DC recruitment signal, R2016-treated tumor cells and DCs were co-cultured to observe any differences in the uptake of tumor cells by DCs. The results showed that CD11c+ DCs take up the R2016-treated CRT+ tumor cells more readily than untreated tumor cells (Fig 7).

Molecular alterations were examined to confirm and define the mechanism of R2016-induced immunogenic tumor cell death. Activation of STAT signaling was observed by flow cytometry, measuring phosphorylated STAT1, STAT3, and STAT5 levels. STAT proteins are known to be major transcriptional mediators of various fundamental cell functions, including proliferation, apoptosis, differentiation, and immune responses. Constitutive activation (phosphorylation) of STAT3 and STAT5 is known to occur in various cancer cells and is associated with tumor cell proliferation, invasion and survival, along with suppression of anti-tumor immunity [23,24]. In the R2016-treated LLC and B16F10 tumor cells, the levels of phosphorylated STAT3 and STAT5 were both reduced by 40-60% (Fig 8A and 8B). R2016-induced apoptotic tumor cell death and the induction of anti-tumor-favorable immunity may thus occur in part through the inhibition of STAT3 and STAT5 activation.

The transcriptional changes in R2016-treated tumor cells were also analyzed by microarray profiling (S2 and S3 Figs) and confirmed by RT-PCR (Fig 9). Induction of genes for TNFRSF members, including the Fas-associated death domains (FADD, TRADD), caspase 8 and the FADD-like apoptosis regulator (CFLAR), together with the reduction of anti-apoptotic genes such as STAT6, may explain the function of R2016 as a candidate tumoricidal chemotherapeutic. Along with the up-regulation of genes relevant to immunogenic cell death, including CRT, the expression of various immunogenicity-inducing genes was also increased by R2016 treatment; these included those for the cytokine receptor IL12RB1, various chemokines (CCL2, etc.), MHC molecule expression regulatory factors (RFX) and the B7 family molecule CD274. The level of the immune-suppressive cytokine transforming growth factor beta (TGFB), which would otherwise favor a tumor microenvironment, was also inhibited in the R2016-treated tumor cells. Genetic alterations as possible molecular mechanisms for R2016-induced immunogenic cell death of tumor cells were also confirmed by RT-PCR. Interestingly, the expression of TLR4 and TLR6 was reduced in the R2016-treated tumor cells, indicating that the initiating signal from R2016 is quite different from LPS or other ligands of TLR4 and TLR6. Further study is being conducted to find the detailed signaling mechanism for R2016-induced immunogenic cell death of tumor cells, although, as shown here, R2016-induced apoptotic tumor cell death may in part occur through the inhibition of STAT3 and STAT5 phosphorylation.

In conclusion, the data herein indicate that R2016 can induce immunogenic cell death of tumor cells without killing normal lymphocytes. The data suggest that R2016 may be a new chemotherapeutic agent with improved efficacy and safety. Further study is being performed to establish R2016 as a novel cancer chemotherapeutic.
Fig 3. Apoptotic cell death was observed after R2016 treatment in LLC (A) and B16F10 (B) cell lines. Cells were treated for 18 hr with the indicated concentrations of R2016 (0.5, 1.5, 2.5 μg/ml) and doxorubicin (0.01 μg/ml). The percentage of apoptotic cells was determined by measuring Annexin V/PI staining using flow cytometry. Each experiment was done in triplicate. doi:10.1371/journal.pone.0173121.g003
4,636.8
2017-03-10T00:00:00.000
[ "Chemistry", "Medicine" ]
Identification of Peptide Inhibitors of Enveloped Viruses Using Support Vector Machine The peptides derived from envelope proteins have been shown to inhibit the protein-protein interactions in the virus membrane fusion process and thus have a great potential to be developed into effective antiviral therapies. There are three types of envelope proteins each exhibiting distinct structure folds. Although the exact fusion mechanism remains elusive, it was suggested that the three classes of viral fusion proteins share a similar mechanism of membrane fusion. The common mechanism of action makes it possible to correlate the properties of self-derived peptide inhibitors with their activities. Here we developed a support vector machine model using sequence-based statistical scores of self-derived peptide inhibitors as input features to correlate with their activities. The model displayed 92% prediction accuracy with the Matthew’s correlation coefficient of 0.84, obviously superior to those using physicochemical properties and amino acid decomposition as input. The predictive support vector machine model for self- derived peptides of envelope proteins would be useful in development of antiviral peptide inhibitors targeting the virus fusion process. Introduction Fusion process is the initial step of viral infection, therefore targeting the fusion process represents a promising strategy in design of antiviral therapy [1]. The entry step involves fusion of the viral and the cellular receptor membranes, which is mediated by the viral envelope (E) proteins. There are three classes of envelope proteins [2]: Class I E proteins include influenza virus (IFV) hemagglutinin and retrovirus Human Immunodeficiency Virus 1 (HIV-1) gp41; Class II E proteins include a number of important human flavivirus pathogens such as Dengue virus (DENV), Japanese encephalitis virus (JEV), Yellow fever virus (YFV), West Nile virus (WNV), hepatitis C virus (HCV) and Togaviridae virus such as alphavirus Semliki Forest virus (SFV); Class III E proteins include vesicular stomatitis virus (VSV), Herpes Simplex virus-1 (HSV-1) and Human cytomegalovirus (HCMV). Although the exact fusion mechanism remains elusive and the three classes of viral fusion proteins exhibit distinct structural folds, they may share a similar mechanism of membrane fusion [3]. A peptide derived from a protein-protein interface would inhibit the formation of that interface by mimicking the interactions with its partner proteins, and therefore may serve as a promising lead in drug discovery [4]. Enfuvirtide (T20), a peptide that mimicks the HR2 region of Class I HIV-1 gp41, is the first FDA-approved HIV-1 fusion drug that inhibits the entry process of virus infection [5][6][7]. Then peptides mimicking extended regions of the HIV-1 gp41 were also demonstrated as effective entry inhibitors [8,9]. Furthermore, peptides derived from a distinct region of GB virus C E2 protein were found to interfere with the very early events of the HIV-1 replication cycle [10]. Other successful examples of Class I peptide inhibitors include peptide inhibitors derived from SARS-CoV spike glycoprotein [11][12][13] and from Pichinde virus (PICV) envelope protein [14]. Recently, a peptide derived from the fusion initiation region of the glycoprotein hemagglutinin (HA) in IFV, Flufirvitide-3 (FF-3) has progressed into clinical trial [15]. 
The success of developing the Class I peptide inhibitors into clinical use has triggered interest in the design of inhibitors of the Class II and Class III E proteins, e.g. several hydrophobic peptides derived from the Class II DENV and WNV E proteins exhibited potent inhibitory activities [16][17][18][19][20]. In addition, a potent peptide inhibitor derived from domain III of the JEV glycoprotein and a peptide inhibitor derived from the stem region of the Rift Valley fever virus (RVFV) glycoprotein were reported [21,22]. Examples of the Class II peptide inhibitors of enveloped viruses also include those derived from the HCV E2 protein [23,24] and from Claudin-1, a critical host factor in HCV entry [25]. Moreover, peptides derived from the Class III HSV-1 gB also exhibited antiviral activities [26][27][28][29][30][31], as did those derived from HCMV gB [32]. Computational informatics plays an important role in predicting the activities of peptides generated from combinatorial libraries. In silico methods such as data mining, genetic algorithms and vector-like analysis have been reported to predict the antimicrobial activities of peptides [33][34][35]. In addition, quantitative structure-activity relationships (QSAR) [36][37][38][39][40] and artificial neural networks (ANN) have been applied to predict the activities of peptides [41,42]. Recently, a support vector machine (SVM) algorithm was employed to predict antiviral activities using the physicochemical properties of general antiviral peptides [43]. However, the mechanism of action of antiviral peptides differs from that of antimicrobial peptides; in fact, various protein targets are involved in virus infection, e.g. HIV-1 infection involves virus fusion, integration, reverse transcription, maturation, etc. Thus it is difficult to retrieve common features from general antiviral peptides that represent their antiviral activities. Virus fusion is mediated by E proteins. Although E proteins are highly divergent in sequence and structure, they share a common pathway of membrane fusion dynamics, i.e. E proteins undergo a significant conformational change to form a trimer-of-hairpins, which drives the fusion of the viral membrane and the host membrane [44]. The antiviral peptides derived from envelope proteins function by in situ binding to their respective accessory proteins, disrupting formation of the trimer-of-hairpins and membrane fusion, and therefore inhibiting virus infection. In view of the important role of E proteins in the virus fusion process and the common mechanism of action of self-derived peptides, we developed an SVM model to predict the antiviral activities of self-derived peptides using sequence-based statistical scores as input features. The sequence-based properties were calculated by a conditional probability discriminatory function which indicates the propensity of each amino acid for being active at a specific position. Our model exhibited remarkably higher accuracy in predicting the activities of self-derived peptides, compared to the previous models developed for general antiviral peptides using classical physicochemical properties as descriptors [43]. The method would be useful in the identification of entry inhibitors as a new generation of antiviral therapies. Data collection 202 peptide virus entry inhibitors of enveloped viruses were collected; among them, 101 are active peptides and 101 are non-active peptides. Of these, 75 active and 75 non-active peptides comprised the 75p+75n training set of the SVM models.
The remaining 26 active and 26 non-active peptide inhibitors were used as the test set. Amino acid composition. Amino acid composition is the fraction of each amino acid in a peptide. The fraction of each of the 20 amino acids was calculated using the following equation: fraction of amino acid X = (total number of X) / (peptide length). Physicochemical properties Five physicochemical properties were used in the SVM models. Isoelectric point (PI), molecular weight (MW) and grand average of hydropathicity (GRAVY) [45] were calculated using the ProtParam tool implemented in the ExPASy web server. Solvent accessibility and secondary structure features were calculated using the SSpro and ACCpro packages implemented in the SCRATCH protein predictor server [46]. Sequence-based statistical scoring function. The knowledge-based statistical function is developed from the concept of the residue-specific all-atom probability discriminatory function (RAPDF) [47]. RAPDF is a structure-based statistical scoring function. It is based on the assumption that averaging over different atom types in experimental conformations is an adequate representation of the random arrangements of these atom types in any compact conformation. Here we developed a sequence-based statistical scoring function, where we presume that averaging over different amino acid sequences with experimentally validated inhibitory activities is an adequate representation of random amino acid sequences with any inhibitory activity. The basis of this assumption is that the peptides share a common mechanism of action, i.e. the peptides derived from E proteins bind competitively to their partner proteins, disrupt the formation of the trimer-of-hairpins, and therefore inhibit virus membrane fusion. The sequence-based scoring function has the following form: S({q_a^i}) = −ln ∏_i [ P(q_a^i | C) / P(q_a^i) ], with q_a^i ∈ {active}. P(q_a^i | C) is the probability of observing amino acid i in an active peptide sequence; P(q_a^i) is the probability of observing amino acid i in any peptide sequence, active or non-active. They are approximately estimated from the observed frequencies in the corresponding datasets. Similarly, we employed a dataset of experimentally verified non-active peptides in developing the statistical function, where q_a^i ∈ {inactive}. For a given amino acid sequence, 20 columns of input are generated, corresponding to the occurrence of the twenty natural amino acids at each position. Each column is assigned a value of N × (−log-likelihood), where N is the count of that amino acid and the −log-likelihood is derived from the statistical function score. Each of the features thus combines the propensity of the amino acid for being active or non-active with the corresponding amino acid composition. Below is an example of calculating the statistical scores for a given peptide sequence: the amino acid order for the SVM input features is set as ACDEFGHIKLMNPQRSTVWY. If the amino acid sequence of an active peptide inhibitor is: SVM Parameter Optimization SVM models combined with radial basis function (RBF) kernel parameters were developed using the C-SVC module in LIBSVM (version 3.1) [48,49] and executed under the Matlab interface. The performance of an SVM depends on two parameters, gamma (-g) and cost (-c) [50]. The default value is 1 for -c and 1/k for -g, where k is the number of input entries. Various pairs of (c, g) values were converted to exponential values (i.e. 2^x, 2^y) and optimized using cross-validation, and the pair with the best cross-validation accuracy was selected.
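The (c, g) grid search described above maps directly onto standard LIBSVM front ends. As a rough illustration (not the authors' original MATLAB/LIBSVM scripts), the Python sketch below uses scikit-learn, whose SVC class wraps LIBSVM's C-SVC, to scan exponentially spaced (C, gamma) pairs with 5-fold cross-validation; the feature matrix X (one row of 20 statistical scores per peptide) and the activity labels y are assumed to be prepared as described in the text.

```python
# Hedged sketch: exponential grid search over (C, gamma) for an RBF C-SVC,
# mirroring the LIBSVM procedure described in the text. X is an (n_peptides, 20)
# array of statistical-score features; y holds 1 (active) / 0 (non-active) labels.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.metrics import matthews_corrcoef, accuracy_score

def fit_rbf_svm(X_train, y_train):
    # (c, g) pairs are scanned as powers of two, as in the original grid search.
    param_grid = {
        "C":     2.0 ** np.arange(-5, 16, 2),
        "gamma": 2.0 ** np.arange(-15, 4, 2),
    }
    search = GridSearchCV(
        SVC(kernel="rbf"),
        param_grid,
        cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
        scoring="accuracy",
    )
    search.fit(X_train, y_train)
    return search.best_estimator_, search.best_params_

def evaluate(model, X_test, y_test):
    # Accuracy and Matthew's correlation coefficient on the independent test set.
    y_pred = model.predict(X_test)
    return accuracy_score(y_test, y_pred), matthews_corrcoef(y_test, y_pred)
```

In the paper's protocol, fit_rbf_svm would be run on the 75p+75n training set and evaluate on the held-out 26p+26n peptides.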
5-fold cross validation was performed to evaluate the performance of SVM models. In the evaluation process, dataset was partitioned randomly into five equally sized subsets. The training and testing were carried out five times, each time four distinct subsets being used as training sets and the remaining subset as test set. The results were averaged over all five rounds of validation. The following equations were used to evaluate the prediction quality of the SVM models [48,51]: In the above equations, TP is the number of true positives, TN is the number of true negatives, FP is the number of false positives and FN is the number of false negatives. Matthew's correlation coefficient (MCC) reflects the performance of the model. It ranges between -1 to 1 and a larger MCC value indicates a better prediction. Results and Discussion SVM learning algorithm is a powerful machine learning method that has been widely used in pattern recognition and classification. SVM trains a dataset of experimentally validated positive and negative samples and generates a classifier to classify unknown samples into two distinct categories (positive or negative). Collection of dataset We performed an exhaustive literature search on self-derived peptide inhibitors of enveloped proteins and collected experimentally validated peptides derived from the three classes of E proteins. For those peptides with overlapping segments, only one peptide sequence was kept. 202 peptides were found, among them, 101 are active peptides and 101 are non-active peptides ( Table 1). 75 active peptide inhibitors and 75 non-active peptides (75p+75n) of E proteins were used as the training dataset in SVM learning; the remaining 26 active and 26 non-active peptides (26p+26n) were used as the test set. SVM input features. Three SVM models were developed using different features as input descriptors, namely physicochemical properties (denoted as EAPphysico), amino acid composition (EAPcompo) and statistical scoring function amino acid composition (EAPscoring). Knowledge-based statistical functions are rooted in the Bayesian (conditional) probability formalism and derived directly from properties observed in the known folded proteins [52][53][54]. In knowledge-based scoring function, it was presumed that averaging over different atom types in experimental conformations is an adequate representation of the random arrangements of these atom types in any compact conformation [55]. Because the three classes of E proteins have different structural folds, it is difficult to retrieve a structure-based feature that is relevant to their antiviral activities. Generally speaking, any property associated with folded proteins can be converted into an energy function [56]. Since amino acid sequence determines the structural folds and properties of proteins/peptides, we presumed that a sequence-based statistical scoring function averaging over different amino acid sequences exhibiting inhibitive activities is an adequate representation of the random combinations of all twenty amino acid exhibiting any activity. In this approach, a peptide sequence derived from E protein is represented by twenty features each corresponding to the propensity of observing each of the twenty natural amino acids to be either active or non-active. A vector space of twenty sequence-based statistical scores was used as the EAPscoring input entries in the SVM learning. We also built a SVM model using physicochemical properties as input features. 
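Before turning to the physicochemical features, a minimal sketch of how the twenty EAPscoring inputs described above could be assembled from simple frequency counts is given below. The frequency-ratio estimate of the probabilities is an assumption on the author's part; the exact estimation formulas used in the paper are not reproduced in this text.

```python
# Hedged sketch of the 20-dimensional EAPscoring feature vector: for each amino
# acid, its count in the peptide is weighted by the -log-likelihood of observing
# that amino acid in active peptides relative to all peptides. The simple
# frequency-ratio estimator below is an assumption, not the paper's exact form.
import math
from collections import Counter

AA_ORDER = "ACDEFGHIKLMNPQRSTVWY"  # input-feature order used in the paper

def aa_frequencies(peptides):
    counts = Counter("".join(peptides))
    total = sum(counts[a] for a in AA_ORDER)
    return {a: counts[a] / total for a in AA_ORDER}

def scoring_features(peptide, active_peptides, all_peptides, eps=1e-6):
    p_active = aa_frequencies(active_peptides)   # P(q | active)
    p_any = aa_frequencies(all_peptides)         # P(q), active or non-active
    counts = Counter(peptide)
    features = []
    for a in AA_ORDER:
        neg_log_likelihood = -math.log((p_active[a] + eps) / (p_any[a] + eps))
        features.append(counts[a] * neg_log_likelihood)  # N * (-log-likelihood)
    return features
```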
Because of the feature of membrane fusion process, it was suggested that functional regions in glycoproteins need to be solvent accessible, hydrophobic and flexible [57]. Actually the majority of known peptide entry inhibitors share a common physicochemical property of being hydrophobic and amphipathic with a propensity for binding to lipid membranes [58]. Therefore, here the properties of E peptide inhibitors were described by five physicochemical parameters: PI, MW, GRAVY index (positive and negative GRAVY values indicate hydrophobic and hydrophilic peptides, respectively), solvent accessibility (exposed or buried) and secondary structure features (propensity for adopting α-helix, β-sheet or turn structure). These physicochemical features were calculated for each of the peptides and used as the EAPphysico input entries in the SVM learning. A third SVM model EAPcompo was also built where the fractions of amino acids in a peptide were used as input features in the machine learning process. SVM training. The SVM models were trained using the experimentally validated 75p+ 75n data sets. During 5-fold cross validation, the training set was randomly partitioned into four subsets with equal size of (15p+15n) and a remaining subset (15p+15n). Three SVM models were built using sequence-based statistical scores, physicochemical properties and amino acid composition, respectively. The performances of the three models are shown in Table 2. It can be seen that the EAPscoring model performed best among the three models during 5-fold cross validation. A "grid-search" combined with cross-validation was adopted to search for the optimal parameters -c and -g in SVM models [49]. The result of the grid search is shown in the support information (S1 File). It is shown that the performances of three EAP models during 5-fold cross validation have been improved significantly using the optimized parameters ( Table 2). Evaluation of the predictive efficiency of SVM models on independent test set The performance of the SVM models was evaluated using an independent dataset of experimentally validated peptides that were not contained in the learning dataset (Table 1). In the EAPphysico model where physicochemical properties of peptides were used as input features, an accuracy of 65% with a MCC value of 0.31 was observed (Table 3). In the EAPcompo model where amino acid composition features were used, the predictive accuracy and the MCC value are slightly higher. When the sequence-based statistical function scores were used as input in the EAPscoring model, a remarkable accuracy of 92% was achieved with a MCC value of 0.84. Thus the sequence-based statistical scores developed in the present research are predominantly superior to the conventional physicochemical properties or amino acid decomposition features in identifying active peptides derived from enveloped proteins. Comparison of the predictive efficiency of the AVP and EAP Models AVPpred is a web server for prediction of the activities of general antiviral peptides (AVPs) based on a number of experimentally validated positive and negative data sets [43]. The peptide inhibitors employed in AVPpred target a variety of biological targets involved in virus infection. In contrast, the self-derived peptides of enveloped proteins being studied in the present research competitively bind to E proteins so as to mediate the virus fusion process. 
Because the self-derived peptides share similar mechanism of action, it is feasible to retrieve common features from them to build predictive SVM models. In order to evaluate the performance in predicting peptide inhibitors of the enveloped virus, we compared the AVPpred models with our EAPpred models using an independent 26p+26n dataset as test set. The results are shown in Table 3. Four different features were employed in the AVPpred models, namely conserved motif search using MEME/MAST, amino acid composition, sequence alignment using BLAST and physicochemical parameters including secondary structure, charge, size, hydrophobicity and amphiphilic character [43]. When the AVPmotif model was used to predict the activities of the self-derived peptide inhibitors, it performed rather poorly with accuracy of 52% and MCC of 0.14. This is not surprising because AVPmotif was developed based on 20 general antiviral peptide motifs. However, the self-derived peptide inhibitors may not share a conserved motif with the general antiviral peptides since the latter interact with various biological targets with different mechanisms of action. In the AVPalign model, the peptide sequences were classified into active and non-active databases and the query peptide sequences were matched against the active and non-active databases using the BLAST program. Compared with AVPcompo and AVPphysico, AVPalign performed better with a predictive accuracy of 73% and MCC value of 0.52. Fusion mechanism is highly conserved among related viruses and entry of viruses into host cells has been inhibited by peptides derived from various regions of envelope glycoproteins [59]. Self-derived peptides would inhibit interactions of their original domain by mimicking its mode of binding to partner proteins [4]. Because similar sequences are often associated with similar structure and function, the sequence-based property AVPalign would account for the activities of the self-derived peptide inhibitors which regulate the virus fusion by mimicking the binding to E proteins. In the AVPphysico model, 25 best performing physicochemical properties were selected out of the 544 properties to build the SVM model [43]. Antiviral peptide inhibitors are generally amphiphilic [60] and the activities of peptide entry inhibitors are dependent on their interfacial hydrophobicity [58]. Therefore we only employed five physicochemical properties reflecting hydrophobicity, solvent accessibility and secondary structure features as SVM input features. It was demonstrated that the accuracy and MCC of EAPphysico is comparable to that of AVPphysico model, indicating the five properties used in current modeling building are critical for their activities. The MCC value of the AVPcompo models is 0.20, indicating that the antiviral activities of the peptides are related to amino acid composition. When the amino acid composition was used as input, the predictive accuracy of the EAPcompo model was higher than that of the AVPcompo model, indicating the peptide inhibitors of E proteins employed in the training set is sufficient to represent the contribution of amino acid composition to their inhibitive activities. In the EAPcompo model, the preference of the amino acid composition was ranked as: P, R, Q, D, F, W, E, L, T, I, N, H, Y, C, A, S, M, V, K, G (Fig 1). The role of arginine-arginine pairing and its contribution to protein-protein interactions has been investigated by computational approaches [61]. 
The higher abundance of R at protein-protein interfaces compared to K may be attributed to the formation of cation-π-interactions and the greater capacity of the guanidinium group in R to form hydrogen bonds (compared to K) [62][63][64]. Furthermore, it was suggested that the interface regions are enriched in aliphatic (L, V, I, M) and aromatic (H, F, Y, W) residues and depleted in charged residues (D, E, K) with the exception of arginine [62,[65][66][67][68][69]. This is in agreement with our amino acid composition analysis, where higher population of aliphatic Leu residue as well as aromatic residues Trp and Phe was observed, whereas positively charged Lys was hardly observed. The predominant occurrence of proline and glutamine residues is characteristic for the unique protein-protein interactions for E proteins. e.g. a conserved proline-rich motif was suggested to be engaged in monomer-monomer interactions in Dengue E proteins [70]. A conserved glutamine-rich layer is involved in the extensive Hbond network in HIV-1 gp41 E proteins [71]. Thus the preference of the amino acid composition identified from the EAPcompo model is generally in accordance with the predominant residues involved in protein-protein interactions, manifesting the amino acid composition of the self-derived peptide inhibitors are closely related to their potential activities in mediating the protein-protein interactions in the virus fusion process. Because the antiviral activities of peptides are dependent on amino acid composition, we presume amino acid composition discriminated by the propensity of their activities would be an intrinsic feature in the self-derived peptide inhibitors which share a common mechanism of action. When statistical function scores were employed in the SVM model (EAPscoring), a remarkable predictive accuracy of 92% with an ideal MCC value of 0.84 was achieved, significantly better than any AVP models. The logarithm form of the discriminatory function (Eq 1) can be deemed as the pseudo energy of the system. In our previous study, we suggested that the stability of proteins is related to their in situ binding potential to the partner regions [72]. The prominent performance of EAPscoring model indicates the sequence-based stability feature of self-derived peptides may reflect their potential of binding to E proteins so as to regulate the virus entry process. Conclusions We developed three SVM models using physicochemical properties, amino acid composition and statistical discriminative function as input features. The prediction accuracy and the MCC value of the EAPphysico model where five physicochemical properties were employed are comparable with the previous AVPphysico model where 25 physicochemical properties were used. The AVPcompo and EAPcompo models demonstrated that the activities of antiviral peptides are dependent on amino acid composition. A sequence-based scoring function was developed for the self-derived peptide inhibitors of E proteins. The outperformance of the EAPscoring models supports our hypothesis that an intrinsic feature, represented by the propensity of each amino acid for being active in self-derived peptides, is responsible for the activities of the peptides to regulate virus fusion by mimicking the binding to their accessory proteins. The sequence-based statistical scoring function would be useful in development of novel antiviral therapies to target the initial step of viral infection. Supporting Information S1 File. 
Parameter optimization by grid search combined with 5-fold cross-validation. The x-axis is log2(g), the y-axis is log2(c), and the z-axis represents accuracy (%) (Figure A
5,077.8
2015-12-04T00:00:00.000
[ "Computer Science", "Medicine" ]
Oscillations of a statically indeterminate system with a finite number of degrees of freedom (the experience with the application of mathematical packages in the technical university course of mechanics) Symbolic mathematics packages give the opportunity to execute the difficult symbolic transformations with use of computer, abandoning graphic methods. The resilient weightless beam fixed by resilient links and carrying two concentrated masses is considered. Instead of building the bending moment diagram and the later use of Vereshchagin's method for disclosure of static indeterminacy, the equation of distribution of the bending and single moments along the beam length is written, and Mohr's integral is calculated. Introduction Students have to create and perform operations on the inertia and flexibility matrices, when studying oscillations of systems with a finite number of degrees of freedom. If a discrete system is statically indeterminate (such problems can occur when studying structural mechanics), then the solution of the problem becomes cumbersome. In addition to calculating the potential energy, it is necessary to disclose static indeterminacy, to calculate the constraint force, which is traditionally performed using graph-analytical methods. Integration of symbolic mathematics packages into the educational process changes things dramatically. The students get the opportunity to perform difficult symbolic transformations with use of the computer, abandoning the graphical methods. When solving this problem, the convenience of the symbolic mathematics package DERIVE [1] which was one of the five best and widely distributed packages for a long time, is demonstrated. The purpose of this article is thus demonstration of advantages of applying symbolic mathematics packages while performing tasks of the systems oscillation theory with finite number of degrees of freedom. This is referred to the practical experience of such implementation in the technical university course of mechanics. As an example, the resilient weightless beam fixed by resilient links and carrying two concentrated masses is considered. Instead of building the bending moment diagram and the later use of Vereshchagin's method for disclosure of static indeterminacy, the equation of distribution of the bending and single moments along the beam length is written, and Mohr's integral is calculated. Calculation of potential energy and creation of a matrix is a cumbersome and an exigent task in case of the "manual" computation, but it is carried out by means of the mathematical package without effort. We now need to determine the base frequency and a form of oscillations [2] of the resilient mechanical structure represented in Figure 1 under the following mechanical and geometric properties: Let us concentrate the mass of the whole beam in beam middle. The resilient massive beam will turn into a system with two degrees of freedom ( Figure 2). The position of system is characterized by the generalized coordinates vector with components 1 , 2 . As this takes place the mass of points is respectively equal to 1 = , 2 = . Kinetic energy of this system is: Introducing a vector of the generalized velocities, it is possible to write down kinetic energy in a matrix form: where is the inertia matrix M contains nonzero elements only on the main diagonal and has the form: Let us calculate the potential energy in the resilient system, expressing it through the generalized forces of inertia of 1 and 2 . 
For totality of planar system of parallel forces the structure is once statically indeterminate. Let us consider two cases: 1) the generalized force is applied above support B, 2) the generalized force is applied below support B. We will choose the main system, having rejected support B ( Figure 5). where 0 is the bending moment from unit force. In a local system of coordinates XDY, we will find the equation of the bending moment diagram ( ) depending from force arising in section K. For this purpose we will write down expression of the moment from force relating to the point K with coordinates (0,y,0) in an analytical form: for y<l or ( ) = ( − ) , < . The minus sign is taken because ( ) + ( ) = 0, where ( ) is moment from force relating to the point K. Let us similarly find ( 2 ). or ( 2 ) = ( 2 − ) 2 . The bending moment from unit force 0 we will obtain by having replaced force with unit. Thus 0 = − . The calculated dependences of bending moments on coordinates correspond to building the bending moments diagram of compressed fibres. As a result, Mohr's integral for force lying above support B will take form: Here INT is designation of integral, p sub2 -designation of force 2 ; y sub2 -designation of 2 ; x sub -designation of force . Let us calculate potential energy of the resilient beam for the preset values of h and l: = 2.2(5120 2 2 − 1728 1 2 + 351 1 )/( ) Afterwards values of variables h and l are defined, potential energy of resilient beam is written down and calculation of the beam is made. For further calculations by using DERIVE package it is necessary to change from index variables to regular ones. This is due to restricted opportunities of DERIVE package during the work with index variables. Therefore, the designations 2 = , 1 = will be used in files. As a result by using DERIVE package: For comparison we will define proper frequencies for similar statically determinate system, without support B. Reactions of resilient links are equal: Further, we will calculate potential energy of resilient system by the formulas given above. After potential energy calculation and items grouping we will receive: = 2.03 • 10 −6 (2.50 1 2 + 0.44 2 2 + 1.94 1 2 ) (24) Proper frequencies of oscillations in such system, are equal to 3.45 s -1 and 29.17 s -1 , respectively. The least proper frequency has decreased that is explained by lowering the system rigidity. Conclusions In this paper we demonstrated on one typical example our positive experience in implementing symbolic mathematics packages in teaching courses of mechanics and similar at our technical university, which allows us to modernize our teaching methods.
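The same workflow carries over to freely available computer algebra systems. The sketch below is a minimal SymPy illustration, not the original DERIVE worksheets: it shows the two symbolic steps used above, evaluating a Mohr integral and extracting natural frequencies from the characteristic equation det(K − ω²M) = 0. The bending-moment expressions and the stiffness matrix are placeholders rather than the values of the worked example.

```python
# Hedged SymPy sketch of the symbolic workflow described above (DERIVE was used
# in the paper; the matrices and bending-moment expressions here are placeholders).
import sympy as sp

x, l, E, J, P = sp.symbols("x l E J P", positive=True)

# Mohr's integral: integrate the product of the bending moment M(x) due to the
# load and the bending moment Mbar(x) due to a unit force over the beam length.
M_load = P * (l - x)          # placeholder bending-moment distribution
M_unit = -(l - x)             # placeholder unit-force bending moment
delta = sp.integrate(M_load * M_unit / (E * J), (x, 0, l))
print(sp.simplify(delta))     # -> -P*l**3/(3*E*J) for this placeholder case

# Natural frequencies from det(K - omega**2 * M) = 0 for a two-degree-of-freedom
# system; M is diagonal with the concentrated masses, K is a placeholder
# stiffness matrix obtained from the potential-energy expression.
m = sp.symbols("m", positive=True)
lam = sp.symbols("lambda", positive=True)       # lambda = omega**2
M_mat = sp.diag(m, m)
K_mat = sp.Matrix([[5, -2], [-2, 3]])           # placeholder stiffness coefficients
char_poly = (K_mat - lam * M_mat).det()
natural_freqs = [sp.sqrt(s) for s in sp.solve(sp.Eq(char_poly, 0), lam)]
print(natural_freqs)
```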
1,432.2
2018-01-01T00:00:00.000
[ "Engineering" ]
High-Density Spin–Orbit Torque Magnetic Random Access Memory With Voltage-Controlled Magnetic Anisotropy/Spin-Transfer Torque Assist This article explores an area saving scheme for spin–orbit torque (SOT) magnetic random access memory (MRAM) by sharing the SOT channel and write transistor among multiple magnetic tunnel junctions (MTJs). We use two write mechanisms to selectively write the MTJs, i.e., voltage-controlled magnetic anisotropy (VCMA)-assisted write in the presence of an external magnetic field and field-free spin-transfer torque (STT)-assisted write. Using micromagnetic simulations that are augmented by the rare-event enhancement, we study various trade-offs among write current, time, and energy, write error rate (WER), and the number of MTJs on an SOT channel. We quantify the issue of IR drop on the SOT channel as a function of the SOT layer thickness and number of MTJs. Our results show having more than four MTJs on an SOT channel poses major challenges in terms of IR drop and WER. In addition, we evaluate the impact of the proposed scheme on read performance. I. INTRODUCTION S PINTRONIC memories are being actively pursued for various applications, such as last-level cache [1], [2], embedded memory [3], and deep neural networks [4]. Spintransfer torque (STT)-based and spin-orbit torque (SOT)based magnetic random access memories (MRAMs) are two major examples of spintronic memories being explored. The STT-MRAM offers high cell density due to compact cell requiring only one transistor; however, it suffers from issues, such as low read margin, low charge to spin conversion efficiency, and oxide degradation. Moreover, large write current needed for STT-MRAM poses a challenge in terms of scaling. The SOT-MRAM is an emerging alternative for STT-MRAM. The SOT-MRAM has lower write energy while also improving the read operation by decoupling the read and write paths. There have been major advances in large-scale adoption of the SOT-MRAM technology in recent years. For example, wafer-level integration along with sub-nanosecond magnetization switching has been demonstrated [5]. However, one key issue with SOT-MRAM is the large cell area compared with STT-MRAM, as SOT-MRAM requires two separate transistors for read and write operations. There have been works on reducing the cell footprint of SOT-MRAM by sharing the SOT channel among multiple magnetic tunnel junctions (MTJs) with the help of STT [6] or voltage-controlled magnetic anisotropy (VCMA) effect [7], [8]. However, such schemes would require many trade-offs and a detailed evaluations of such schemes that proves low write error rates (WERs) and adequate selectivity accounting for thermal noise, variability, and the IR drop on the SOT layer are missing. Likewise, the potential impact of such schemes on write/read energy and latency as a function of cell density is also lacking. In this article, we discuss transistor sharing schemes for SOT-MRAM with the help of VCMA effect and STT while considering the limitations in terms of WER and IR drop in the SOT channel. We provide detailed thickness optimization of the SOT layer while considering the effect of IR drop, write energy, and MTJ selectivity. For both the VCMA and STT-assisted write operations, we evaluate the impact of increasing the number of MTJs on an SOT channel in terms of WER and write energy. Moreover, in the case of SOT + STT scheme, we study the impact of pulse timings of the SOT and STT write currents. 
In addition, we evaluate the read performance of the cell as a function of oxide thickness and present the associated trade-offs in terms of read and write operations. The rest of this article is organized as follows. After this introduction, Sections II and III describe the SOT + VCMA and SOT + STT schemes, respectively. In Section IV, we evaluate the read performance. Section V presents the optimization and benchmarking results for cell area and write performance, and the key findings of this article are summarized in Section VI. II. SOT + VCMA The first write mechanism we discuss is to use VCMA effect to selectively write into MTJs on a shared SOT channel. A. CELL DESIGN AND WRITE OPERATION The 3-D layout and schematic of the cell are shown in Fig. 1. The SOT channel is shared among multiple MTJs while having a single SOT write transistor. Selecting a specific MTJ for writing data is achieved by applying a voltage on the desired MTJ through the corresponding read/write select transistor. The write operation is based on utilizing the VCMA effect [9] to lower the thermal stability ( ), thereby lowering the switching current by applying a voltage across an MTJ. The applied spin current is then selected such that it is large enough to switch MTJs with reduced thermal stability and small enough to avoid flipping nonselected MTJs. Writing to all MTJs on an SOT channel can be accomplished in two cycles, as shown in Fig. 1(b). In Cycle 1, all the 1's can be written, while all the 0's can be written in the next cycle by reversing the direction of the SOT current. The write operation requires the presence of an external magnetic field, which can be generated on-chip by using a cobalt magnetic hard mask [5]. For driving the SOT channel, the driver design described by earlier work [10] can be used. Fig. 2 shows the schematic of the write driver, which uses eight fin transistors to provide sufficient SOT current. The pitch and height of the write driver are 8F and 28F [11], respectively, with half metal pitch (F) being 32 nm. The write drivers may occupy ≈7% of the total area for an array size of 256 × 128. Fig. 3 shows the memory array based on shared SOT channel. B. IR DROP IN THE SOT LAYER The length of the SOT layer depends on the number of MTJs (N MTJ ) integrated on it. A longer SOT channel results in a higher resistance (R SOT ); hence, a larger voltage drop V SOT across it. A large IR drop across the SOT channel can result in larger write voltages that can pose several challenges, such as large variation in the effective VCMA voltages and the requirement for high-voltage transistors. To lower the resistance, the thickness (t SOT ) of the SOT channel can be increased. However, a larger t SOT may require a larger write current (I w ) to maintain a sufficient current density (J SOT ) in the SOT channel. In addition, damping-like spin-torque efficiency (ξ DL ) may also change with t SOT , according to the drift-diffusion model of spin generation and transport [12] where θ SH is the spin Hall angle, λ sd is the spin diffusion length in the SOT material, G r is the real part of the spin-mixing conductance (G ↑↓ ), and σ SOT is the conductivity of the SOT material. For the SOT channel, we use AuPt [13], which is a well studied SOT material with low resistivity (83 µ cm) and large ξ DL . Fig. 4(a) shows the required I w as well as R SOT versus t SOT . The inset plot in Fig. 4(a) shows the variation of ξ DL with t SOT . 
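The resistance/current trade-off shown in Fig. 4 can be illustrated with a very simple resistive model. The Python sketch below is a rough illustration only: the channel geometry, spin Hall angle, spin diffusion length, and required spin-current density are placeholder assumptions (not values from this article), and the damping-like efficiency uses the simplest sech-type limit of the drift-diffusion picture rather than the full expression with the spin-mixing conductance referenced in the text.

```python
# Hedged sketch of the SOT-channel thickness trade-off. The simplified
# efficiency model and all geometry/material numbers marked as placeholders
# are illustrative assumptions, not values fitted to this article.
import numpy as np

RHO = 83e-8            # AuPt resistivity, ohm*m (83 uOhm*cm, from the text)
THETA_SH = 0.3         # assumed spin Hall angle (placeholder)
LAMBDA_SD = 1.5e-9     # assumed spin diffusion length, m (placeholder)
WIDTH = 60e-9          # assumed channel width, m (placeholder)
LEN_PER_MTJ = 120e-9   # assumed channel length per MTJ, m (placeholder)
JS_REQ = 4e11          # assumed required spin-current density, A/m^2 (placeholder)
T_PULSE = 1e-9         # 1 ns write pulse, as in the text

def xi_dl(t_sot):
    # Simplest limit of the drift-diffusion result; the full expression with the
    # spin-mixing conductance correction is not reproduced here.
    return THETA_SH * (1.0 - 1.0 / np.cosh(t_sot / LAMBDA_SD))

def channel_tradeoff(t_sot, n_mtj=4):
    """Write current, channel resistance, IR drop and pulse energy vs. t_SOT."""
    j_c = JS_REQ / xi_dl(t_sot)                      # charge current density needed
    i_w = j_c * WIDTH * t_sot                        # write current
    r_sot = RHO * n_mtj * LEN_PER_MTJ / (WIDTH * t_sot)
    v_sot = i_w * r_sot                              # IR drop along the channel
    e_w = v_sot * i_w * T_PULSE                      # energy of one write pulse
    return i_w, r_sot, v_sot, e_w

for t in np.array([2.0, 3.5, 5.0, 6.5, 8.0]) * 1e-9:
    i_w, r, v, e = channel_tradeoff(t)
    print(f"t_SOT={t*1e9:.1f} nm: I_w={i_w*1e6:.0f} uA, R_SOT={r:.0f} ohm, "
          f"V_SOT={v*1e3:.0f} mV, E_w={e*1e15:.1f} fJ")
```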
Increasing t SOT results in an increased I w despite the increase in ξ DL as J SOT decreases. The resistance; however, decreases with increasing t SOT , resulting in an overall reduction in V SOT , as seen in Fig. 4(b). Write energy (E w ), on the other hand, is nonmonotonous, and the lowest E w is obtained when t SOT is 3.5-4 nm. C. DEVICE SIMULATIONS To obtain various trade-offs among write current, write time, and WER and to evaluate the VCMA selectivity of the MTJs, we use object oriented micromagnetic framework (OOMMF [14]) simulations augmented with the rare-event enhancement [15] method. The simulation framework has already been validated with experiments [16]. We use perpendicular MTJ with a diameter of 51 nm and a free-layer thickness of 1.2 nm. The room temperature saturation magnetization (M s ) and interface anisotropy (K i ) are 1.257 MA/m and 1.3 mJm −2 , respectively [17], which provides a room temperature of ≈90. Required symmetry breaking for SOT switching can be achieved by applying a magnetic field of 32 mT [5]. In addition, we assume a field-like to dampinglike torque ratio of 0.18 [18]. Fig. 5(a) shows the obtained WER versus applied spin current for various values of voltage applied across the MTJ (V MTJ ). The duration of the write current is 1 ns. We use a VCMA coefficient of 100 fJ/Vm [19]. To quantify the VCMA selectivity, we also calculate the accidental write rate for the nonselected MTJ, as shown in Fig. 5(b). Here, accidental write rate refers to the probability of a nonselected MTJ (V MTJ = 0) getting switched. Here, we have ignored the effect of any STT current due to V MTJ , as the design requires minimization of STT current as discussed next. In addition, field-assisted switching of perpendicular magnets usually requires larger damping coefficient [20] (≈0.1), which effectively suppresses the effects of the STT current. One key challenge with regards to the VCMA selectivity of MTJs is the current injected in the SOT channel due to the applied V MTJ . Application of V MTJ results in a finite amount of current being added into the SOT channel. This increases the overall SOT current in the channel. This extra current ( I ) can help reduce the WER for the selected MTJs; however, it will also have the unintended effect of accidentally switching the nonselected MTJs. This extra current can be quantified as a function of N MTJ and oxide thickness (t ox ). The worst case, corresponding to maximum I , occurs when (N MTJ −1) consecutive MTJs are written parallel (P) to antiparallel (AP) in one cycle, while the remaining MTJ is written AP-P in another cycle, as shown in Fig. 6. In this case, we can define, where R P is the resistance of the MTJs in P state. The maximum allowable value of I is determined by the available switching margin (I margin ), which is defined as the difference in the write currents for the selected and nonselected MTJs corresponding to a target error rate, as shown in Fig. 5(b). It is also important to note that during the write operation, nonselected MTJs will experience negative voltage (V MTJ < 0) due to the finite potential of the SOT channel. Thus, the available I margin will be higher than the depicted value in Fig. 5(b). For reliable operation, I < I margin is required. The obtained values of I margin for a WER of 10 −6 are ≈88 and ≈127 µA, when the voltage across the nonselected MTJ is 0 and −0.42 V, respectively. In comparison, the corresponding I margin values for a WER of 10 −4 are 97 and 137 µA, respectively. 
Here, for the nonselected MTJ, it is not possible for us to calculate accidental write rates below 10 −4 due to the limitations imposed by the computation time. Improving the VCMA coefficient and lowering the charge to spin conversion efficiency can increase I margin . In addition, I margin depends on , as shown in Fig. 7. Further optimization of the magnetic parameters is required to improve I margin . Reducing V MTJ can lower I ; however, that would also reduce I margin . The best trade-off among I w , V SOT , and I margin can be achieved by selecting V MTJ = 1.5 V and t SOT = 6 nm. Another way to suppress I is to increase t ox , which increases the MTJ resistance and lowers the current passing through it. It is shown in Fig. 8(a) where the values of I corresponding to different t ox values are plotted against N MTJ . Here, the MTJ resistance values are obtained from experiment [21]. Fig. 8(b) shows I versus N MTJ for various values of V MTJ at t ox = 1.7 nm. Increasing t ox beyond 1.6 nm can significantly suppress I , allowing the integration of a larger number of MTJs on a single SOT channel. However, a large t ox comes with a read performance penalty as discussed in Section IV. III. SOT + STT Another way of sharing the SOT channel among MTJs is to use a small STT instead of the VCMA effect. A. CELL DESIGN AND WRITE OPERATION The cell design is the same as the SOT + VCMA scheme. In this case, the deterministic magnetization switching is achieved by applying a small STT current. First, an SOT current is applied to move the magnetizations of all the MTJs toward the in-plane meta-stable direction. After that, the SOT current is stopped, and a small STT current is applied through each MTJ. The direction of the STT current determines the final MTJ state. All the MTJs are written at once by applying appropriate polarities of STT currents. The write scheme is demonstrated in Fig. 9. Also, as the direction of the SOT write current remains the same, a separate driver for SL is not required. B. DEVICE SIMULATIONS The diameter and thickness of the free-layer ferromagnet used here are 42 and 1.3 nm, respectively, giving a room temperature of ≈60. Contrary to the SOT + VCMA case ( ≈ 90), used here is lower, as the SOT + VCMA scheme requires a large to effectively suppress the accidental write rate for nonselected MTJs. The SOT + STT scheme has no such restriction, and the value of can be chosen based on the retention time requirement. Fig. 10(a) shows the magnetization switching for a single MTJ, illustrating the write scheme used. The spin current generated by the SOT is fixed at 600 µA, which is applied for 1 ns. The magnitude and direction of the STT current are varied to obtain various WERs, as seen in Fig. 10(b). Here, we assume the STT efficiencies of 0.6 for AP-P and 0.3 for P-AP switching [22]. Similar to the VCMA-assisted write, the number of MTJs on a single SOT channel is limited by the SOT current in the worst case scenario for the write operation. During the STT switching phase, there will be a finite amount of current injected into the SOT channel due to STT current. The current flowing in the SOT channel will apply an in-plane torque on the magnetization of the free layer. If this current becomes too large, it will result in the magnetization being stuck in-plane, suppressing the effect of STT. This will cause switching errors and increased WER. The worst case scenario is when all the MTJs are being switched from P to AP state, as shown in Fig. 11. 
To reduce the SOT current seen by MTJs, we ground both write bitline (WBL) and SL during the STT phase. This allows the current to flow in both directions within the SOT channel and lowers the voltage drop. To calculate the resulting current density in the SOT channel below each MTJ, we use COMSOL-based finiteelement simulations. Fig. 12 shows the obtained SOT current densities below each MTJ for N MTJ = 4 and N MTJ = 6 cases due to the applied STT current of 16.7 µA. The resulting current density data are used in micromagnetic simulations to calculate WER and find the limit on the number of MTJs. Fig. 13 shows the magnetization dynamics corresponding to the worst case write operation for the MTJ seeing the most SOT current when N MTJ is 4, 6, and 8. Large amount of current flowing in the SOT channel results in increased switching failures, as seen in Fig. 13(c). WER in the worst case for different MTJs on an SOT channel for N MTJ = 4 is shown in Fig. 14(a). Fig. 14(b) depicts WER for the MTJs experiencing the largest SOT current in the worst case write operation for N MTJ = 4, 6, and 8. The results show that increasing the number of MTJs leads to higher WER. C. ROBUSTNESS TO WRITE PULSE TIMING Another key metric for the circuit is its sensitivity to the timing of SOT and STT pulses. Based on SPICE simulation results, we show that the circuit is robust with regards to any variation in the relative timings of SOT and STT pulses, as shown in Fig. 15. We apply the STT pulse 100 ps before the SOT pulse ends; assuming the uncertainty due to jitter and skew does not exceed 100 ps. This ensures that as soon as SOT ends, STT will begin to switch the magnetization in the desired direction. A delay between SOT and STT may lead to switching errors, as the magnetization remains in the meta-stable state, and thermal noise may move it in the unwanted direction. During the SOT phase, SOT channel remains at finite potential, while read bitlines (RBLs) are grounded. If read wordline (RWL) is enabled before RBLs are charged [solid lines in Fig. 15(b) toward the fixed layer for a small amount of time. However, this will not be an issue, as this unintended STT current is much smaller (<10%) in magnitude than the SOT current applied on the MTJs and will not affect the magnetization dynamics, as shown in Fig. 15(e). IV. READ OPERATION The read performance is evaluated based on SPICE simulations. We use a differential sensing scheme [23], [24] for the read operation. Only one MTJ on a single SOT channel can be read at a time, as the read current path is shared among them. This is not an issue, as the number of MTJs that can be read at once is limited by the number of sense amplifiers (SAs). We assume one SA for every 64 bitlines as commonly done in STT-MRAM arrays. These 64 bitlines are multiplexed together and then compared with the reference MTJ. The bitline voltages corresponding to the MTJ being read and the reference MTJ are compared using a double-tail latch-type voltage SA [25]. The read performance strongly depends on t ox and tunnel magnetoresistance (TMR) ratio. To evaluate the read performance, we consider t ox from 1.2 to 1.9 nm. The resistance area (RA) product values are obtained from experimental data [21]. We assume a constant TMR ratio of 120% [17]. Fig. 16(a) shows the resistance of the MTJ with a diameter of 51 nm in P and AP states. Read performance is evaluated using SPICE simulations for a 256 × 128 array with four MTJs on each SOT channel. 
We use 14-nm FinFET models from the Predictive Technology Model (PTM) by Arizona State University (ASU) [26] with a half metal pitch of 32 nm. Table 1 lists the parasitic resistance and capacitance values used in the simulations. The capacitance values are obtained from prior benchmarking work [24], and the resistance values of wires are calculated based on Cu resistivity values reported in [27]. Fig. 16(b) shows the obtained read margins in P and AP states for the nominal case where the read margin is defined as the voltage difference seen at the input of the SA. The read margin reduces drastically for t ox < 1.4 nm and t ox > 1.7 nm, especially for the AP state. To account for variation, we use 3σ variation of 10% in MTJ area and 10% uniform variation in the supply voltage while also accounting for thermal noise. We use a read time of 5σ higher than the mean value to obtain read error rate below 10 −6 . The total read delay can be written as follows [24]: where R drive (=5 k ) and R RWL are the resistances of the drive transistor and RWL, respectively, C RWL is the capacitance of RWL, and t sense accounts for the delay to reach the required voltage margin. The effective read delay and energy, including the effects of variation, are shown in Fig. 17 for a read margin of 70 mV. For t ox = 1.3 nm and t ox = 1.9 nm, the available read margin is <60 mV. Optimal read performance is observed for t ox within the range of 1.4-1.6 nm. Increasing t ox initially results in lower read energies because of smaller read currents; however, beyond 1.7 nm, the read energy starts to increase, as the delay goes up rapidly due to read current being too small. The choice of t ox based on the reliability of write operation is different from the read performance optimization. The SOT + VCMA scheme requires a larger t ox to suppress any extra current due to V MTJ , while the SOT + STT scheme requires lower oxide thickness to reduce the write energy. V. BENCHMARKING We benchmark this SOT + VCMA/STT scheme against other competing memories, such as SRAM, STT-MRAM, and in-plane magnetic anisotropy (IMA) and perpendicular magnetic anisotropy (PMA)-based conventional two transistor SOT-MRAM. Fig. 18(a) shows the cell area per bit versus the number of MTJs for the SOT + VCMA/STT scheme. In Fig. 18(b), the cell area per bit of the SOT + VCMA/STT scheme with four MTJs on a shared SOT channel is compared against those of other magnetic memory options. In both plots, the 14-nm technology node (F = 32 nm) and the layout rules described in prior benchmarking work [11], [28] are used to calculate the cell areas. Compared with the conventional 2T SOT-MRAM, ≈2× bit density can be achieved. The write energies for the SOT + VCMA and SOT + STT schemes, calculated using SPICE simulations, are shown in Fig. 19. The write voltages and current for the SOT + VCMA scheme are listed in Table 2, and the same for the SOT + STT are listed in Table 3. The write energy values are benchmarked against other memory options [16], as shown in Fig. 20. For the conventional 2T SOT-MRAM cell, the write energy results for various SOT materials are included, such as PtCu [29], AuPt [13], BiSe [30], β-W [31], and BiSb [32]. The SOT + VCMA scheme has a higher write energy but much lower write delay compared with the SOT + STT scheme. The higher write energy can be attributed to the large (≈90) requirement as discussed in Section III-B and the large energy associated with charging RBL capacitance due to the application of V MTJ . 
The higher thermal stability can be useful, as it will increase the data retention time. The higher write delay observed in the SOT + STT scheme with SOT channel sharing compared with the conventional 2T SOT + STT MRAM cell is due to lower STT current requirement to suppress WER in the worst case write as discussed previously. Overall, both the SOT + VCMA and SOT + STT schemes discussed here provide major density advantage over the conventional SOT-MRAM while sacrificing a bit in the write performance. One important question here is that improving which material properties would more significantly improve the array-level performance of the proposed schemes. Some key material properties, which are considered here for benchmarking, are STT efficiency, SOT efficiency, and VCMA coefficient. There are not any known approaches to improve the STT efficiency, and the current values that are commonly used (60%) are not too far from the ideal value, which is 100%. On the other hand, improving the SOT efficiency is an active area of research with many promising materials being explored. For the SOT + VCMA scheme, increasing the SOT efficiency while keeping the SOT layer thickness will reduce the available I margin , resulting in higher error rates. Similarly, for the SOT + STT scheme, a higher SOT efficiency may result in an increased SOT during the STT phase and increased error rate. However, a higher available SOT efficiency may allow increasing the SOT layer thickness, which can help the IR drop issue, improving the device performance and reliability. Also, for the SOT + VCMA scheme, improving the VCMA coefficient will have the most impact, as it will lower the write energy and increase I margin , thereby lowering the WER. VI. CONCLUSION This article presents a comprehensive modeling, optimization, and benchmarking of transistor sharing schemes for SOT-MRAM devices using VCMA and STT effects. Using experimentally validated micromagnetic simulations augmented with rare-event enhancement along with SPICE simulations, we demonstrate that the number of MTJs that can be put on a single SOT channel is limited by the write error induced due to the injection of current in the SOT channel through the MTJs and voltage drop on the SOT channel. For the SOT + VCMA scheme, we quantify the WER, unintentional write rate, and the current injection through MTJs as a function of the MTJ oxide thickness. For the SOT + STT scheme, finite-element simulations are used to calculate the SOT current density in the SOT channel underneath each MTJ and the resulting WER. In addition, we quantify the IR drop along the SOT layer in terms of the number of MTJs and provide a way to optimize the SOT layer thickness while considering the write energy, current, SOT channel resistance, and the voltage drop along the SOT layer. Our results indicate that having four to six MTJs on a single SOT channel provides the best trade-off among the write energy, bit density, WER, and IR drop. The SOT + VCMA/STT schemes show a ≈2× bit density improvement over the conventional two transistor SOT-MRAM and a ≈6× bit density improvement over SRAM. While the energies are slightly higher than the conventional 2T SOT-MRAM, the SOT + VCMA/STT schemes are still more energy efficient than STT-MRAM. We also quantify the read performance in terms of oxide thickness and show the read penalty associated with sharing SOT channel among MTJs. Our read simulation results show read times <4 ns for both the schemes. 
Moreover, since the current through the select transistors is significantly smaller than that of STT-MRAM, this approach may enable adopting SOT-MRAM to more advanced technology nodes. While both the SOT + VCMA and SOT + STT schemes look promising, there are certain challenges that must be addressed. For the VCMA + SOT scheme, a relatively large VCMA coefficient (>100 fJ/V-m) is needed to keep the required VCMA voltage below 1.5 V. A tighter control over variation in magnetic properties is also required to ensure sufficient I margin . For the SOT + STT scheme, there is an additional cost associated with the peripheral circuits that can supply both the positive and negative voltages for the write operation.
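As a back-of-the-envelope view of where the quoted ≈2× density gain comes from, the sketch below amortizes a single SOT write transistor over N MTJs that each keep their own select transistor. The write-driver footprint follows the 8F × 28F dimensions quoted in Section II-A, while the per-MTJ select-transistor area is a placeholder, not a layout-rule value from Fig. 18.

```python
# Hedged sketch: amortizing the shared SOT write transistor over N MTJs.
# Areas are in units of F^2 (half metal pitch F = 32 nm in the article).
WRITE_XTOR_AREA = 8 * 28   # write-driver footprint, from the stated 8F x 28F size
SELECT_XTOR_AREA = 100     # per-MTJ read/write select transistor (placeholder)

def area_per_bit(n_mtj):
    """Cell area per bit (in F^2) when one write transistor serves n_mtj MTJs."""
    return (WRITE_XTOR_AREA + n_mtj * SELECT_XTOR_AREA) / n_mtj

baseline_2t = WRITE_XTOR_AREA + SELECT_XTOR_AREA   # conventional 2T SOT-MRAM cell
for n in (1, 2, 4, 6, 8):
    a = area_per_bit(n)
    print(f"N_MTJ={n}: {a:.0f} F^2/bit  ({baseline_2t / a:.1f}x vs 2T cell)")
```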
6,171.2
2022-12-01T00:00:00.000
[ "Engineering", "Physics" ]
planetMagFields: A Python package for analyzing and plotting planetary magnetic field data Long term observations and space missions have generated a wealth of data on the magnetic fields of the Earth and other solar system planets. planetMagfields is a Python package designed to have all the planetary magnetic field data currently available in one place and to provide an easy interface to access the data. planetMagfields focuses on planetary bodies that generate their own magnetic field, namely Mercury, Earth, Jupiter, Saturn, Uranus, Neptune and Ganymede. planetMagfields provides functions to compute as well as plot the magnetic field on the planetary surface or at a distance above or under the surface. It also provides functions to filter the field to large or small scales as well as to produce .vts files to visualize the field in 3D using Paraview, VisIt or similar rendering software. Lastly, the planetMagfields repository also provides a Jupyter notebook for easy interactive visualizations. Statement of need Planetary scientists studying the magnetic field of planets need to constantly access, visualize, analyze and extrapolate magnetic field data. In addition, with technological advancements in space exploration and planetary missions, we are constantly getting new data for planetary magnetic fields and hence better field models. Though reviews of these field models are often written (Schubert & Soderlund, 2011; Stanley, 2014), there is very little software available that provides easy access to these models with a high-level language and a way to easily visualize and analyze them. To the knowledge of the authors, there are a few publicly available repositories capable of providing access to planetary magnetic field data and tools to analyze them, such as JupiterMag (James et al., 2024; Wilson et al., 2023), KMAG (Khurana, 2020), ChaosMagPy (Kloss, 2024), SHTools (Wieczorek & Meschede, 2018), PlanetMag (Styczinski & Cochrane, 2024) and libinternalfield (https://github.com/mattkjames7/libinternalfield). Out of these, only libinternalfield provides data and software to access and analyze the magnetic fields of all planets. However, it is a C++ library which needs to be interfaced with something at a higher level to enable fast analyses and visualization. Thus, a software package that gathers the different magnetic field models for all the planets of the solar system in one place and provides a high-level API to access, analyze and visualize them is not available. planetMagfields is intended not only to fill this gap, but also to provide a central repository, to be constantly updated as more magnetic field models become available. In addition to the research aspect of our software, the interactive Jupyter notebook serves as a valuable educational resource, fostering a deeper appreciation for the complexities of planetary magnetic environments.
Mathematics

Magnetic fields in planets are generated by electric currents in a fluid region inside them through a process called dynamo action (Jones, 2011; Schubert & Soderlund, 2011; Stanley, 2014). Outside this region, in the absence of current sources, the magnetic field $\vec{B}$ can be written as the gradient of a scalar potential, $\vec{B} = -\nabla V$. The potential $V$ is usually written as an expansion in orthogonal functions in spherical coordinates $(r, \theta, \phi)$,

$$V = R_p \sum_{l=1}^{l_{\max}} \sum_{m=0}^{l} \left(\frac{R_p}{r}\right)^{l+1} \left[ g_l^m \cos(m\phi) + h_l^m \sin(m\phi) \right] P_l^m(\cos\theta), \qquad (1)$$

where $g_l^m$ and $h_l^m$ are called the Gauss coefficients, $R_p$ represents the radius of the planet and $P_l^m$ are associated Legendre functions of degree $l$ and order $m$, with $l$ and $m$ integers. The above equation can be recast in terms of spherical harmonics, which is what the code uses.

The raw data obtained from satellites or space missions are usually inverted to obtain these Gauss coefficients, which are the key to describing the surface magnetic field of a planet as well as how that field looks at a certain altitude from the surface. The magnetic energy content on the surface at a certain degree $l$ is given by the Lowes spectrum,

$$R_l = (l+1) \sum_{m=0}^{l} \left[ (g_l^m)^2 + (h_l^m)^2 \right],$$

where $l$ plays the role of a wavenumber. Low degrees represent large spatial features in the field while high degrees represent small-scale features. The maximum available degree $l_{\max}$ of data for a particular planet depends on the quality of observations.

Benchmarking

We benchmarked our software against two publicly available repositories: JupiterMag (James et al., 2024; Wilson et al., 2023) for Jupiter and CHAOS-7 (Finlay et al., 2020; Kloss, 2024) for Earth. For Jupiter, we compare the field at a depth of 85% of the planetary radius, thus testing our extrapolation capability, while for Earth, we compare the field on the surface in 2016, testing our implementation of taking into account changes in the Earth's field in a linear fashion (as is done for the IGRF model; Alken et al., 2021). The comparison for Jupiter is shown in Figure 1. We also use these cases in our unit testing.

Description of the software

The software package planetMagFields has data files containing Gauss coefficients from various inversion studies of planetary magnetic models for different planets. These coefficients are then used to obtain the magnetic field on a grid of latitude and longitude using equation (1). The main way of accessing the data is through the Planet class. An example using IPython (Pérez & Granger, 2007) is provided below; the last plot statement produces Figure 2, which shows the radial magnetic field at 85% of the planetary radius. This can be compared against Figure 1h of Moore et al. (2018). planetMagFields primarily uses NumPy (Harris et al., 2020), Matplotlib (Hunter, 2007) and SciPy (Virtanen et al., 2020) for most of its analyses. Further support for various map projections is added through Cartopy (Met Office, 2010-2015). planetMagFields also provides functions to extrapolate and obtain all components of the magnetic field at a certain depth or height through spherical harmonic transforms using the SHTns library (Schaeffer, 2013). Finally, this extrapolation also allows one to visualize the field in 3D. To enable that, planetMagFields uses the PyEVTK library (https://github.com/paulo-herrera/PyEVTK) to write .vts files which can be visualized using software like Paraview or VisIt. An example for Jupiter is provided below in Figure 3. A full list of available features is provided in the documentation.

Figure 1: Benchmarking the code against publicly available repositories.
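Since the IPython example referred to above did not survive extraction, the following is a minimal sketch of the kind of session the "Description of the software" section describes. The Planet class is named in the paper, but the module name and the keyword arguments and method names used here (name, year, r, plot) are assumptions that should be checked against the package documentation; the Lowes-spectrum helper simply implements the surface spectrum defined above.

```python
# Sketch of a planetMagFields session plus a Lowes-spectrum helper.
# The lowercase import name and the argument/method names are assumptions,
# not verified signatures from the package.
import numpy as np
from planetmagfields import Planet

p = Planet(name="jupiter")
p.plot(r=0.85)            # radial field at 85% of the planetary radius (cf. Figure 2)

earth = Planet(name="earth", year=2016)
earth.plot(r=1.0)         # surface field for the 2016 epoch (cf. the CHAOS-7 benchmark)

def lowes_spectrum(glm: np.ndarray, hlm: np.ndarray) -> np.ndarray:
    """Surface Lowes spectrum R_l = (l+1) * sum_m (g_lm^2 + h_lm^2).

    glm and hlm are (lmax+1, lmax+1) arrays indexed as [l, m], with unused
    entries (m > l) set to zero; this layout is a generic choice made here,
    not necessarily the one used internally by planetMagFields.
    """
    lmax = glm.shape[0] - 1
    l = np.arange(1, lmax + 1)
    power = (glm[1:, :] ** 2 + hlm[1:, :] ** 2).sum(axis=1)
    return (l + 1) * power
```

The spectrum helper works directly on Gauss coefficient arrays, so it can be used to reproduce the large-scale/small-scale distinction discussed in the Mathematics section without relying on any particular internal data layout.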
Figure 2: Plotting example of Jupiter's radial magnetic field at a depth of 85% of the planetary radius.
Figure 3: 3D rendering of Jupiter's magnetic field using Paraview, from a .vts file produced by planetMagFields.
GENDER AND EXPLETIVES AS DISCOURSE MARKERS: SOME USES OF JODER IN YOUNG WOMEN'S INTERACTIONS IN SPANISH AND GALICIAN

Virginia Acuña Ferreira, Universidad de Zaragoza (Zaragoza, Spain)

This paper approaches young women's speaking style by analysing the ways in which the interjection joder is employed in interactions in Spanish and Galician among young females. The analysis identifies several uses of this form at the interactional and discursive level: reinforcement of speech acts, marker of disagreement, marker of complaints, expression of minimal emotional assessments, correcting and stalling. It is concluded that joder has developed multiple functions in interaction as a discursive marker, in contrast to arguments against the inclusion of interjections in this pragmatic category. The findings also suggest that this expletive fulfils a sociolinguistic function as a marker of 'young femininities', since it demonstrates how it has been integrated into young women's speaking style, in contrast to traditional gender rules and broader descriptions of 'women's talk' in Language and Gender studies.

INTRODUCTION

This paper approaches young women's speaking style by analysing how the Spanish word joder is used in interactions among female speakers in their early 20s. These interactions are produced in Spanish and Galician, as the participants are mostly from Galicia (Spain) 1 . Specifically, the study focuses on the discursive and interactional functions of joder in its grammatically invariable form as an expletive or 'vulgar interjection' (RAE & ASALE, 2014).
In Spanish grammars, interjections of this type are only attributed an 'expressive' function: the speaker's communication of feelings (RAE & ASALE, 2010, pp. 630-632). However, some authors have argued that interjections can fulfil multiple functions in interaction, acting as 'discourse markers' (Blas Arroyo, 1995) or 'discourse particles' (Drescher, 1997). According to Blas Arroyo (1995, p. 86), discourse markers are «piezas importantes en los procesos de construcción conjunta de la interacción […] y también contribuirían al añadido de matices diversos de significación emotiva e interpersonal» 2 . While he argues for the inclusion of at least some interjections in this category, other researchers generally exclude them, arguing that they only serve the discursive function that corresponds to their grammatical class, such as the expression of feelings (Borreguero Zuloaga, 2015). This assumption could explain why corpus-based research on Spanish and English colloquial conversation has not paid much attention to expletives as discourse markers (Briz, 1998; Briz et al., 2008; Stenström, 2014) or why approaches to them are taken from a predominantly quantitative perspective (Murphy, 2009; Stenström, 2006, 2014), which results in an oversimplified picture of their functional properties. In this way, one of the aims of this paper is to provide a more in-depth analysis of how joder is employed in colloquial conversation to demonstrate its multifunctionality as a discourse marker. A second aim of this paper is to apply a gender perspective to the analysis of this form in young women's friendly interactions, taking as a premise that expletives are stereotypically linked to men and masculinity, as sociocultural restrictions on their use have traditionally been much more severe for women in different languages and cultures (Coates, 2004; Lakoff, 1975/2004; Lozano Domingo, 1995). Because of these gender-based restrictions and their association with 'verbal aggressiveness', expletives can be considered as part of the vernacular culture that entails 'covert prestige' (Trudgill, 1983) for men, as argued by Lozano Domingo (1995, p. 125). Such social constructions, however, do not exclude the possibility that women also employ these words because of the influence of other social factors, such as age or class, the communicative situation and/or a desire to challenge them. Taking a social-constructionist conceptualisation of gender (Acuña, 2009, 2011a; Holmes & King, 2017; Mills, 2003; Pichler, 2015), this paper will focus on these latter possibilities. More specifically, I intend to call attention to the multiple functions that joder can acquire in talk among young women from middle-class backgrounds, contributing to displaying a speaking style that notably contrasts with general characterisations of 'women's talk' in previous Language and Gender research. The following section provides an overview of this research, in which discourse markers and expletives have been studied separately, in line with the general trend that I have already pointed out from the beginning.

2. Blas Arroyo (1995, p. 86, my translation): «important pieces in the processes of joint construction of the interaction […] and would also contribute to the addition of various nuances of emotional and interpersonal significance».
GENDER, DISCOURSE MARKERS AND EXPLETIVES

An early reference to women's avoidance of swearing was made by Jespersen (1922, p. 246): «Among the things women object to in language must be specially mentioned anything that smacks of swearing». From a feminist perspective, this matter was originally addressed by Robin Lakoff (1975/2004) in Language and woman's place. Lakoff critically argued that women are educated to be gentle and polite so that they learn to employ a 'powerless' speaking style, by which they are later perceived as insecure and unable to express themselves forcefully. This 'women's language' is based, for example, on the use of 'weak expletives' like goodness or oh fudge and the employment of tag questions to avoid strong statements, as in The way prices are rising is horrendous, isn't it? (Lakoff, 1975/2004). Such arguments have given rise to many studies on women's and men's use of tag questions and other English discursive particles like I think, sort of, kind of or probably (see Aries, 1996, for a discussion). Drawing on the analysis of spoken discourse corpora and the politeness theory proposed by Brown & Levinson (1987), some of these studies concluded, from a more positive viewpoint, that such forms are multifunctional and that women tend to employ them as both positive and negative politeness strategies (Cameron et al., 1988; Coates, 1996; Holmes, 1995) to foster interaction, to avoid strong statements so as not to impose one's own opinions and to keep contact with the interlocutor 3 . Recent research based on spoken discourse corpora in Spanish (Albelda et al., 2020) provides similar results in relation to geographic areas like Madrid, where women showed a greater tendency than men to use linguistic and paralinguistic devices to avoid imposing on others by softening assertions. However, the spoken discourse data obtained in Valencia, Las Palmas de Gran Canaria and Santiago de Chile indicated that this gender difference was reversed or that it did not exist in this respect. Lakoff's (1975/2004) hypotheses also stimulated a body of research on gender and expletives that was initially based on questionnaires. Earlier studies that emerged in the USA revealed that single, younger and feminist women reported using expletives more than married, older and non-feminist women (Bailey & Timm, 1976; Oliver & Rubin, 1975) or did not find a difference in comparison to men (Staley, 1978). Later research in other countries re-emphasises women's heterogeneity, noting that British women from the lower working class regularly use these kinds of words (Hughes, 1992), or a lack of gender differentiation among teenagers of an English-speaking school in South Africa (De Klerk, 1992).

3. Brown & Levinson (1987) elaborated a politeness theory based on Goffman's (1967) notion of 'face'. These authors distinguished between a negative and a positive face, which were defined as «the desire to be unimpeded in one's actions (negative face), and the desire (in some respects) to be approved of (positive face)» (1987, p. 13). From this perspective, positive politeness strategies are those oriented to protect the speaker's and/or the recipient's positive face, for example, by seeking agreement, while negative politeness strategies intend to avoid impositions on the speaker and/or the recipient, for example, by avoiding promises and strong assertions.
Chun (1991, cited in Lozano Domingo, 1995) carried out a study among Spanish speakers from Madrid, which found that men and young people without higher education used most of the lexical items under study to a greater extent than women and young undergraduate students, although joder was one of the expletives with a more balanced frequency of use between women and men. More recently, studies based on the analysis of spoken interactions have stressed the links between masculinity and the use of expletives, insults and generally taboo words, as well as their special significance in men's talk as forms of camaraderie, in line with the classical study by Labov (1972) on ritual insults among male teenagers from New York. See, for example, the studies by Pujolar (1997) on two youth gangs from Barcelona, Zimmermann (2002) on young men from Spain, Mexico and Uruguay, or Martínez Lara (2009) on students from the Universidad Central de Venezuela. Coates (1996, 2003) analyses friendly conversations among British speakers and remarks on the importance of taboo words as devices to express solidarity and to reinforce hegemonic masculinity in all-male interactions, while this type of language is much less frequent in all-female data (Coates, 2003). The magnitude of this gender difference is illustrated by highlighting the case of fuck, which is usually taken as the English equivalent to joder: «the word fuck and words deriving from it (fucking, fucked, fucker, etc.) appear 72 times in the stories in the all-male sub-corpus, 12 times in the mixed sub-corpus, and not at all in the all-female sub-corpus» (2003, p. 45). My previous research on third-party complaints in interactions in Spanish and Galician among friends and relatives (Acuña, 2002/2003, 2004, 2011a, 2011b) found that expletives were not only employed by male participants but played a prominent role in their speaking style, making their indignation displays especially 'aggressive'. While it seems that gender continues to make a difference in the use of expletives, even among young people, other recent studies on interactions in English and Spanish that focus on teenagers or include young speakers point to the growing importance of expletives and taboo words in girls' talk, paying special attention to joder and the English fuck. Stenström (2006) compares the occurrences of taboo words in two subsets of The Bergen Corpus of London Teenage Language (COLT) and the Corpus de Lenguaje Adolescente de Madrid (COLAm), which are constituted by colloquial conversations among teenagers from London and Madrid, respectively. This comparative analysis is restricted to interactions among girls, in which fuck and joder were the most frequent taboo words. Regarding gender differences, Stenström refers to previous research on the COLT (Stenström et al., 2002), in which fuck/ed/ing «occurred more than twice as often in boys' as in girls' speech» (Stenström, 2006, p. 122).
However, in a later study, Stenström (2014) emphasises that gender differences are diminishing in terms of frequency: «the girls are catching up with the boys when it comes to frequency, while there is still a difference in the type of words, in that the rudest words are used by boys» (2014, p. 11). Murphy (2009) provides a quantitative and qualitative analysis of fuck (including its variations fucking, fucked and fucker) in Irish English, relying on data from the Corpus of Age and Gender differentiated spoken Irish English (CAG-IE), which allows comparisons to be made according to gender and three age groups: 20s, 40s and 70s/80s. The analysis found that the highest number of occurrences of fuck and its variations corresponded to males in their 20s (111), which was more than double the number for females in this age group (51). Male adults of the 40s group also used these forms much more than their female counterparts (65/18), while the oldest male and female groups did not use it at all. The qualitative analysis is focused on fucking as the most common form, which is said to fulfil two functions: as an 'amplifier' to express emotions and attitudes, and as a premodifying intensifier. Generally, Murphy stresses the more frequent use of fuck and its variations in the male data as a marker of masculinity, but it also seems worthy of mention that the number of occurrences of these forms in the female 20s group is close to the male 40s data (65/51). In sum, research based on the analysis of interactions confirms that men generally use expletives and taboo words more than women, even in the case of youths, according to gender-based traditional rules. However, as men's employment of this type of language is often emphasised as a marker of masculinity, the relatively high frequency of use among young female speakers that has also been found equally deserves due attention, as it suggests a possible sociolinguistic change in process and/or underlines the contextual variability of gender. In addition to this, more qualitative approaches, which allow us to observe in detail how expletives are used and can function in interaction, are lacking. Since many studies on expletives tend to prioritise quantification, they reduce the functional properties of these kinds of words, reaching results that they cannot explain. For example, in her study of taboo words in Spanish and English conversation among female teenagers, Stenström (2006) only differentiates on a discourse level between a phatic function of taboo words and a non-phatic function, «where the taboo words only reflect the speaker's moods» (2006, p. 129). In her conclusions, she comments: «it came as a surprise that the Spanish word joder occurred almost as frequently as English 'fuck', despite the fact that 'fuck' not only appears in more forms but is also used for more [grammatical] functions» (2006, p. 135). The reasons for this balance could have much to do with the fact that joder is used for more discursive and interactional functions than those differentiated in this study, in contrast to its grammatical invariability, as I aim to demonstrate in the following section. DATA AND ANALYSIS In this section I will analyse conversational data that were taken from the Corpus of Galician/Spanish Bilingual Speech of the University of Vigo (Corpus de Fala Bilingüe Galego/Castelán, abbreviated as CoFaBil; Rodríguez Yáñez & Casares, 2002, which was collected in Galicia (Spain) through participant observation with a hidden microphone. 
Overall, this corpus comprises around 250 hours of audio-recorded material. The data collection was carried out from an ethnographic perspective, that is, the main interest was to obtain naturally occurring interactions: a high proportion of the recordings were made at the home of the participants (chats over coffee, family meals, etc.), besides conversations between neighbours, housewives, students, returned emigrants, interactions between infants and carers, interactions in groups of friends (male, female and mixed), with strangers in the street, telephone conversations and also in all types of public settings: urban and village markets, groceries, department stores, chemist's shops, cafeterias, bars, hairdresser's, etc. (Rodríguez Yáñez & Casares, 2002). The CoFaBil was part of research projects in which the author herself participated as a researcher. Specifically, I contributed to the data collection, acting as a participant observer in the natural interactions of my own groups of friends, relatives, and acquaintances (Acuña, 2009, 2011a) 4 . I also collaborated in the data transcription, employing a system of conventions that is adapted from Álvarez Cáccamo (1990). As is usual in the transcription of data by conversation analysts, these conventions pay close attention to the facts concerning the turn-taking system and the most relevant prosodic phenomena, as well as to aspects of non-verbal communication that can be perceived in audio recordings, such as claps or whistles. Unlike other systems, however, each line of the transcription tends to correspond to a semantic-syntactic or subintonation unit, as it coincides with an 'intentional movement' (Rodríguez Yáñez, 2007), which is defined as the minimal unit in the conversational process. In this way, ellipsis, segmented constructions or self-repairs are considered to be micromoments of the discursive construction that can be better visualised in the transcript (Rodríguez Yáñez, 2007, p. 44). The Appendix provides an abbreviated version of these conventions, in which I have only included the symbols that appear in the conversational extracts to be analysed here. The present study is based on five colloquial conversations among friends from the CoFaBil that were audio recorded in the late 1990s or early 2000s, as is the case of the Corpus de Lenguaje Adolescente de Madrid (COLAm) (Stenström, 2014, p. 2). The participants are all female friends in their early 20s, mostly undergraduate students, middle class and speakers of Spanish and Galician. These conversations were selected because they constitute naturally occurring talk among young women and because of my previous work on them to address different issues, such as the construction of femininities (Acuña, 2009, 2012, 2017a, 2017b), complaints (Acuña, 2011a), humour (Acuña, 2012) and storytelling in conversation (Acuña, 2020). In doing this previous work, I realised that joder generally played an important role in these interactions and that it was used in different ways, so I initiated an analysis focused on this form, observing each of its occurrences in the five interactions to assign it a communicative function. I noted the position of joder in the turn, which was conceptualised as «a communicative unit in speech that is both communicatively and pragmatically complete» (Ronald & McCarthy, 2006, p. 928).
Also, I paid attention to prosodic realisation, the communicative purpose and the interpersonal and/or textual level on which joder operates (Stenström, 2014) as well as to the type of conversational sequence or «more global communicative patterns» (Drescher, 1997, p. 239) in which it was used. In this way, I came to categorise six uses and functions of the interjection joder, which are addressed in the following subsections. There were cases that remained unclassified to which I plan to return in future studies. Reinforcing Speech Acts One of the most frequent occurrences of joder in the data is found in turn-final positions as the 'closing marker' (Drescher, 1997, p. 239) of a statement, directive or request, serving to strengthen the illocutionary force of these speech acts. Following Briz (1998, pp. 128-135), these uses would be included in the category of 'pragmatic modifiers' that intensify the speaker's attitude and are dialogically oriented to emphasise agreement or disagreement. For example, in (1), joder is used by Raquel to strengthen a directive and to emphasise agreement with previous turns of Silvia, who asked for a cigarette and expressed a desire to smoke after a long time without having done so. The performance of said directive by using the verb envenenar ('to poison') and its final reinforcement by means of joder can also be attributed a humorous intention (lines 711-712): (1) Extract (2) illustrates several uses of joder in a segment of direct reported speech. The speaker, Natalia, is staging a discussion she held with her cousin about the sexual problems the latter had with her boyfriend. Natalia tries to convince her cousin that she should leave this relationship (2): (2) Natalia describes her cousin's attitude towards her own boyfriend as that of a samaritana ('samaritan', line 79) and then employs the marker en plan ('like', Virginia acuña Ferreira Gender and expletives as discourse markers: Some uses of joder in young women' s interactions in Spanish and Galician line 80) to frame a direct quotation of this character (Stenström, 2014, pp. 93-94). This quotation serves as a 'demonstration' (Clark & Gerrig, 1990) of her cousin's sympathetic attitude, as she defends her boyfriend, arguing that he has a problem (lines 81-82). Note that the quoted character employs joder in turn-initial position as a marker of her disagreement with Natalia, another function that is addressed specifically in the following subsection. The reporting continues with Natalia's response, which is constituted as an 'agreement-plus-disagreement turn shape' (Pomerantz, 1984, p. 72): it is prefaced by agreement components, including vale ('okay', lines 83-86) as a marker of concession (Briz, 1998, p. 182), followed by pero ('but') to articulate opposition (lines 87-88). After a micropause (line 89), Natalia claims the impossibility of the relationship, making a statement that is reinforced by using joder in the turn-final position (lines 90-91). She then uses sabes? ('you know', line 92) as a 'trigger' (Stenström, 2014, p. 58), inviting the participants to produce an understanding or agreement response with this perspective. One participant displays such affiliation by laughing (line 93), as Natalia did in the previous turns (line 91). Natalia reasserts her point of view, increasing the volume and using joder again as the closing marker of a statement (lines 94-95). This prosodic emphasis and the use of joder also function here as triggers, inviting agreement responses. 
Begoña provides such a response in the following turn (line 96) through a repetition of Natalia's previous statement. Marking Disagreement In her analysis of the ways in which agreement and disagreement turns are shaped in conversation, Pomerantz (1984, p. 72) notes that disagreement, as a dispreferred action, tends to be delayed or prefaced in some ways, for example, by means of vocalisations such as uhs or particles like wells, «thus displaying reluctancy or discomfort» (1984, p. 72). This corpus of young women's interactions provides some instances in which joder functions similarly as a disagreement marker in the initial position of a turn that is clearly reactive to the previous ones as a disagreement response, thus serving an interpersonal function. These kinds of turns have been found in sequences of discussion or in segments of direct reported speech in storytelling sequences that stage such conversational activity, as we saw in extract (2) regarding the first occurrence of joder. In comparison to the use of well and the vocalisations described by Pomerantz (1984), the use of joder as a disagreement marker may be surprising, from the perspective of politeness, because it is linked to strongly negative stances, so this does not seem the best way to introduce a non-preferred reaction, as it is disagreement. The examples of this use seem then to reinforce the normalisation of this expletive among the speakers, while there are cases in which it is followed by the vocative tío/a as a hedge or mitigator to emphasise social bonds between the interlocutors when performing face threatening acts (FTA) (De Latte & Enghels, 2019; see also Edeso Natalías, 2005) 5 . Extract (3) provides an example of this use of joder followed by the vocative tía in expressing disagreement. Eva is talking about a boy with whom she has recently started an intimate friendship, praising the fact that he gets along with his ex-girlfriends. There is a second use of joder in this extract that fulfils a different function: Eva explains her positive evaluation of the boy based on his good relationships with his ex-girlfriends, noting that she herself also will be an ex-girlfriend (lines 52-65). In overlap, Cris expresses disagreement by using joder tía ('fuck, girl', lines 66-67), followed by a questioning repeat (Pomerantz, 1984, pp. 71, 77) that displays strangeness and a critical attitude towards Eva's reasoning (line 66). While joder marks this disagreement, the vocative tía that follows serves as a politeness strategy to mitigate such FTA. Eva intends to respond to this implicit criticism by using no ('no') and pero ('but') with increased volume (lines 68-69), followed by joder (line 70), but then a micropause is produced (line 71) and Cris emphatically asserts her understanding of Eva's point of view (lines 72-73). In view of Eva's hesitation here in trying to reply to a disagreement with Cris (lines 68-70), the second use of joder by this participant in line 70 can be interpreted as a means to retain the turn and to take time to organise the discourse; that is, it can be interpreted as a 'filler' (Cortés, 1991) or 'stalling' device (Stenström, 2014, p. 92). This is another use of joder that will also be specifically addressed in another subsection. In extract (4), joder is followed by pero ('but'), which also serves to mark, or rather to reinforce in this case, a disagreement response (Briz, 1998, pp. 182-185). 
Once more, the participants are talking about sexual/affective Virginia acuña Ferreira Gender and expletives as discourse markers: Some uses of joder in young women' s interactions in Spanish and Galician relationships with boys. Ana reproaches Begoña for not taking the opportunity to go with a boy to his friend's apartment: After a micropause (line 83), Natalia contributes to the discussion by using joder in turn-initial position, but what she says next is not intelligible (lines 84-85). This use of joder by Natalia seems to mark a disagreement response to her friends, as can be deduced from the following turns in which both Ana and Begoña disagree with this participant (lines 87-89), using no creo ('I don't think so') and qué va ('come on'), which is also described as a disagreement marker or 'objecting' device by Stenström (2014, pp. 77-80). The discussion continues, as Natalia introduces ah ya ('oh yes', lines 90-92) as a 'weak agreement component' (Pomerantz, 1984, p. 72), and next she uses pero ('but') to articulate disagreement. Ana disagrees with her (lines 93-99), using joder followed by pero (lines 93-94). Thus, joder as a disagreement marker can be reinforced with pero, which also serves to mark the transition from a weak agreement to a disagreement (lines 90-92). Generally, this extract (4) provides a good illustration of the use of different disagreement markers in a discussion sequence. Marking Complaints The emotive meanings of joder as an interjection that expresses 'irritation' or 'annoyance' (RAE & ASALE, 2014) make it useful as an 'affective key' (Ochs & Schieffelin, 1989) to contextualise the speaker's discourse into a complaint frame in such a way that it can be attributed a 'complaint marker' function in certain contexts. From a broad perspective, complaints have been defined «as expressive acts wherewith the speaker, or complainer or complainant, expresses a variety of negative feelings or emotions» (Padilla Cruz, 2019, p. 23) in relation to a situation and/or someone's behaviour. According to this definition, the use of joder as a complaint marker has been found in sequences in which the speaker is talking about someone's behaviour that made her feel bad or about something that is reported as an injustice. In these cases, joder appears in turn-initial position as an independent sustained or rising intonational unit, sometimes along with other prosodic features, such as vowel elongations or heightened volume. It seems to play a key role in marking the speaker's negative attitude to guide the listener in the interpretation of the discourse as a complaint to get empathy or support. Thus, these uses do not only fulfil an expressive function but also an interpersonal one. In extract (5) joder is used in a segment of direct reported thought (DRT), previously analysed in Acuña (2020). Eva is telling a story about what happened one day when she was home alone, thinking sadly about the fact that her male friend had not phoned her, although he knew she had an exam the next day. These thoughts are staged as a complaint that is prefaced by joder with vowel elongation, followed by utterances that also present this prosodic feature (lines 970-972) to communicate sadness. Such a complaint is not only oriented to get sympathy and support but also more generally to emphasise the point of the story, as Eva finally receives a message from the boy that radically changes her mood (lines 974-980): Extract (6) similarly shows the use of joder in storytelling. 
Lara is saying that she looked at some shoes in a store, and then she negatively evaluates their price as too high. Following this, she complains about this price as an injustice, claiming that she could find the same shoes in other stores for half the price. This claim is prefaced by joder, which here constitutes a rising intonational unit that is close to the exclamatory intonation linked to interjections (lines 94-99): Virginia acuña Ferreira Gender and expletives as discourse markers: Some uses of joder in young women' s interactions in Spanish and Galician In this extract, note that another participant, Iria, reacts to Lara's turn specifying the price of the shoes (lines 88-90), expressing astonishment by means of the Galician interjection arre carallo ('damn it', line 91) to support her friend's view that it was too high. As is shown in the following subsection, these data include some examples in which joder is also used for this purpose -to express surprise-as well as other emotive reactions. Expressing Minimal Emotional Assessments As noted in Section 2, Stenström (2006) attributes two discursive functions to taboo words in general: one phatic, serving to maintain contact between the speaker and the listener(s), and another non-phatic, which is oriented to communicate the speaker's feelings, the prototypical function of interjections (Drescher, 1997, p. 234). One example of the phatic function provided by Stenström (2006, p. 128) shows the use of joder as one participant's reaction to storytelling. In this case, joder constituted a turn by itself and expressed surprise, as we have just seen regarding arre carallo ('damn it') in extract (6). Extract (7) offers another similar example, in which joder is used for this purpose. As in (6), the participants are talking about clothes, and joder is employed as a reaction to the price of an item of clothing, indicated by the previous speaker (lines [1391][1392][1393]. In this case, the interjection is used by two participants in overlap. Also, note that it is produced with a marked elongation in the first vowel, which contributes to emphasising the meaning of this reaction as surprise: 1389 Begoña eso hice yo con una chupa de cuero en el corte → In this example, the display of surprise by means of joder in response to the price previously indicated by Begoña implies that both participants consider it very high. This use could be included along with other forms such as anda ('come on') and vaya ('wow'), which are classified by Stenström (2014, pp. 80-81) as reactive markers 'showing surprise'. However, there are examples in which joder is again used alone in response to a previous turn, displaying other emotive meanings. In extract (8), it is produced by Begoña (line 647) in reaction to a story told by Silvia about a woman who found a rose in her son's room. The woman thought it was a gift for her, but it was for the boy's girlfriend. Note that joder is also produced with a vowel elongation: Silvia In this case, joder does not display surprise but a feeling of sadness or sorrow in reaction to the reported event, thus implying that the speaker considers it worthy of empathy. The same participant, Begoña, reinforces this evaluation later by saying qué putada ('It's a real bugger', line 656), with vowel elongation, in line with a previous expression of sympathy by Silvia: pobriña ('poor woman', line 655). In agreement with Drescher (1997, p. 
238), I thus consider that uses of this kind generally constitute «more than a purely phatic feedback» and should be more globally accounted for as «emotional assessments or declarations of attitudes which are typical minor speaker contributions» (Drescher, 1997, p. 238). Correcting and Stalling While the uses of joder examined so far are mainly oriented to interpersonal goals or operate on both an interpersonal and a textual level, the 'correcting' and 'stalling' functions to be addressed here are purely discourse oriented. These uses have been found to be frequent in the talk of a same speaker -here, Eva-especially with respect to the stalling function, as shown in extract (3). The correcting function was identified in extract (9), which provides an excellent example of how joder can be used like the marker o sea ('that is') to introduce a reformulation or self-repair (on the correcting function of o sea, see Cortés, 1991, pp. 59-60;Stenström, 2014, p. 86). Eva is trying to describe the personality of her male friend and partially repeats a self-repair to correct her use of raro ('odd man'), replacing it with difícil ('difficult man'). The first self-repair is introduced by o sea ('that is'), while the next one is preceded by joder (lines 521-527): (9) As previously explained, the use of discourse markers on a textual level as stalling devices (Stenström, 2014) or fillers (Cortés, 1991) means that they serve «to gain time and think of what to say next» (Stenström, 2014, p. 92), often when the speaker is hesitating. Cortés (1991, p. 29) also argues that if a filler is often used by the same speaker, it would constitute a muletilla ('pet word' or 'tag'). This seems to be the case for Eva. For example, extract (10) shows that this participant uses joder twice in this way, while talking about the dialogue she had with her intimate male friend about the future of their relationship: (lines 773, 777-782, 786-789), suggesting that such a relationship could not be very long (lines 786-789). The reporting is interrupted by the speaker, as she leaves the utterances unfinished, turning to comment on the moment the conversation had been maintained (lines 787-790). This parenthetical comment is contextualised by a prosodic change, as it is produced with a piano voice (line 790). In the following turns, the speaker's expressive hesitations become more noticeable, as there are repetitions of the connector y, but new information is not added (lines 791-795). The first use of y ('and') is followed by joder (line 791) as a filler, and then sabes? is produced as a contact marker to invite listeners' cooperation (Briz, 1998, p. 224f.;Stenström, 2014, p. 72), reinforcing the speaker's expressive difficulties. After a micropause (line 796), Eva summarises what the boy described as an informal relationship, employing the noun rollos ('flings', lines 797-798) and again hesitates in evaluating this, making repetitions and using joder once more as a filler (lines 799-801). According to Coates (1996), women frequently use different forms that serve as hedges in all-female interactions because they often talk about very personal and sensitive topics, while the same forms can also be interpreted as stalling devices because of the difficulties in talking about such issues (see also Stenström, 2014, p. 92). The sensitive topic that is talked about in this interaction can also explain the frequent use of joder as a filler by Eva, while it seems unlikely that this expletive could also be employed as a hedge. 
CONCLUSIONS The analysis given in this paper demonstrates that joder has developed multiple functions in interaction that are derived and/or linked to its primary use as a 'vulgar' interjection to express the speaker's feelings. This way of developing multifunctionality is characteristic of discourse markers (Blas Arroyo, 1995, p. 87), a pragmatic category in which joder also should be included. Thus, this study provides empirical support for arguments in favour of considering interjections as possible discourse markers in contrast to assumptions that these kinds of words only fulfil communicative functions to which they correspond in accordance with their grammatical category, such as the expression of the speaker's feelings. This assumption seems to be an important reason why research on expletives has generally used quantitative methods. In contrast to this, I conclude that claims concerning the functions of expletives and interjections in general should be based on empirical study and not on their grammatical categorisation. Furthermore, such empirical research should include more discourse-analytical approaches since these are precisely the ones that allow us to observe in detail the possibly different ways in which they are used in interactions. From a gender viewpoint, the analysis suggests that joder can also be attributed a sociolinguistic function as a marker of 'young femininities', as it has illuminated how this expletive is integrated into young women's speaking style in contrast to gender-based traditional rules and broader characterisations of 'women's talk' in Language and Gender research. If we also consider quantitatively oriented studies on female teenagers from London and Madrid (Stenström, 2006), the use of this expletive as a discourse marker provides a key to explaining its similar frequency to the use of fuck in English, which is much more grammatically variable. Also, it can explain that gender differences in the use of these words are diminishing in terms of frequency of use between girls and boys, according to Stenström (2014, p. 11). Future research should explore this apparent process of diminishing gender differences in other geographical areas as well as the possible explanations for it. In the following, I raise several hypotheses related to this. On the one hand, we can hypothesise that young females are, to some extent from a gender perspective, consciously triggering a sociolinguistic change in the use of expletives and taboo words by making regular use of them, apparently the least 'strong' words, to symbolise or to claim equality with boys. This is in line with suggestions made by other researchers (López García & Morant, 1991). Such a process can be related to the fact that in Spain, as in other countries and societies, people are becoming aware and critical of gender constructions, as gender equality has been playing an increasingly prominent role in recent decades in the political agenda and the media. On the other hand, we can also hypothesise that such a challenge is limited to the life stage of the speakers. After all, young women's actions against conventional ideas of femininity are not new, but they have been around for centuries, as Nakamura (2014) demonstrates in her historical discourse analysis of 'schoolgirl speech' in Japan at the end of 1880s. Generally, the use of taboo words by young speakers is interpreted as «a means to provoke the older generation and to oppose authority» (Stenström, 2006, p. 124; see also Stenström, 2014). 
However, it should also be remarked that girls, in contrast to boys, are not only challenging adult norms but also traditional rules on femininity. Pichler (2015) reviews recent ethnographic studies on young women's displays of verbal toughness, noting that «there does not appear to be a consensus about the extent to which this toughness ultimately empowers the girls» (Pichler, 2015, p. 198), as some of them opt for changing their speaking styles over the years because of new personal and professional situations. If rebellious performances of this type are not beneficial for them in the long term, young women's challenges to gender norms by using expletives and taboo words would lose strength as they grow older, and thus this phenomenon could be limited to this youthful stage. Lastly, we should also consider, following Murphy (2009), the influence of music, film and television, which «have pushed the boundaries of expletive use, where a word like FUCK, which was once considered taboo, is now being regarded as commonplace» (2009, p. 87). From this perspective, young women's use of joder and other expletives could be, if only partially, a reflection of these processes, while young men could continue to underline gender differences in talk by selecting and focusing on those forms that are still severely stigmatised. Research on the use of expletives could delve into gender issues by exploring these possibilities, employing different quantitative and qualitative research methods.
Experimentation and modeling of soil evaporation in underground dam in a semiarid region In semi-arid regions, there is a high evaporation, which leads to soil dryness, interfering in the availability of water in the soil. Usually it is difficult to measure and model the evaporation due to the complexity of the available methods, the low soil water content and the low concentration of water vapor in the air. This can also make it difficult to monitor and simulate the evapotranspiration in these regions. Thus, the Portable Chamber method is used to directly measure evaporation and evapotranspiration, because this technique allows real time estimation and in short time intervals, giving a more detailed estimation of those processes. The objective of this study was to evaluate the evaporation through the mass transfer in the soil in an underground dam under different water table depths and conditions of the semi-arid environment of Pernambuco State in Brazil, through the values predicted by the SiSPAT model and measured by the portable chamber method. For the purposing of modeling and also to better know the soil behavior, soil hydraulic properties were determined though the Beerkan method. The portable chamber method was applied for one of the first times in a semi-arid region of Northeastern Brazil, and it was consistent with the potential evaporation of bare soil, reaching about 1,800 mm per year. The SiSPAT model was quite satisfactory for simulation of soil evaporation in different conditions of the water table depth. The values found for soil evaporation with the simulation of the SiSPAT and the Portable Chamber (PC) method differed in 1.43% and 4.44% for cases where the water table was at 0.20 and 1.20m of depth, respectively. INTRODUCTION The estimation of soil evaporation is fundamental for planning the irrigation and management of the agriculture. In semi-arid regions, the evaporation may reach high values such as 2000 mm/year, which causes problems of soil dryness and yield loss for some crops. These aspects are usually associated with social problems, which are related to water shortage. Some social technologies may be used in the semiarid to storage water, and evaporation may interfere in the storage and availability of water in the soil. Within the technologies used, underground dams (UD) are of fundamental importance in subsistence agriculture. The evaporation in these structures depends on the meteorological conditions, water table and soil hydraulic properties of the vadose zone. There are few studies on evaporation on this type of infrastructure (UD) also known as social technology based on rain water harvesting (QUILIS et al., 2009;LASAGE et al., 2008). Estimation of evaporation in these ecostructures is fundamental for quantifying the water balance, and then to help the adequate management aimed to water conservation purposes, to achieve higher crop yields and to control soil and groundwater and soil salinization processes, especially in the semiarid. In these regions, the water vapor movement is an important part of the total water flow, where the water close to the soil surface is usually scarce (BITTELLI et al., 2008). In these conditions, the measurement and modeling of evaporation have shown difficulties. That happens due to the complexity of the more readily available methods, the low soil water content and the low concentration of water vapor in the air. These aspects may turn difficult the estimation of the evapotranspiration in the region. 
Portable Chamber (PC) method is used to directly measure evaporation and evapotranspiration. This technique allows real time measurements and in short time intervals, giving a more detailed estimation of the evaporation and evapotranspiration processes. This technique has been used to measure water loss in tree tops (PONI et al., 1997), scrub (CENTINARI et al., 2009) and herbaceous crops (BALOGH et al., 2007;BURKART et al., 2007) and in forests in a semi-arid region (RAZ-YASEEF et al., 2010).There is no standardized form and size of such equipment, with a great variability in the contact area of those reported in the literature, such as 1.5m 2 (LUO et al., 2018), 0,35m 2 (MCLEOD et al., 2004), 0.28 m 2 (CENTINARI et al., 2009), 0.54m 2 (PICKERING et al., 1993). Despite this, usually, the consistency of chamber measurements has been confirmed in the field by a comparison with other methods. Mathematical modeling of water and vapor fluxes in soil may be performed through several numerical models. The SiSPAT model (Simple Soil-Plant Atmosphere Transfers) is an one-dimensional model (vertical), supplied with climate time series of temperature and air humidity, wind speed, global and atmospheric radiation and rainfall. Since its first presentation in the literature (BRAUD et al., 1995a), SiSPAT has been continuously validated in different types of vegetation and soil. It has been subject to different climate and environmental conditions (MORET; BRAUD; ARRÚE, 2007;SOARES et al., 2013). Particularly, the SiSPAT model has been applied with good performance in the semiarid of Brazil (SOARES et al., 2013;AMAZONAS et al., 2015). As usually, application of direct methods for estimation of evapotranspiration are difficult to apply and do not allow analyses of different management scenarios, validating a mathematical model is very important. The objective of this study is to evaluate the evaporation in a UD, through the water transfer in the soil considering different water table depths and in the climate conditions of Pernambuco State semi-arid region. Both Portable chamber and mathematical model are used. MATERIALS AND METHODS The study area is located in the physiographic zone of "Agreste", district of Mutuca, in the city of Pesqueira, semi-arid region of Pernambuco State, in Brazil. The Mutuca Valley is located within the semi-arid North East Brazil in which local communities rely on the groundwater as a resource for irrigation of crops, the predominant source of income. The climate of this location is classified according to Köeppen as BShw' hot semi-arid, hyperxerophilic savanna, with the annual average temperature around 27 °C, annual average relative air humidity of 73% and average wind speed of 2.5 m/s (MONTENEGRO; MONTENEGRO, 2006). This area is subject to high intensity rains, and occurring mostly during few months. These are concentrated in the first semester, with high variability in the rainfall regime, which has an annual average of approximately 630 mm. The evaporation in the dry months (September -November) corresponds to approximately 49% and 51% of the total annual evaporation in the nearby cities of Caruaru and Arcoverde, respectively (ALMEIDA, 2006). Through data from Class A pan evaporation of the surrounding cities, it was observed that the annual average evaporation is 2,400 mm in the city of Arcoverde (35 km away from Pesqueira) and 2,111 mm in the city of Caruaru (100 km away from Pesqueira). 
The goal of this article is to study the UD "Cafundó II", which has a maximum depth of 5.5 m, axis extension of 42 m and an upstream range of about 1,300 m (ALMEIDA, 2006). The topography of the area upstream the underground dam where the experiment was located is flat and slightly undulated. The underlying geology of the region is characterised by crystalline basement of low hydraulic conductivity and low infiltration capacity. The poor hydraulic properties of the underlying crystalline basement mean groundwater storage and extraction is limited to deposits of alluvium within the base of the valley. Valley sediments are approximately 4-10m deep, extending within the valley base approximately 300m in width and 15km in length (MACKAY et al., 2005). The high permeabilities of the alluvial sediments provide for rapid groundwater movement under natural conditions (UNITED KINGDOM, 2006). However, groundwater flow and storage has been controlled in part by natural barriers to flow where hard basement rocks are exposed at the surface within the valley base and complemented by a series of underground dams installed at roughly even spacing along the valley. Cafundó II is one of these UD. Soil physical characterization and experimental design Infiltration tests were performed aiming the estimation of the soil hydraulic properties at the study site. The infiltration tests under Beerkan methodology were performed along 9 points. In the upstream dam area were located the nine experimental points, A1 to A9, for performing the infiltration tests. Four points (B1 to B4) were chosen for setting the experimental device for the estimation of the soil evaporation (application of Portable Chamber method) (Figure 1). The preliminary characterization of the soils was done through sample collection and laboratory analysis. The objective was to obtain the granulometry and soil specific mass. The granulometry was obtained through acombination of analysis by sedimentation and screening. The clay and silt fractions were determined by sedimentation. The coarsest fraction by sieving (EMBRAPA, 1997). To obtain the soil specific mass standard volume samples were extracted (86.75 cm 3 ) using a cylindrical collector of UHLAND type (SOUZA et al., 2014). As the water table is at shallow depths, the soil was excavated so that the groundwater was reached. Then, the water table depth was measured using a metallic tape. Determination of soil hydrodynamic parameters: Beerkan Methodology For the modeling of water transportation in the soil, it is fundamental to know it´s hydraulic properties, such as water retention curvesθ(h) and hydraulic conductivity, K(θ). In this study, the hydrodynamic properties were determined by Beerkan infiltration method, detailed in Souza et al. (2008). This method estimates θ(h) and K(θ) parameters considering the texture and structure of the soil. In this semi-physical method, θ(h) and K(θ) are described analytically by five parameters: two related to shape, m or n and η, mainly related to the texture, and three normalizations θs, Ks and hg, conditioned to the soil structure. Beerkan method uses a simple circular ring of copper and provides the axisymmetric three -dimensional infiltration as a function of time, I3 (t). The vegetation of the surface is removed while the roots remain in place. Samples of the soil extracted with paddles and aluminum cans are collected to determine initial and final soil moisture. The ring is inserted into the soil to a depth of 1 cm in order to avoid lateral losses. 
At the beginning of the test, a known volume of water is poured inside the ring and the time needed for this volume to infiltrate is measured. When the first volume has completely infiltrated, a second volume is added to the cylinder and its infiltration time is measured, and so on, in cumulative form, until the time difference between five successive volumes becomes constant. The shape parameters are obtained from the porosity and the particle-size distribution curve, while the normalization parameters are identified from the infiltration curve using the BEST method (Beerkan Estimation of Soil Transfer parameters; LASSABATÈRE et al., 2006). Portable Chamber Method With this method, the water vapor flux between the soil surface and the atmosphere is measured over small areas (DUGAS et al., 1997). This is done by enclosing a known volume containing vegetation, soil surface, or both, and measuring the increase of vapor density inside the Portable Chamber (PC). The maximum rate of change in water vapor density with time is proportional to the evapotranspiration flux of the surface delimited by the Portable Chamber (STANNARD, 1988). The evapotranspiration rate, ET (mm/day), is calculated using equation 1: ET = 86.4 · M · V · C / A (1), where M is the maximum slope of the water vapor density curve (g/(m³·s)); V is the internal volume of the Portable Chamber (m³); C is the Portable Chamber calibration factor; A is the surface area covered by the Portable Chamber (m²); and 86.4 is the factor that converts g/(m²·s) into mm/day, taking the density of water as 1 g/cm³. Based on the design presented by Stannard (1988), the Portable Chamber was built by heat-forming a 4 mm thick acrylic sheet over a wooden mold, producing a hemispherical dome 1 m in diameter with a 0.02 m rim, an internal volume of 0.2618 m³ and a covered area of 0.7854 m². Two fans were placed on opposite sides inside the Portable Chamber to keep the air moving, which was needed to make the conditions inside the chamber closer to natural ones (HEIJMANS et al., 2004). A sensor was installed next to each fan to measure relative air humidity and temperature, from which the absolute humidity is calculated. The same sensors were also placed outside the Portable Chamber (Figure 2). Because it is important to keep the edges of the Portable Chamber sealed, a thermal blanket was used to prevent air flow caused by the wind (CENTINARI et al., 2009). This blanket consists of a double thermal-insulation plastic with aluminum foil on the lower face and a rubberized material about 1 cm thick on the upper face; the latter avoids reflections that could interfere with the chamber measurements by, for example, increasing the soil surface temperature. With internal and external radii of 1.0 m and 1.25 m, respectively, the blanket ensures thermal insulation and prevents vapor losses from inside the Portable Chamber. The Portable Chamber was placed over four flat areas upstream of the underground dam, marked here as B1, B2, B3 and B4 (Figure 1). A datalogger recorded measurements every second during runs of about 8 min. Between runs, the Portable Chamber was lifted for at least 1 min so that the air humidity and temperature inside could re-equilibrate with the external environment.
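To make the chamber calculation concrete, the sketch below (Python, illustrative only) estimates ET from a short time series of temperature and relative humidity recorded inside the chamber, following equation 1. The conversion of relative humidity to absolute humidity via the Magnus saturation-vapor-pressure formula and the window length used for the slope are assumptions of this sketch, not details given by the authors; the chamber constants are the values reported in this paper.

```python
import numpy as np

def absolute_humidity(temp_c, rh_percent):
    """Water vapor density (g/m^3) from air temperature (deg C) and relative humidity (%).
    Uses the Magnus formula for saturation vapor pressure (an assumption of this sketch)."""
    es_kpa = 0.6108 * np.exp(17.27 * temp_c / (temp_c + 237.3))  # saturation vapor pressure, kPa
    e_pa = rh_percent / 100.0 * es_kpa * 1000.0                  # actual vapor pressure, Pa
    rho_v = e_pa / (461.5 * (temp_c + 273.15))                   # kg/m^3, with Rv = 461.5 J/(kg K)
    return rho_v * 1000.0                                        # g/m^3

def chamber_et(time_s, temp_c, rh_percent, V=0.2618, A=0.7854, C=1.24, window=60):
    """Evapotranspiration (mm/day) from chamber data, ET = 86.4 * M * V * C / A (equation 1).
    M is taken as the maximum slope (g/(m^3 s)) of the vapor density curve over a sliding window."""
    rho = absolute_humidity(np.asarray(temp_c, dtype=float), np.asarray(rh_percent, dtype=float))
    t = np.asarray(time_s, dtype=float)
    slopes = []
    for i in range(len(t) - window):
        m, _ = np.polyfit(t[i:i + window], rho[i:i + window], 1)  # linear fit over the window
        slopes.append(m)
    M = max(slopes)                                               # maximum rate of vapor density increase
    return 86.4 * M * V * C / A

# Example with synthetic 1 Hz data (values are illustrative, not measurements from the paper):
t = np.arange(0, 480)                      # an 8 min run
temp = 29.5 + 0.0027 * t                   # slow warming inside the chamber
rh = 55 + 0.02 * t                         # relative humidity rising as vapor accumulates
print(f"ET ~ {chamber_et(t, temp, rh):.2f} mm/day")
```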
Fluctuations of the incoming solar radiation and of the air vapor pressure can considerably influence the Portable Chamber measurements (CENTINARI et al., 2009); therefore, the measurements were performed on sunny days. The Portable Chamber calibration factor was determined in the laboratory using the method described by Stannard (1988). The procedure involved evaporating a known quantity of water from a glass beaker placed on a scale inside the PC, while the PC simultaneously estimated the evaporation above the beaker. The procedure was repeated many times in the laboratory, once for each known evaporation rate produced by a heater operated at the following voltages: 50 V, 100 V, 150 V and 220 V. The evaporation measured by the scale and that obtained by the PC method were recorded every second and plotted against each other, and the slope of the best-fit line through the origin was taken as the calibration factor. SISPAT Model (Simple Soil-Plant-Atmosphere Transfer Model) and simulation conditions The model is divided into four modules: soil, atmosphere, the soil-plant-atmosphere interface and soil-plant (BRAUD et al., 1995a). SiSPAT is a computational code written in Fortran. A detailed description of all water and energy transfer processes, as well as of its equations, can be found in Soares (2009). The model can represent soil heterogeneity as a stack of homogeneous layers; in the current study, the soil was assumed homogeneous. The vegetation, when present, is treated as a layer, and two energy balances are considered, one for the bare soil and another for the vegetation; this study does not address vegetated areas. The soil-atmosphere system is represented by two nonlinear equations, the soil surface energy balance and the continuity of mass flow through the soil surface, in which the soil matric potential and the soil surface temperature are the unknowns (BRAUD et al., 1995b). The SiSPAT model was used to simulate soil evaporation in the upstream area of the Cafundó II UD under the same conditions as points A1 and A2. The PC method was also applied, so the two methods could be compared over the period from October 13th to 19th, 2011. Two situations were considered for the model application: in the first, a homogeneous soil layer with the water table at 20 cm depth (A1), measured with a groundwater level meter; in the second, the water table was placed at 1.20 m depth (A2). For the simulation of the transfer processes, the soil profile was assumed homogeneous, since on-site prospections did not show heterogeneities at the observed scale. Therefore, the retention curve parameters (θr, θs, n and hg) and the hydraulic conductivity curve parameters (Ks and η) obtained with the Beerkan method were taken as unique for the whole soil profile. The atmospheric data (Figure 3) used as upper boundary conditions for the SiSPAT simulations (10/13/2011 to 10/19/2011) were: global solar radiation (RG, W/m²); atmospheric radiation (RA, W/m²); air temperature (Ta, K); specific humidity (q, kg/kg); and wind velocity (U, m/s). The values of RG, Ta and U were provided by the ITEP laboratory, from a hydrometeorological station located 35 km from the study area. Data from a station 35 km away were used because the nearest stations did not have continuous, consistent time series for the location where the experiments were carried out.
More precisely, there are no meteorological stations or data collection platforms closer than 35 km to the study site. However, studies indicate that the region is hydrologically homogeneous (KELLER FILHO et al., 2005), so despite the distance the station used lies within a region with the same hydrological behavior as the study site. No rainfall events occurred in the study area during the simulation period. The RA value was obtained as a function of Ta. The vapor pressure was estimated from the air temperature, the atmospheric pressure (Patm) was determined as a function of temperature, and the specific air humidity (q) was estimated from these variables through equation (4). The soil water retention curve, θ(h), and the hydraulic conductivity, K(θ), were described by the Brooks and Corey (1964) and van Genuchten (1980) models (equations 5 and 6), where θ is the volumetric soil water content; θr and θs the residual and saturated volumetric soil water contents, respectively; h the soil matric potential; hg the bubbling pressure, from which water starts to drain from the soil; n a shape parameter; Ks the saturated soil hydraulic conductivity; and η the shape parameter of the hydraulic conductivity curve. From the measured soil temperature and matric potential data, a linear interpolation was performed to obtain values at different depths of the soil profile. The soil temperature profiles used as input data for the initial conditions of both situations are presented in Table 1. As the upper boundary condition, the SiSPAT model takes the temperature exclusively in degrees Celsius, whereas for the lower boundary condition it takes the temperature in Kelvin. In the simulations of the two situations, a soil matric potential profile in hydrostatic equilibrium was used as the initial condition (Table 2). The daily values of soil temperature used as input for the lower boundary condition are presented in Table 3; these were based on values from sensors installed at 10 and 20 cm depth on October 15th, 2011, the third simulation day. The soil matric potential used as the lower boundary condition was assumed to correspond to saturation for both modeling conditions. Soil characterization The physical characterization of the soil, from the analysis of the granulometric curve, shows that sandy material predominates in the study area (Table 4). Table 5 presents the bulk density and porosity (ϕ) of the points used for the SiSPAT simulations; the bulk density was obtained by the volumetric ring method. Figure 4 shows that at point A1 an accumulated infiltration of 160 mm occurred in 850 s, while at point A2 about 960 s were needed for a total accumulated infiltration of 185 mm. The points presented an average infiltration velocity of 1.8 × 10⁻² cm/s, reasonable for a sandy soil. Point 5 was an exception, presenting an average infiltration velocity higher than 5.2 × 10⁻² mm/s, possibly due to the occurrence of stones in the layer adjacent to the surface. Table 6 presents the hydraulic characterization of the soil obtained from the infiltration tests with the Beerkan method. The saturated water content was estimated as 90% of the total porosity, according to Braud et al.
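Equations 4-6 are not legible in this copy of the text. The sketch below (Python) therefore implements plausible stand-ins: the usual specific-humidity relation q = 0.622·e/(Patm − 0.378·e), a van Genuchten-type retention curve with the Burdine condition, and a Brooks-Corey-type conductivity curve, i.e., the forms commonly associated with the Beerkan/BEST parameter set named above. The functional forms and the parameter values are assumptions for illustration, not transcriptions of the paper's equations.

```python
import numpy as np

def specific_humidity(e_kpa, patm_kpa):
    """Specific humidity q (kg/kg) from vapor pressure e and atmospheric pressure Patm,
    q = 0.622 e / (Patm - 0.378 e) (assumed standard form of equation 4)."""
    return 0.622 * e_kpa / (patm_kpa - 0.378 * e_kpa)

def theta_of_h(h, theta_r, theta_s, hg, n):
    """Water retention curve theta(h): van Genuchten-type expression with the
    Burdine condition m = 1 - 2/n, as commonly used with the BEST method (assumed form)."""
    m = 1.0 - 2.0 / n
    se = (1.0 + (np.abs(h) / np.abs(hg)) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * se

def k_of_theta(theta, theta_r, theta_s, Ks, eta):
    """Hydraulic conductivity K(theta): Brooks-Corey-type power law (assumed form)."""
    se = np.clip((theta - theta_r) / (theta_s - theta_r), 0.0, 1.0)
    return Ks * se ** eta

# Placeholder parameters, roughly in the range expected for a sandy soil:
pars = dict(theta_r=0.0, theta_s=0.40, hg=-0.15, n=2.4)        # hg in m of water, n dimensionless
h = -np.logspace(-3, 1, 5)                                     # matric potentials, m
theta = theta_of_h(h, **pars)
K = k_of_theta(theta, pars["theta_r"], pars["theta_s"], Ks=1.8e-4, eta=10.0)  # Ks in m/s
print("q at e = 2.3 kPa, Patm = 97 kPa:", round(specific_humidity(2.3, 97.0), 4))
for hi, ti, ki in zip(h, theta, K):
    print(f"h = {hi:8.3f} m  theta = {ti:.3f}  K = {ki:.2e} m/s")
```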
(2009). The parameters of points A1 and A2 were used for the modeling of the two different conditions. Figure 5 presents the PC measurements of the temperature inside the chamber and of the relative air humidity inside and outside it as a function of time for the first measurement at point B1. It also shows the vapor density curve, which, according to Stannard (1988), behaves proportionally to the evaporation (Figures 5A and 5B). The air temperature inside the Portable Chamber (PC) varied from 29.5 °C to 30.8 °C, an increase of more than 4% in a span of 10 min. The maximum slope of the increase of vapor density with time was 0.0797 g/(m³·s), with R² equal to 0.9973. The drop in relative air humidity inside the PC after each application was caused by lifting the chamber for about 1 minute to equalize the internal air humidity and temperature with the external environment. Soil evaporation - Portable Chamber Method The applications from group B1 and the other groups are presented in Table 7, which shows the initial and final values of temperature and relative air humidity inside the PC, the maximum slope of vapor density with time and the soil evaporation. The calibration factor found in the laboratory was 1.24. The area and volume covered by the PC were 0.785 m² and 0.262 m³, respectively. The analyses were performed in a period representative of the meteorological behavior of the region at the beginning of the dry season. The slopes of vapor density with time and the bare-soil evaporation were higher at the application points of group B2. This happened because: i) at that point the water table is at 0.20 m depth; and ii) the PC measurements at point B2 were performed at the time of greatest solar radiation influence (from 12:20 pm to 1:10 pm). The highest daily evaporation was measured around 12:39 pm. Among the applications performed, group B2 presented the highest average, 5.53 mm/d. This point was close to the temporary rivulet, where the water table was at a depth of approximately 0.20 m. As the soil moisture content was high, soil evaporation was governed mainly by the atmospheric conditions, largely independently of the soil physical properties, and the water flow occurred in the liquid phase because the water table was close to the surface. At point B3, the water table was at 1.20 m and the average evaporation was 1.7 mm/d. With a dry soil surface layer and a deeper water table, the soil evaporation rate was lower and the flow in the soil occurred in two ways: as liquid water and as vapor. Under unsaturated soil or a deep groundwater level, the evaporation process also depends on the soil hydraulic properties, which are functions of structure and texture (SHOKRI; SALVUCCI, 2011). At point B3, close to A2, the highest value of saturated hydraulic conductivity was found (Table 6). The measurement window of this method, between 11:32 am and 2:42 pm, may overestimate the daily evaporation of the bare soil. Almeida (2006), evaluating daily evaporation in four periods, verified that about 70% of it occurs in the second and third periods of the day, that is, from 6 am to 6 pm, in this region. The daily evaporation of 1.6 to 8.3 mm/day, with an average of 3.6 mm/day, corresponds to about 1,800 mm/year, which is within the average range of potential annual evaporation of bare soil in the Brazilian semi-arid region.
Silva and Souza (2011) obtained an average reference evapotranspiration of 1,145.1 mm/year for the state of Pernambuco using the Penman-Monteith method. Souza et al. (2015) obtained actual evapotranspiration values of 1,277.5 mm/year for an area with caatinga vegetation. Rebouças indicates that the expected evaporation range for Northeast Brazil is between 1,500 and 3,000 mm/year. Krishnan et al. (2012) obtained evaporation values varying from 2.80 to 3.60 mm/day in the Arizona region (1,022 to 1,314 mm/year). SISPAT Model The evolution of accumulated evaporation as a function of time obtained with the SiSPAT simulations is presented in Figures 6A and 6B. The accumulated evaporation over the 7-day period was 60.79 mm and 22.87 mm for groundwater depths of 0.2 m (point A1) and 1.2 m (point A2), respectively. With the water table at 0.20 m from the surface, the simulated average evaporation was 8.7 mm/d; with the water table at 1.20 m, the simulated daily average evaporation was 3.3 mm/d. Therefore, under the drier soil surface condition, the evaporation rate is partly controlled by the soil, through its capacity to conduct water from the deeper layers to the surface. The greater the thickness of this layer and the deeper the water table, the more water is retained, because less of it is exposed to the influence of the solar radiation. Thus, water vapor flow contributed when the water table was deeper, and heat transfer by conduction becomes important in quantifying evaporation, according to Boulet et al. (1997). In dry soils, the evaporation of soil water occurs at a certain depth below the surface (SHOKRI; SALVUCCI, 2011); the surface flux is then largely regulated by the depth of the evaporation zone rather than by the surface water content. It should be noted that the hydrodynamic characterization of the soil was performed only at the surface, which could have interfered with the results. For the water table nearest the surface, higher evaporation was found, reaching 10.29 mm on the sixth simulation day (Figure 6A). The evaporation increased rapidly from 10 am onwards on all simulated days, due to the solar radiation. This simulation showed that 79% of the evaporation occurred between 6 am and 6 pm, which agrees with the percentage found by Almeida (2006), who identified a value of 70%. For the water table at 1.20 m, a lower evaporation was found, with a maximum of 5.1 mm on the last simulation day. The sharp increase in evaporation again occurred from 10 am onwards on all simulation days, with 81% of the simulated bare-soil evaporation occurring between 6 am and 6 pm (Figures 7A and 7B). Evaporation in the upstream area of the underground dam The first simulated condition, with the water table at 0.2 m, used the soil hydraulic properties of point A1 (Table 6), similar to the conditions of the Portable Chamber applications of group B2. The evaporation values found with the two methods differ by 1.43%, indicating that the estimates of evaporation by the two methods are equivalent: the evaporation simulated for October 15th, 2011 was 5.61 mm/d, and the value measured with the PC for group B2 was 5.53 mm/d. The simulation performed with the water table at 1.20 m corresponds to the hydraulic soil conditions of point A2 (Table 5). At this point, the conditions are similar to those of point B3, where the PC method was applied.
The evaporation estimated by the SiSPAT simulation for October 15th, 2011 was 1.80 mm/d, while the evaporation obtained with the Portable Chamber method for group B3 was 1.72 mm/d, a difference of 4.44%, again indicating equivalence between the methods. The results may have been affected by the following conditions: the dry period during the Portable Chamber measurements, the critical time of day at which the Portable Chamber was applied, the absence of continuous monitoring of the water table depth, and the use of atmospheric data from a station far from the study area (35 km). Nevertheless, the methodologies used are considered applicable to other areas with similar conditions, and they account for intervening factors in the evaporation process, especially in semi-arid regions, that are in general neglected by other simulation approaches. CONCLUSIONS The SiSPAT model and the Portable Chamber (PC) method can be used to simulate different management scenarios for the underground dam area, considering, for example, the cultivation of crops. The SiSPAT model produced results consistent with the region and the situations analyzed. The two methods, the SiSPAT simulation and the PC method, were equivalent even for cases where the water table was at different depths.
6,692
2019-01-01T00:00:00.000
[ "Environmental Science", "Engineering" ]
Damping Coefficient Induces Stochastic Multiresonance in Bistable System with Asymmetric Dichotomous Noise Stochastic resonance (SR) and stochastic multiresonance (SMR) phenomena are investigated numerically as a function of the underdamped and overdamped coefficients in a bistable system with asymmetric dichotomous noise. Using an efficient numerical simulation of the asymmetric dichotomous noise and the fourth-order Runge-Kutta algorithm, we compute the system response, the averaged power spectrum, and the signal-noise-ratio (SNR), which can serve as measures of the existence of SR and SMR, and we analyze the effects of the damping coefficient on these three characteristics. First, the periodically asymmetric distribution of the particle's hopping between the two potential wells in the system response is gradually weakened as the damping coefficient is increased from the underdamped to the overdamped regime, while a periodically asymmetric distribution is still displayed in the overdamped case. Second, the averaged power spectrum exhibits multiple sharp peaks, and the highest peak first increases and then decreases as the damping coefficient is increased from the underdamped to the overdamped regime. Finally, the SNR versus the damping coefficient, obtained for various system and noise parameters, shows multiple peaks and valleys, which illustrates clear SMR phenomena in the bistable system with asymmetric dichotomous noise. Introduction Noise was traditionally regarded as an ingredient with a purely negative effect on a system. Nevertheless, the appearance of the SR phenomenon changed this view, since noise can help a disordered system become well organized. For this reason, SR has been investigated extensively by many scientists. The SR phenomenon was first proposed to explain the periodicity of the Earth's ice ages [1]. It was then observed in many fields in nature, for instance physics, chemistry, biology, ecology, graphics, and so on [2]. By now there are abundant publications on SR building on the original discovery, and the concept of SR has been developed in a broad sense, for example SMR, quantum SR, autonomous SR, aperiodic SR, coherence resonance, and logical SR [3][4][5][6][7][8][9][10][11][12][13][14][15][16]. SR and SMR are thought to arise from a kind of cooperation among a nonlinear system, random noise, and a periodic force. Various characteristics can be used to detect the emergence of SR and SMR, such as the amplitude of the system response, the output amplitude gain, the averaged power spectrum, the spectral power amplification, the signal-noise-ratio, the residence time distributions, and the information measure and probability of detection [2]. For example, Ray and Sengupta measured SR in an underdamped bistable system by the power spectrum [17], Xu et al. found SR in a bistable system with Lévy noise by the SNR [18], and Zhang et al. demonstrated SMR in a linear system driven by multiplicative polynomial dichotomous noise by the SNR [19]. In this paper, the amplitude of the system response, the averaged power spectrum, and the SNR are the main indicators used to demonstrate the existence of SR and SMR phenomena.
As is well known, SR and SMR phenomena in overdamped and underdamped bistable systems have been researched widely, both theoretically and experimentally, because of their applications [17,[20][21][22][23]. However, fewer investigators have paid attention to SR and SMR in the overdamped and underdamped bistable system simultaneously. Thus, we will focus on SR and SMR phenomena versus the damping coefficient. Moreover, most studies focus on Gaussian noise or white noise for its simplicity; yet these are only idealized models of real noises and cannot represent noise with exponential correlation. Asymmetric dichotomous noise, in contrast, is a non-Gaussian colored noise that is widely used in several fields because of its tractability: it jumps between two fixed values, one positive and one negative, its waiting times follow an exponential distribution, and this kind of jumping drives the system out of equilibrium more easily than Gaussian noise does [24]. In addition, SR and SMR phenomena with dichotomous noise have so far been studied largely in theory; Jin, Li, and other authors have done notable theoretical work on SR in typical systems with dichotomous noise [25][26][27], whereas numerical-simulation studies of SR and SMR are comparatively scarce. Consequently, in this paper we explore SR and SMR phenomena versus the damping coefficient in a bistable system with asymmetric dichotomous noise numerically. This paper is organized as follows. In Section 2, the bistable system and the asymmetric dichotomous noise are introduced. In Section 3, SR and SMR phenomena are studied from three aspects. In Section 3.1, the SR phenomenon is investigated through the transitions between the two potential wells in the response of the system driven by the asymmetric dichotomous noise. In Section 3.2, the averaged power spectrum is computed numerically and the signature of the SR phenomenon is clearly displayed. In Section 3.3, we obtain the SNR versus the damping coefficient; the multiple peaks in the SNR demonstrate the existence of SMR phenomena in the bistable system with asymmetric dichotomous noise. Bistable System with Asymmetric Dichotomous Noise We consider a bistable system with a periodic signal driven by asymmetric dichotomous noise, described by a Langevin equation of the form d²x/dt² + γ dx/dt + dU(x)/dx = s(t) + ξ(t), where γ is the damping coefficient. The system is an underdamped bistable system when γ < 1 and an overdamped bistable system when γ > 1. U(x) is the double-well potential, defined as U(x) = −a x²/2 + b x⁴/4 (a > 0, b > 0), which has two stable points at x± = ±√(a/b), and the height of the potential barrier is ΔU = a²/(4b). When the parameters are chosen as a = b = 1, it is the standard double-well potential, the two stable fixed points are x± = ±1, and ΔU = 1/4. s(t) is the periodic signal, which can be written as A cos(Ωt + φ), where A and Ω are, respectively, the amplitude and forcing frequency of the periodic signal. ξ(t) is the asymmetric dichotomous noise, which jumps between its two values with a mean waiting time in each state; the switching rates are the reciprocals of these mean waiting times.
The master equation of this noise can be described as with the initial condition and the total probability condition: Then the solution of the master equation is And the stationary solution of (2) can be easily obtained as Then the stationary mean which can be obtained by using ( 4) is and the stationary correlation function is Here the mean function and the correlation function satisfy the following conditions: where is the noise intensity and is the noise correlation time.Thus, the noise intensity of the asymmetric dichotomous noise can be computed as Finally, the rates of the switching and of the asymmetric dichotomous noise are computed in terms of the state values and the noise intensity.Also the conditional probabilities and can also be computed as follows: And the numerical series of the asymmetric dichotomous noise can be obtained by the above formulas and the following relevant algorithm.The algorithm procedure of the asymmetric dichotomous noise can be described as follows.Firstly, the initial state of the asymmetric dichotomous noise can be supposed to be 0 = , ( 0 = ); a series of random numbers ( = 0, 1, 2, . ..) in the interval [0, 1] are generated in computer, which are compared with the conditional probability or .Then we consider that if 0 < ( 0 < ), we will ascertain 1 = , or else 1 = .Next, we also should decide that if 1 < ( 1 < ), we will ascertain 2 = , or else 2 = .And keep doing this.Finally, a series of random numbers are got in computer.Moreover, the timestep should be much lesser [28]. Stochastic Multiresonance We are devoted to researching SR and SMR phenomena of this bistable system induced by the change of the damping coefficients in this section.According to this complex twodimensional system, numerical simulation method is a good research approach, so the specific numerical simulation program is shown in regard to SR and SMR in bistable system with asymmetric dichotomous noise.And we study the system responses, the averaged power spectrum, and the SNR which can be used to reflect SR phenomenon. For the system responses, the particle oscillates at the bottom of the one potential well at a level of the certain damping coefficients, and then it oscillates between the two potential wells with the increasing and decreasing of the damping coefficients.That is to say, SR phenomenon can be discovered with the damping coefficients of the system.And the averaged power spectrum displays the relationship between the system, the cosine signal, and the asymmetric dichotomous noise for different underdamping and overdamping coefficients, from which we can find SR phenomenon.Moreover, we also devote ourselves to computing the SNR as one important symbol of SR phenomenon.If the SNR gives rise to one or more extreme values as we modulate the damping coefficients from the underdamping to overdamping at a certain range of parameters, it shows that the bistable system with asymmetric dichotomous noise has presented SR phenomenon or SMR phenomena. System Responses. In order to obtain the responses of this system, the bistable system is transformed into two onedimensional systems as and then the system responses are calculated by the discretization of the above form and the fourth-order Runge-Kutta algorithm. 
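The following sketch (Python) illustrates the simulation procedure described above: it generates a sample path of the asymmetric dichotomous noise with the stated two-state switching rule and then integrates the driven bistable system with a fourth-order Runge-Kutta step. The Langevin form, the parameter values, and the stay-probability expression exp(-dt/tau) are assumptions made for this sketch; it is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def dichotomous_noise(n_steps, dt, a=3.0, b=-1.0, tau_a=1.0, tau_b=1.0):
    """Two-state (a, b) asymmetric dichotomous noise. At each small step the current state
    is kept with probability exp(-dt/tau), otherwise it switches, following the
    comparison-with-a-random-number rule described in the text. tau_a, tau_b are assumed values."""
    p_stay_a = np.exp(-dt / tau_a)
    p_stay_b = np.exp(-dt / tau_b)
    xi = np.empty(n_steps)
    state = a
    for i in range(n_steps):
        xi[i] = state
        r = rng.random()
        if state == a:
            state = a if r < p_stay_a else b
        else:
            state = b if r < p_stay_b else a
    return xi

def simulate_response(gamma, A=1.0, Omega=0.03, dt=0.05, n_steps=100_000, noise_kw=None):
    """Integrate x'' + gamma*x' + dU/dx = A*cos(Omega*t) + xi(t) with RK4,
    using U(x) = -x**2/2 + x**4/4 (a = b = 1). The Langevin form is an assumed standard one,
    and the dichotomous noise is held constant within each step."""
    xi = dichotomous_noise(n_steps, dt, **(noise_kw or {}))
    def f(t, y, xi_t):
        x, v = y
        dv = -gamma * v + x - x**3 + A * np.cos(Omega * t) + xi_t
        return np.array([v, dv])
    y = np.array([1.0, 0.0])          # start near the right well x = +1
    traj = np.empty(n_steps)
    for i in range(n_steps):
        t = i * dt
        k1 = f(t, y, xi[i])
        k2 = f(t + dt / 2, y + dt / 2 * k1, xi[i])
        k3 = f(t + dt / 2, y + dt / 2 * k2, xi[i])
        k4 = f(t + dt, y + dt * k3, xi[i])
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = y[0]
    return traj

x = simulate_response(gamma=0.2)
print("fraction of time spent in the right well:", np.mean(x > 0))
```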
We fix the parameters of the system, the signal, and the asymmetric dichotomous noise as = 0.2, = 0.03, = 0.04, = 0.01, = 3.0, = −1.0.And the effects of the different damping coefficients including the underdamping coefficients and overdamping coefficients on the SNR are investigated.In Figure 1(a), the underdamping coefficients of this system is 0.2, and we can find that the particle oscillates in the + = 1 around, surmounts the potential barrier, jumps into and oscillates around the − = −1.Then the particle oscillates in the − = −1 around, surmounts the potential barrier, and oppositely jumps into and oscillates around the + = 1.For some time, the periodic transition of the particle between the two potential wells and the uniform asymmetry are obviously shown.With the increasing of the underdamping coefficients to 0.6, Figure 1(b) also clearly reflects the periodic transition of the particle between the two potential wells and the uniform asymmetry.When the underdamping coefficients of this system is increased to 1.0, the underdamped bistable system transforms into the overdamped one, the transition of the particle between the two potential wells and the asymmetry of the distribution can also be found but the clear periodicity is not occurred, and it is important that the particle remains mostly in the − = −1.Finally, when the overdamping coefficients are continually increased to 1.4, compared with the case of Figure 1(c), the transition between the two potential wells still remains, and the most important thing is that the asymmetry of the distribution is more severe and the vast majority of the states are located in the − = −1, yet the initial state is located in the + = 1 in Figure 1(d). According to three subfigures of Figure 2, we pay attention to the overdamped bistable system and the asymmetry of the system response.In Figure 2(a), the overdamping coefficient is adjusted to 1.3, and the asymmetric dichotomous noise parameter is increased to = 4.0, = −2.0.It is illustrated that the particle oscillates between the two potential wells, the intense asymmetry of the distribution exists, and most states are located in the − = −1; meanwhile the random vibration is intense compared with Figure 1(d).In Figure 2(b), we interchange the state value of the asymmetric dichotomous noise = 2.0, = −4.0; it is clear that the asymmetry of the distribution and the action of the random vibration still exist; nevertheless most states are located in the + = 1 conversely, compared with Figure 2(a).Moreover, with the asymmetric dichotomous noise parameter decreased to = 3.0, = −1.0, Figure 2(c) shows us the more regular asymmetry of the distribution as the action of the random vibration is weak. Averaged Power Spectrum. Power spectrum can reflect a kind of the coordination between the signal and the noise.But it includes a large number of random factors.So we make use of the method of average to eliminate the random factors and then obtain the averaged power spectrum which reflects the coefficient properties of the cosine signal and the asymmetric dichotomous noise.Accordingly, whether the damping coefficient including the underdamping and the overdamping can induce SR phenomenon in terms of the averaged power spectrum is the cure of our research in this section. 
The power spectrum density can be obtained by the following formula of the Fourier transform of the autocorrelation function: Next, it is found that ensemble averaging on 600 power spectrum trajectories is more sufficient to achieve the averaged power spectrum.Thus several averaged power spectrum figures are obtained as the following for the different damping coefficient. In Figure 3(a), we choose the damping coefficient = 0.05, which means that the system becomes the underdamped bistable system with the fixed parameters = 1.0, = 3.0, = −1.0, = 0.001, = 1.0, = 0.03.And it is easily observed that three distinct peaks appear on the averaged power spectrum due to the effect of the periodicity of the periodic signal and the asymmetry of the asymmetric dichotomous noise on the system, although the underdamping coefficient is smaller.They are marked 1 , 2 , and 3 from left to right so as to reveal the meaning that the figure contains conveniently and particularly.Between the three peaks, the value of the middle peak 2 is the highest; it is almost double 3 's, and the value of the left peak 1 is the lowest.Then the underdamping coefficient is added to 0.1 as other parameters are invariant, and three distinct peaks are still observed in Figure 3(b).Here the value of the highest peak 2 is rapidly increased, but the value of 3 is decreased and the value of 1 is decreased slightly.And as it is increased to 0.3, 0.6, 0.9, 1.0, and 1.3, the similar increase and decrease are clearly present in Figures 3(c), 3(d), 3(e), 3(f), and 3(g).In general, when the damping coefficient is increased gradually from the underdamping to overdamping, the value of 2 that is the highest peak increases sharply and then decreases slowly, the value of 3 that is the second highest peak decreases all the time, and the value of 1 that is the lowest peak decreases firstly and then increases slowly and disappears last.And all those variations of the three peaks with the change of damping coefficient reveal SR phenomena. 3.3. Signal-Noise-Ratio.SNR is a typical method to measure SR and SMR phenomena.And in this section as a function of damping coefficient it displays the conspicuous SR and SMR phenomena.At present, there are many several numerical simulation methods about SNR.We employ the following formula [29]: where () and () are the output power spectrum of the periodic signal and the asymmetric dichotomous noise, respectively.And in the following figures, the peaks of the SNR phenomenon are marked as 1 , 2 , 3 , and 4 similarly in order to represent those figures conveniently and particularly. In Figure 4, SNR as the functions of the damping coefficient for the different amplitudes of signal and the fixed system = 1.0, 0.7, 0.4, = 3.0, = −1.0, = 0.001, = 1.0, = 0.03, displays the conspicuous SR and SMR phenomena.It is clearly showed that, as the damping coefficient increases from 0.04 to 1.4, the SNR firstly increases sharply, next decreases slowly, and then increases more slowly.The nonmonotonic behaviors of the SNR obviously reveal the occurrence of SMR phenomenon.At the same time, the value of the peak descends slowly and then rapidly; also the value of the valley descends always and it moves towards the left, when the amplitude of the signal decreases from 1.0 to 0.7 and then to 0.4. 
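As a companion to the spectral analysis described above, the sketch below (Python) estimates an averaged power spectrum from an ensemble of simulated trajectories and forms a signal-noise-ratio as the spectral power at the driving frequency divided by the surrounding noise background. The exact SNR formula used by the authors is not legible here, so this ratio-of-powers estimator, the ensemble size, and the background window are assumptions; `simulate_response` refers to the earlier sketch.

```python
import numpy as np

def averaged_power_spectrum(gamma, n_traj=50, n_steps=2**14, dt=0.05, **kw):
    """Ensemble-averaged one-sided power spectrum of the system response x(t)."""
    spectra = []
    for _ in range(n_traj):
        x = simulate_response(gamma, dt=dt, n_steps=n_steps, **kw)  # from the earlier sketch
        X = np.fft.rfft(x - x.mean())
        spectra.append(np.abs(X) ** 2 / n_steps)
    freqs = np.fft.rfftfreq(n_steps, d=dt) * 2 * np.pi              # angular frequencies
    return freqs, np.mean(spectra, axis=0)

def snr_at_drive(freqs, spectrum, Omega, half_window=5):
    """Signal-noise-ratio taken as the spectral power in the bin nearest Omega divided by the
    mean background power of neighbouring bins (an assumed, commonly used estimator)."""
    k = int(np.argmin(np.abs(freqs - Omega)))
    signal = spectrum[k]
    neighbours = np.r_[spectrum[max(k - half_window, 1):k], spectrum[k + 1:k + 1 + half_window]]
    return signal / neighbours.mean()

freqs, spec = averaged_power_spectrum(gamma=0.3)
print("SNR near the driving frequency:", snr_at_drive(freqs, spec, Omega=0.03))
```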
In Figure 5, we observe the SNR versus the damping coefficient for the different forcing frequency of the periodic signal, and four subfigures represent the changes of SNR with the fixed system parameter = 1.0, = 3.0, = −1.0, = 0.001, = 0.03.In Figure 5(a), the obvious SR phenomenon is easily observed, but SR phenomenon demonstrates to us that there are two peaks; in other words, it is SMR phenomenon.And as the forcing frequency of the periodic signal is reduced from 1.5 to 1.4 and to 1.3, the peaks 1 and 2 heighten and move right gradually and they become more and more evident, while it barely heighten when is reduced from 1.4 to 1.3 according to 2 .And when it is decreased from 1.3 to 0.7, the similar move, increase, and decrease of the peaks are present, respectively, in Figures 5(b), 5(c), and 5(d). In short, as the forcing frequency of the periodic signal constantly lessens from 1.5 to 0.7, the SNR demonstrate the variation between two peaks and one peak, which can declare that the noteworthy SMR phenomena exist in the bistable system with asymmetric dichotomous noise. In Figure 6, similarly we focus on the SNR versus the damping coefficient for the different state of the asymmetric dichotomous noise with the fixed system parameters = 1.0, = 0.03, = 1.3, = −1.0, = 0.001.According to the states of the asymmetric dichotomous noise, the state is fastened and the state is changed continually.When the state is selected as = 4.0 and the other state is = −1.0, the noise is still the asymmetric dichotomous noise.It can be easily found that there are two peaks in the SNR, which go up increasingly but do not move toward left and right, when the state is reduced to 3.0 and then to 2.0.However, three distinct peaks turn up as the state is changed to 0.5.During the process of the decrease of the state , the SNR show us two peaks and three peaks, which demonstrates SMR phenomena. 
The effect of the different asymmetric states of dichotomous noise on the SNR versus the damping coefficient is just investigated for SMR phenomena.Now we research how the symmetric states of dichotomous noise influence SMR phenomena versus the damping coefficient.And the parameter of the periodic signal and the noise is fastened = 1.0, = 0.03, = 1.3, = 0.001.By the clear manifestation of Figure 7, it is found that the SNR versus the damping coefficient present multiple peaks with the different symmetric states of dichotomous noise.When it is = 3.0 = −3.0,there are a peak and a valley.As the states are lessened to = 2.0 = −2.0, the peak and the valley rise up but their positions do not shift to the left or the right; meanwhile another peak appears in the SNR.When the states are continuously reduced to = 1.0 = −1.0, the peak 1 and the valley go up and similarly their positions do not shift left and right; the peak 2 yet develops into two peaks.And then the three peaks 1 , 2 , and 3 get more remarkable, when the states are lessened lower = 0.5 = −0.5.So SMR phenomena are always present, when the state of the symmetric dichotomous noise is increased constantly.Finally, whether the noise intensity of the asymmetric dichotomous noise can cause the prominent influence on the peak of the SNR is researched.Figure 8 shows us how the SNR changes based on the fixed parameters = 1.0, = 0.03, = 1.3, = 0.5, = −1.0 with the increase of the noise intensity.It is demonstrated that three noteworthy peaks 1 , 2 , and 3 appear in the SNR, when the noise intensity is 0.001.And as it is increased to 0.01, the peak 2 disappears, and the peaks 1 , 3 descend.There are two peaks in the SNR, which are marked again as 1 , 2 .Then we continue to improve the noise intensity to 0.2 and find that the two peaks 1 and 2 both decline gradually.Until the noise intensity is increased to 0.3, the peaks 1 and 2 decline gradually once again, but it is particular that the peak 2 disappears.There are a peak and a valley in the SNR with = 0.03.And all give evidence of the existence of SMR phenomena. Furthermore, some situations in Figures 6, 7, and 8 are different from Figures 4 and 5 Discussion and Conclusion Stochastic resonance (SR) and stochastic multiresonance (SMR) phenomena versus the damping coefficient in bistable system with asymmetric dichotomous noise have been researched numerically in this paper.The system response, the averaged power spectrum, and the signal-noise-ratio (SNR) have been applied to investigate and demonstrate SR and SMR phenomena.Firstly, by the fourth-order Runge-Kutta numerical algorithm, it is found that the asymmetric dichotomous noise can induce the uniform asymmetry and the irregular asymmetry of the system response in the bistable system with the appropriately fixed parameters, as the damping coefficient is increased gradually from the underdamping 0.2 to the overdamping 1.4.Also the uniform asymmetry can be shown in the system response of the overdamped system, after the states values parameters of the asymmetric dichotomous noise are adjusted properly.Then we obtain the averaged power spectrum by the Fourier transform of the autocorrelation function.It is observed that there are three obvious peaks in the averaged powering spectrum.And when the damping coefficient is increased gradually from the underdamping to overdamping, the three peaks generate various transformations.The two parts above reveal SR phenomena in bistable system with asymmetric dichotomous noise. 
Finally, the SNR versus the damping coefficient is studied. Several peaks appear in the SNR under some circumstances. A gradual increase of the amplitude of the periodic signal makes these peaks grow and shifts their positions toward the right. A decrease of the forcing frequency of the periodic signal makes the two peaks of the SNR rise and then decline, with one peak eventually disappearing, while the peak positions move toward the right. One peak of the SNR develops into two and then three peaks as the state values of the asymmetric and symmetric dichotomous noise decrease. Likewise, reducing the noise intensity of the asymmetric dichotomous noise can give rise to two or three peaks in the SNR. Furthermore, the state values and the intensity of the asymmetric dichotomous noise do not shift the positions of the SNR peaks to the left or right. These various behaviors of the SNR peaks versus the damping coefficient demonstrate the existence of SMR phenomena in the bistable system with asymmetric dichotomous noise. Conflict of Interests The authors declare that there is no conflict of interests regarding the publication of this paper. Figure 4: SNR versus the damping coefficient for the different amplitudes of the signal. Figure 5: SNR versus the damping coefficient for the different forcing frequencies of the periodic signal. Figure 6: SNR versus the damping coefficient for the different state values of the asymmetric dichotomous noise. Figure 7: SNR versus the damping coefficient for the different state values of the symmetric dichotomous noise. Figure 8: SNR versus the damping coefficient with the different noise intensities of the asymmetric dichotomous noise.
5,325.6
2014-10-19T00:00:00.000
[ "Physics" ]
Research on charging strategy of electric vehicle considering user and load curve. With the increasing popularity of electric vehicles, the disordered charging of large numbers of electric vehicles will have a great impact on the safe operation of the regional distribution network. In order to address the security problems that may occur in the power grid, this paper uses a time-of-use pricing period-division method for EV charging to meet the needs of EV users. Based on this method, a multi-objective optimization model is established, which takes the electric vehicle charging capacity and power as constraints and takes the minimum user charging cost and the minimum load curve variance as objectives. The model is then solved by the non-dominated sorting genetic algorithm (NSGA-II), and the optimal compromise solution is extracted using fuzzy set theory. Finally, the correctness of the proposed model is verified by an example. Introduction With the country's strong support for the development of electric vehicles and the increasing awareness of environmental protection, the number of electric vehicles will increase dramatically in the future. However, with existing technology, charging through the distribution network is the main charging method for EVs. This will increase the power supply pressure on the distribution network and become an important new load for the distribution system. At the same time, the large-scale development and popularization of electric vehicles will inevitably have a major impact on the distribution network, causing problems such as transformer overloading, voltage drops, and increased peak-to-valley differences. Therefore, the impact of large-scale electric vehicle charging and discharging behavior on the power grid, and the corresponding charging optimization strategies, have become a current research hotspot [1][2][3]. The optimization of electric vehicle charging strategies mainly involves two parties: EV users and distribution networks. Reference [4] proposed an optimization model and method for the peak-valley electricity price periods, addressing the impact of electric vehicle charging and discharging on peak shaving and valley filling of the power grid, and solved the optimization problem of the peak-valley price periods. Reference [5] proposes to use dynamic interpolation to solve an objective with the minimum load peak-valley difference as the target, so as to carry out orderly charging control; however, these studies take only the perspective of the power grid and do not consider the interests of users. Reference [6], from the perspective of the operator, with the goal of maximizing charging-station profit, establishes a two-stage model to study the optimization strategy of electric vehicle charging. Reference [7] establishes an optimization model aiming at the minimum charging cost for the user and the earliest initial charging time of the battery; although user satisfaction is fully considered, the peak-to-valley load difference is not effectively reduced. Thus, although the above research considers the interests of users or charging stations, it can increase the peak-to-valley difference of the power grid and affect its safe operation.
This paper takes the conventional charging mode of electric vehicle charging stations as the research object, comprehensively considers the economics of EV user charging and the safe operation of the power grid, and establishes a multi-objective charging optimization model based on the minimum user charging cost and the minimum variance of the load curve. The non-dominated sorting genetic algorithm (NSGA-II) is adopted to solve the established model and obtain the Pareto solution set of the multi-objective optimization problem, and a partial fuzzy membership function is then applied to the Pareto solution set to obtain the optimal compromise solution. Finally, an example is given to verify the effectiveness of the proposed charging strategy. The TOU price of the distribution network is set for the non-special load in a certain area. After electric vehicles are connected to the grid, the local distribution network will face a situation in which the division of the peak and valley periods of the TOU electricity price differs from the fluctuation of the actual load curve [7][8]. For example, when the grid electricity price is in the normal period, the load curve of the local distribution network may be at a peak. If EV users charge under the guidance of such a price signal, a large number of EVs may be charged during the peak load period, causing the local distribution network to exceed its peak. This is not conducive to the safe operation of the power grid, reduces the utilization rate of equipment, and increases the network losses of the local distribution network [12][13]. Therefore, on the basis of the original price periods, the ordinary period is further subdivided, and the TOU electricity price division for electric vehicles is obtained as shown in Table 1. Charging optimization model This paper mainly studies centralized charging stations for electric vehicles. When an EV user connects to the charging station, the intelligent charging pile collects the current remaining battery capacity, battery type, total capacity and other information from the EV battery management system. The user needs to set the vehicle pick-up time and the desired state of charge at the end of charging. At the same time, the original load curve of the local distribution network for that day can be forecast from the historical load [9]. This paper divides one day into 24 time periods of 1 h each, takes the original load of the local network in each time period as given, and assumes that the battery capacity of the electric vehicles is Q. Objective function In order to reduce the adverse impact of the electric vehicle charging load on the overall operation of the power grid, the load variance is used as one objective function; at the same time, considering the economics of EV user charging, the minimum user charging cost is set as the other objective function. Model solving method based on the NSGA-II algorithm Traditional genetic algorithms are mostly used to solve single-objective optimization problems, and their performance on multi-objective optimization problems is not ideal, so they have difficulty solving an electric vehicle charging optimization model with two different optimization goals. Therefore, the NSGA-II algorithm is used to solve it.
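Since the two objective functions are only described in words above, the sketch below (Python) gives one plausible formalization: the variance of the total load (base load plus aggregate EV charging) over the 24 periods, and the total charging cost under a TOU tariff. The tariff values, base load, and schedule encoding are placeholders chosen for illustration, not data from the paper; the 5 kW charger power is the value used in the case study.

```python
import numpy as np

HOURS = 24
P_CHARGER = 5.0          # charging power per vehicle, kW

def total_load(base_load_kw, schedule):
    """Total hourly load: base load plus aggregate EV charging.
    `schedule` is an (n_vehicles, 24) 0/1 array, 1 = vehicle charges in that hour."""
    ev_load = schedule.sum(axis=0) * P_CHARGER
    return np.asarray(base_load_kw, dtype=float) + ev_load

def load_variance(base_load_kw, schedule):
    """Objective 1: variance of the 24-hour total load curve."""
    return float(np.var(total_load(base_load_kw, schedule)))

def charging_cost(schedule, tou_price):
    """Objective 2: total user charging cost, summing energy (kWh) times the TOU price per hour."""
    energy = schedule * P_CHARGER * 1.0          # kWh charged in each hour
    return float((energy * np.asarray(tou_price, dtype=float)).sum())

# Placeholder data: a sinusoidal base load, a 3-level TOU tariff, 100 vehicles, random schedules.
rng = np.random.default_rng(1)
base = 800 + 300 * np.sin(np.linspace(0, 2 * np.pi, HOURS))
hours = np.arange(HOURS)
tou = np.where(hours < 7, 0.3, np.where(hours < 18, 0.8, 1.2))
schedule = (rng.random((100, HOURS)) < 0.2).astype(int)
print("load variance:", load_variance(base, schedule))
print("charging cost:", charging_cost(schedule, tou))
```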
Compared with the traditional GA, the NSGA-II algorithm introduces a fast non-dominated sorting method, the concept of crowding distance, and an elitist retention strategy, so the resulting solutions have better convergence and robustness. The calculation steps of the NSGA-II algorithm are as follows: (1) Initialize the population. A population of 100 individuals is randomly generated within the range of the decision variables. (2) Non-dominated sorting and crowding calculation. The newly formed population is sorted by non-domination into different ranks, and the crowding distance is then calculated within each rank. (3) Selection, crossover and mutation. The selection operator selects individuals based on non-dominated rank and crowding distance. The SBX (simulated binary crossover) operator and the polynomial mutation operator perform crossover and mutation on the population to form a new offspring population. (4) Population merging. The newly generated offspring population and the parent population are combined to form a new population of size 2N. Finding the best compromise The solution obtained by the NSGA-II algorithm shown in Figure 1 is a set of Pareto solutions, and in actual operation generally only one solution is selected. Here, fuzzy theory is used to find the optimal compromise solution [11]. Because the optimization goals of the model are minimization problems, a partial (descending) fuzzy membership function is selected, and each solution in the solution set is represented by the value of its fuzzy satisfaction function. The closer the fuzzy satisfaction value of a solution is to 1, the closer the corresponding objective function values are to the optimum of the model. Simulation parameter setting In order to verify the optimization model and the conclusions of this paper, we take all charging stations in a certain region (population of about 1 million, with electric vehicles predicted to amount to 2% of the total population) as an example. The typical daily load curve of this area is shown in Figure 2. The arrival time of the electric vehicles approximately obeys a normal distribution with mean 9 and variance 0.5. The initial SOC of each vehicle obeys a uniform distribution on (0.2-0.6). The battery capacity of each electric vehicle is 24 kWh, and the charging power of the charger is 5 kW. Because the degree to which users follow the charging strategy is unknown, a response coefficient K of the user charging strategy is introduced; K is the percentage of all private electric cars that respond to the charging strategy. Analysis of simulation results The population size is 100, the maximum number of iterations is 500, the crossover probability is 0.8, and the mutation probability is 0.2. The initial user response coefficient K is 0.5. MATLAB software is then used to solve the multi-objective optimization model and obtain a set of results. As shown in Figure 3, the Pareto solution set obtained is relatively smooth, which shows the effectiveness of the algorithm for solving the model. As the charging cost increases, the load standard deviation decreases, because when the charging cost is very low a large number of users charge during the valley price period, producing a new load peak, which causes large fluctuations in the grid and increases the load variance.
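The sketch below (Python) illustrates the fuzzy-satisfaction selection step described above for a set of Pareto-optimal points with two minimization objectives. The linear descending membership function and the equal weighting of the two objectives are common choices assumed here; the paper's exact formulas are not reproduced in this copy, and the Pareto points are placeholders.

```python
import numpy as np

def best_compromise(pareto_objectives):
    """Pick the best compromise solution from a Pareto set of minimization objectives.
    pareto_objectives: (n_solutions, n_objectives) array.
    Each objective gets a descending linear membership: 1 at its minimum, 0 at its maximum;
    the normalized sum of memberships is the satisfaction, and the max-satisfaction row wins."""
    f = np.asarray(pareto_objectives, dtype=float)
    f_min, f_max = f.min(axis=0), f.max(axis=0)
    mu = (f_max - f) / np.where(f_max > f_min, f_max - f_min, 1.0)  # membership in [0, 1]
    satisfaction = mu.sum(axis=1) / mu.sum()                        # normalized satisfaction
    return int(np.argmax(satisfaction)), satisfaction

# Placeholder Pareto front: (charging cost, load variance) pairs.
front = np.array([[120.0, 9000.0],
                  [150.0, 7000.0],
                  [190.0, 6200.0],
                  [260.0, 6000.0]])
idx, sat = best_compromise(front)
print("best compromise:", front[idx], "satisfaction values:", np.round(sat, 3))
```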
In order to study the effect of charging optimization strategy on different user responsiveness, 25%, 50% and 75% of user responsiveness are optimized and simulated respectively, and the results are shown in Table 2. It should be noted that when the user's response degree is 0, it corresponds to the disordered charging of the electric vehicle, that is, as long as the electric vehicle needs to be charged, it is connected to the charging pile to charge at the maximum power until the desired SOC is achieved. It can be seen from table 2 that the load variance of the power system is the largest when charging out of order, which leads to the decrease of the utilization rate of power resources, and will have a negative impact on the power grid. At the same time, the charging cost of users is the highest. The higher the response coefficient of the user's charging strategy is, the higher the vehicle's response to the charging strategy is, the smaller the variance of the total load is, reducing by 9.5%, 18.9% and 26.5% respectively, which shows that the charging strategy used in this model can smooth the load curve to a certain extent and reduce the variance rate of the load curve. At the same time, the user's charge cost decreases with the increase of the response degree of the charging strategy, which is 18.5%, 38.2% and 57.2% lower than the disordered charging. It can be seen that the charging optimization strategy can significantly reduce the user's charge cost and improve the user's satisfaction. In conclusion, with more and more vehicles responding to the charging strategy, the overall load variance of the system will gradually reduce, and the user charging cost will also reduce, which shows that the established optimization model can not only make the system run more safely, but also improve the user satisfaction. Conclusion Aiming at the problem of electric vehicle charging optimization, this paper presents a multi-objective charging optimization model, which considers the user's expected SOC and charging power constraints, and takes the minimum user's charging cost and the minimum load curve variance as the optimization objective. The simulation results obtained by studying the response of different charging strategies verify the feasibility and accuracy of the model, it is proved that the strategy can effectively reduce the charging cost and load variance, and enhance the economy and stability of the system operation. This paper assumes that all electric vehicles are of one type, and later research can be carried out for different types of electric vehicles.
2,823.8
2021-01-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Genetic elimination of dopamine vesicular stocks in the nigrostriatal pathway replicates Parkinson's disease motor symptoms without neuronal degeneration in adult mice The type 2 vesicular monoamine transporter (VMAT2), by regulating the storage of monoamine transmitters into synaptic vesicles, has a protective role against their cytoplasmic toxicity. Increasing evidence suggests that impairment of VMAT2 neuroprotection contributes to the pathogenesis of Parkinson's disease (PD). Several transgenic VMAT2 mouse models have been developed; however, these models lack specificity in their targeting of the monoaminergic system. To circumvent this limitation, we created VMAT2-KO mice specific to the dopamine (DA) nigrostriatal pathway to analyze VMAT2's involvement in the DA depletion-induced motor features associated with PD and to examine the relevance of DA toxicity in the pathogenesis of neurodegeneration. Adult VMAT2 floxed mice were injected in the substantia nigra (SN) with an adeno-associated virus (AAV) expressing the Cre recombinase, allowing VMAT2 removal in DA neurons of the nigrostriatal pathway only. VMAT2 deletion in the SN induced both DA depletion, exclusively in the dorsal striatum, and motor dysfunction. At 16 weeks post-injection, the motor symptoms were accompanied by decreased food and water consumption and weight loss. However, despite accelerated mortality, degeneration of nigrostriatal neurons was not observed in this model during this time frame. This study highlights a non-cytotoxic role of DA in our genetic model of VMAT2 deletion exclusively in nigrostriatal neurons. The MPTP (1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine) model is currently used. MPTP's metabolite MPP+, taken up by DA neurons, inhibits the mitochondrial respiratory chain, induces ROS formation and elicits apoptotic neuronal death 8. In both animals and humans, MPTP administration specifically induces dopaminergic neuronal loss and provokes parkinsonian motor symptoms [12][13][14]. One endogenous molecule thought to have cytotoxic properties similar to MPTP is DA itself. When free in the cytosol, DA auto-oxidation or its oxidation by monoamine oxidase (MAO) results in ROS production, renders cells more vulnerable to other toxins [15][16][17] and ultimately provokes apoptotic cell death 18,19. The type-2 vesicular monoamine transporter (VMAT2), a neuronal H+-ATPase antiporter, is of particular interest when studying DA neurodegeneration given its protective role against endogenous and exogenous toxicants 20,21. The primary role of VMAT2 is to store monoamines into synaptic vesicles and to regulate stimulated quantal monoamine release 22,23, thereby protecting monoamines from cytoplasmic oxidation. Indeed, by regulating the amount of DA that accumulates in the cytosol, VMAT2 protects cells from the toxicity of their own neurotransmitter 24, suggesting that VMAT2 can modulate the susceptibility of DA neurons to degeneration. Several different transgenic VMAT2 mouse models have been developed to study the role of VMAT2 in monoaminergic signalling. Constitutive VMAT2 knockout (VMAT2-KO) mice, which die within a few days after birth, displayed a 90-100% reduction in the total amount of monoamines in the entire brain, confirming the impairment of monoamine storage and release induced by VMAT2 removal [25][26][27].
Moreover, constitutive VMAT2 heterozygote (VMAT2-HET) mice, viable into adulthood with a 30-40% decrease in brain monoamine levels, showed increased vulnerability of DA neurons to both MPTP and L-DOPA toxicity 26,28,29. A recombinant event leading to the generation of a hypomorphic allele gave rise to VMAT2-knockdown (VMAT2-KD) mice, which display a 95% decrease in VMAT2 expression and function and a 70-90% decrease in brain monoamines. VMAT2-KD mice demonstrated an increase in DA- and MPTP-mediated toxicity that was sufficient to induce neurodegeneration of the DA nigrostriatal pathway 29,30. In human studies, VMAT2 gain-of-function haplotypes have been correlated with protection against sporadic PD 31. Moreover, DA uptake is reduced in PD patients, suggesting that there may be an alteration of VMAT2-mediated vesicular filling in PD 32. Although current studies have outlined the relevance of VMAT2 to understanding PD pathogenesis and neurodegeneration, VMAT2 deletion models lack specificity. Indeed, VMAT2-KO mice only survive for a few days after birth 25,27, making it difficult to evaluate the long-term behavioral and pathological outcomes of a defect in VMAT2 expression. Although VMAT2-HET and VMAT2-KD mice are viable into adulthood, their deletions of VMAT2 expression are not specific to the DA system or to the SN, inducing secondary effects not relevant to PD. The purpose of the present study is thus to analyze VMAT2's involvement in the DA depletion-induced motor features associated with PD and to examine the relevance of DA toxicity in the pathogenesis of neurodegeneration. We created viable VMAT2-KO mice specific to the DA nigrostriatal pathway: VMAT2 floxed engineered mice were stereotaxically injected in the SN with an adeno-associated virus (AAV) expressing the Cre recombinase, allowing VMAT2 removal in DA neurons of the nigrostriatal pathway only. In this model, DA depletion was observed exclusively in the dorsal striatum and was associated with motor deficits starting at 8 weeks post-injection and persisting until week 16, at which time they were accompanied by decreased food and water consumption, weight loss, and accelerated death. However, during this time frame, these symptoms were not associated with any degeneration of nigrostriatal neurons. This study highlights a non-cytotoxic role of DA in our genetic model of VMAT2 deletion exclusively in nigrostriatal neurons, suggesting that preventing DA storage in vesicles, and therefore DA release, may not be responsible for the neurodegeneration seen in PD.

Results

Genetic, neurochemical and behavioral validation of specific ablation of the VMAT2 gene in the SN. Conditional ablation of the VMAT2 gene in the SN was obtained by injecting an AAV2 viral vector expressing the Cre recombinase into the SN of 2-month-old VMAT2 lox/lox mice. The Cre recombinase spliced out the floxed VMAT2 gene specifically in DA neurons of the SN. To validate the specific conditional removal of VMAT2 in the SN, we assessed the efficiency of Cre-mediated splicing via radioactive VMAT2 in situ hybridization (Fig. 1A). A selective absence of VMAT2 mRNA labeling was observed in the SN starting at 8 weeks post-injection and onwards, whereas VMAT2 mRNA was still expressed in DA neurons of the ventral tegmental area (VTA), demonstrating efficient and specific ablation of VMAT2 in the structure of interest.
The absence of VMAT2 mRNA observed in the SN of VMAT2 lox/lox mice injected with the AAV2 expressing the Cre recombinase was associated with a marked decrease in the tissue levels of DA in the dorsal striatum (caudate putamen, CPu), as measured via high-performance liquid chromatography (HPLC) (Fig. 1B; Mann-Whitney U: AAV2-GFP vs AAV2-CRE-GFP: 8 weeks: U = 0, Z = 2.65, **p = 0.008; 16 weeks: U = 0, Z = 2.92, **p = 0.0034). In this same SN-projecting structure, the HVA:DA and DOPAC:DA ratios were increased in Cre-injected VMAT2 lox/lox mice compared with control mice (Fig. 1C; Mann-Whitney U: AAV2-GFP vs AAV2-CRE-GFP: HVA/DA: U = 0, Z = −2.93, **p = 0.0034; DOPAC/DA: U = 6, Z = −2.07, *p = 0.038). This suggests that DA was produced normally but quickly degraded owing to the lack of VMAT2-dependent accumulation and protection in the vesicles. However, the DA level and the HVA:DA and DOPAC:DA ratios were unchanged in the nucleus accumbens (NAc) and the prefrontal cortex (PFC), demonstrating specificity for the DA nigrostriatal pathway.

Behavioral consequences of DA depletion in the nigrostriatal pathway. The survival curve of bilaterally Cre-injected VMAT2 lox/lox mice indicated a progressive decrease in the surviving fraction of mice starting at 16 weeks post-injection, with 100% of mice dead after 19 weeks, whereas a survival rate of 100% was observed in control mice (Fig. 2A; log-rank (Mantel-Cox) test: Chi square = 14.42, p = 0.0001). This effect on survival was associated with weight loss: at 16 weeks post-viral injection, the weight of Cre-injected mice was 25.7 ± 1.5 g compared with 33.6 ± 1.9 g for control mice.

Discussion

The present study directly addresses a role for the specific deficit of DA vesicular storage in the nigrostriatal pathway in the etiology of neuronal cell death in neurodegenerative disorders, most particularly Parkinson's disease. To reach this objective, we used a unique genetic model to remove VMAT2 expression specifically in DA neurons of the SN of adult mice. In a conditional VMAT2-KO mouse model, we previously showed that early somatic deletion of VMAT2 exclusively in DA neurons causes rapid postnatal death 33, whereas the specific genetic deletion of VMAT2 in noradrenergic or serotonergic neurons does not prevent mice from living and growing into adulthood [33-35]. This early postnatal death observed in conditional DA neuron-specific VMAT2-KO mice was similar to that observed in constitutive VMAT2 [25-27] or DA-specific TH 36 KO mice. Therefore, to circumvent this limitation, we used floxed VMAT2 mice, which have no phenotypical alterations by themselves, and initiated VMAT2 gene splicing by stereotaxic injection of a virus expressing the Cre recombinase in adult mice. It is well known that unilateral DA depletion induces rotational behavior; however, the direction of the rotation depends on the model or the drugs used. Contralateral rotational behavior is exhibited by 6-OHDA-lesioned rats, whereas ipsilateral rotations are produced in animals with electrolytic lesions 37. Moreover, in the 6-OHDA model, amphetamine induces ipsilateral rotation, whereas contralateral rotation is observed with apomorphine 38. It has also been shown that activation of the direct pathway (D1) induces contralateral rotation, in contrast to activation of the indirect pathway (D2), which induces ipsilateral rotation 39.
In our model of genetic removal of VMAT2, unilateral injection of the Cre-expressing AAV2 in the SN shows the expected behavioral consequence of DA transmission imbalance: the contralateral rotations are of the same magnitude as those seen in unilateral 6-OHDA-lesioned rats 37,40,41, and these spontaneous rotations are increased upon cocaine administration. Interestingly, although the VMAT2 mRNA signal disappeared as early as 8 weeks after bilateral injection, we did not reliably observe these spontaneous rotations before 16 weeks following viral administration. Accordingly, at 8 weeks after the bilateral virus injections, DA levels were already significantly decreased by 90%, and the mice showed significant locomotion and coordination deficits but had normal food and water intake and no weight loss. Sixteen weeks after viral administration in the SN, VMAT2 lox/lox AAV2-Cre-injected mice showed a total collapse of the DA tissue concentration in the striatal target region, but not in the ventral striatum or frontal cortex, which are mostly innervated by DA fibers originating from the VTA 42. At this time, when DA levels were as low as 4-5% of control, the mice stopped eating and drinking and all died within the next two weeks. We hypothesize that the manifestation of contralateral rotational behavior at 16 weeks post-virus injection may indicate that the <5% remaining DA preferentially activates the direct pathway, as a consequence of DA's higher affinity for the D1 DA receptor 43. Moreover, activation of the direct DA pathway is known to increase ambulation in mice 39. Acting preferentially on this pathway could be a compensatory mechanism to counteract the induced motor dysfunction. The absence of vesicular DA release is not paralleled by any anatomical alterations, as assessed by the key DA markers: the dopamine transporter (DAT), the D2 dopamine receptor, and the tyrosine hydroxylase (TH) enzyme in cell bodies and terminals. In contrast, an increase in D2 dopamine receptor mRNA is observed in the SN and the VTA of DA-depleted mice. This is in agreement with the absence of a role for DA in the development and maintenance of the DA circuitry, as observed in TH-KO mice 36,44. Vesicular transporters are mostly localized in nerve terminals, at a distance from the cell body from which they have to be relocated, which suggests that they must be robustly maintained given their essential role in transmission [45-47]. However, at the present time and to our knowledge, there is no information regarding the half-life of any of these transporters. Since VMAT2-KD mice, which express only 5% of VMAT2 30,48, survive for several months, we can infer that after 16 weeks our dying VMAT2 lox/lox-Cre mice must be below this 5% threshold of VMAT2 expression. Considering that it would take 5 half-life periods to reach 3.125% of expression and 6 half-life periods to reach 1.56%, and that mRNA depletion is reached within one week, one half-life period would lie between 17.5 (15 weeks/6) and 21 (15 weeks/5) days. This value, which represents the first attempt to evaluate the life cycle of these transmembrane vesicular transporters, clearly indicates a very slow turnover rate and metabolism of this vesicular protein. Further experimental data, using complementary approaches, should now confirm this first report.
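This back-of-the-envelope estimate can be reproduced directly; the sketch below assumes simple exponential protein decay over the roughly 15 weeks between mRNA disappearance (about week 1) and death (week 16), exactly as reasoned above.

```python
import numpy as np

# Sanity check of the half-life estimate above, assuming exponential decay of
# the VMAT2 protein after its mRNA disappears (~1 week post-injection), which
# leaves ~15 weeks of decay before death at week 16.
decay_weeks = 16 - 1
for n_half_lives in (5, 6):
    remaining = 0.5 ** n_half_lives                # fraction of protein left
    t_half_days = decay_weeks / n_half_lives * 7   # implied half-life in days
    print(f"{n_half_lives} half-lives -> {remaining:.4%} remaining, "
          f"t1/2 = {t_half_days:.1f} days")
# 5 half-lives -> 3.1250% remaining, t1/2 = 21.0 days
# 6 half-lives -> 1.5625% remaining, t1/2 = 17.5 days
```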
To evaluate a cytotoxic role for cytoplasmic DA, we tested the idea that the absence of DA vesicular storage and the consequent increase of cytoplasmic DA might trigger cell death through mitochondrial dysfunction and ROS generation [49-52]. Supporting this hypothesis, decreased VMAT levels have been consistently found in Parkinson's disease brains [53-55]. However, our findings indicate that this is more likely a consequence of DA neuron death than a cause of their degeneration. We observed that, in contrast to 6-OHDA or MPTP lesions of the DA nuclei 12-14, which induce cellular death of DA neurons, the genetic deletion of VMAT2 we engineered here does not affect the anatomy of the DA system after 16 weeks, as shown by the absence of alterations in presynaptic markers of the dopaminergic system. The dopamine transporter (DAT) regulates the extracellular DA level through reuptake of the released transmitter into presynaptic DA neurons. Tyrosine hydroxylase (TH) is the rate-limiting enzyme in DA biosynthesis, and its expression constitutes a specific indicator of DA production 56,57. The presynaptic D2 autoreceptor regulates DA transmission by inhibiting the probability of vesicular DA release 58, decreasing DA synthesis 59, and altering the uptake of DA 60. Accordingly, DAT, the presynaptic D2 receptor, and TH are the most appropriate markers of damage to the striatal DA terminals in PD 61,62. In our model, we did not find any alterations in DAT mRNA expression in the SN or in TH immunoreactivity in either the striatum or the SN. These results are partially in accordance with those observed in reserpine-treated mice, where no effect was found on DAT immunoreactivity but a decrease in striatal TH immunoreactivity was observed 63. However, transgenic mice expressing only 5% of VMAT2 (VMAT2-KD mice) present an age-dependent degeneration of nigrostriatal dopamine neurons. In this model of unspecific disruption of DA storage, mice exhibit decreased DAT and TH immunoreactivity in the striatum associated with increased oxidative damage 30. Finally, in our model of dopamine depletion, we find an increased expression of D2 dopamine receptor mRNA in the SN and the VTA. Consistent with this effect, it was found that in mice lacking the dopamine transporter DAT, which display biochemical and behavioral dopaminergic hyperactivity, D2 autoreceptor mRNA, measured by in situ hybridization, is reduced in the ventral midbrain 64,65. In DAT-KO mice, this decrease is thought to counteract the increased DA neurotransmission. In our experimental conditions, we hypothesize that in the absence of DA transmission, increased D2 autoreceptor expression may compensate for the dramatic change in DA homeostasis. Genetic and pharmacological reductions of VMAT2 result in lower tissue levels of striatal DA. Consistently, in the DA terminals of the striatum originating from the SN, we observed a dramatic collapse of the DA concentration. As observed in VMAT2-KD mice, this reduction is accompanied by an increase in the ratios of DA metabolites to DA (HVA/DA and DOPAC/DA), suggesting an increase in DA turnover. This highlights that the genetically engineered deficit of DA storage, rather than increasing the accumulation of cytoplasmic DA, accelerates DA metabolism. What is the actual value of animal models for assessing the molecular and cellular etiology of PD? One good illustration of this timescale issue is the comparison of genetic deletion of the DAT in humans and mice.
In humans, mutations of the DAT gene are responsible for a very severe early-onset form of parkinsonism, whose first motor symptoms usually appear during the first postnatal year, with affected children dying before the age of 10 years [66-68]. In mice, knockout of DAT does not trigger such a dramatic phenotype 64, even though sporadic death of about one third of DAT-KO mice, usually between the 20th and 50th postnatal weeks 69, has been reported when the mice were on a C57BL/6 background; such a high proportion of deaths is not seen when the DAT-KO mice are on a C57BL/6×DBA/2 hybrid background 70. Indeed, apart from the neurotoxin-induced animal models, in which DA neuronal death occurs quite rapidly, none of the genetic models of PD based on gene invalidation or mutation show any degeneration of DA neurons 71. In our VMAT2-KO mice targeted to the SN, death occurred within 4 to 5 months, whereas in humans, catecholaminergic neurons degenerate over decades before motor symptoms emerge. In summary, we engineered a new transgenic mouse model of VMAT2 removal-induced DA depletion specific to the nigrostriatal pathway. The alteration of DA homeostasis induced in this model reproduces the motor deficits observed in PD; however, it is not sufficient to produce the DA cell loss and neurodegeneration that characterize PD physiopathology. Although this model of DA depletion does not fully recapitulate the complexity of the human disease, it constitutes the first model to dissociate DA depletion between pathways: we are able to target the nigrostriatal pathway exclusively, without affecting the mesocorticolimbic pathway. This innovative model could help determine the specific involvement of these two distinct DA pathways in both motor and non-motor function and dysfunction.

Methods

Animals. Animal housing, breeding, and care were performed in accordance with the Canadian Council on Animal Care guidelines (CCAC; http://ccac.ca/en_/standards/guidelines), and all methods were approved by the Animal Care Committee of the Douglas Institute Research Center under protocol number 5570. All methods were performed in accordance with the relevant guidelines and regulations. The mice were kept under standard conditions at 22 ± 1 °C and 60% relative humidity, on a 12-h light-dark cycle with food and water available ad libitum. The floxed VMAT2 mouse strain was obtained from the Mouse Clinical Institute (Institut Clinique de la Souris, MCI/ICS, Illkirch, France). Heterozygous VMAT2 floxed mice (VMAT2 lox/+) were crossed to generate the homozygous mice (VMAT2 lox/lox) necessary for Cre-expressing viral vector injection. VMAT2 lox/lox mice were maintained on a C57BL/6J background. After weaning and sexing, mice were housed in groups of 4-5 animals per cage. Male VMAT2 lox/lox mice were used for stereotaxic surgery at 2 months of age.

Quantitative in situ hybridization.

High Performance Liquid Chromatography. HPLC was performed on micropunches of 1 mm diameter from the dorsal striatum (CPu), the nucleus accumbens (NAc), and the prefrontal cortex (PFC) of VMAT2 lox/lox mice at 8 and 16 weeks post viral injection. After decapitation, brains were collected, frozen in isopentane at −30 °C, and stored at −80 °C. Micropunches of specific structures were homogenized in a solution containing 45 μl of 0.25 M perchlorate and 15 μl of DHBA (100 mg/ml), which served as an internal standard.
Following centrifugation at 10,000 rpm for 15 minutes at 4 °C, the supernatant was isolated to detect DA, dihydroxyphenylacetic acid (DOPAC), homovanillic acid (HVA), NE, serotonin (5-HT), and 5-hydroxyindolacetic acid (HIAA) using high-performance liquid chromatography with electrochemical detection (HPLC-EC). Samples were run through a Luna C18(2) 75 × 4.6 mm, 3 μm analytical column at a flow rate of 1.5 ml/min, and the electrochemical detector (ESA Coularray, model #5600A) was set at potentials of −250 mV and +300 mV. The mobile phase consisted of 6% methanol, 0.341 mM 1-octanesulfonic acid sodium salt, 168.2 mM sodium acetate, 66.6 mM citric acid monohydrate, 0.025 mM ethylenediaminetetraacetic acid disodium (EDTA), and 0.71 mM triethylamine, adjusted to pH 4.0-4.1 with acetic acid. Using ESA's CoulArray software, the position of the peaks for each metabolite was compared to an external standard solution containing 25 ng/ml of DHBA, DA, NE, 5-HT, DOPAC, and HVA in 50 mM acetic acid. In parallel, pellets were reconstituted in 50 μl of 0.1 N NaOH and kept for protein quantification using a BCA Protein Assay Kit (Fisher Scientific, Ontario, Canada). Metabolite levels were expressed in µg/g of protein.

Immunofluorescence labeling. Mice were perfused with 0.9% NaCl followed by 4% paraformaldehyde (PFA). Brains were collected, post-fixed in 4% PFA for 2 hours, and kept in a 15% sucrose solution at 4 °C before being cut into coronal sections (40 μm thick) using a cryostat (Leica CM3050S). Free-floating slices of VMAT2 lox/lox mice injected with the AAV2-GFP or the AAV2-CRE-GFP into the SN were rinsed in 0.1 M PBS and incubated overnight with a rabbit anti-TH primary antibody (1/4000; Santa Cruz, sc-14007) diluted in 0.1 M PBS with 2% normal goat serum (NGS) and 0.3% Triton. After washes, slices were incubated for 2 hours with the secondary antibody, a goat anti-rabbit Alexa 555 (1/500; Life Technologies, A-21429), diluted in 0.1 M PBS with 2% NGS and 0.3% Triton. Sections were then rinsed and mounted onto gelatin-coated slides under Vectashield mounting medium. The number of TH-expressing cells in the SN and the VTA was counted bilaterally on every third section (6 sections per animal in total) throughout the entire nucleus, from bregma −2.70 to bregma −3.88. The density of TH-positive fibers was analyzed bilaterally on every third slice, on a total of seven sections per animal throughout the NAc from bregma −1.70 to bregma 0.74, and on nine sections throughout the CPu from bregma −1.70 to bregma 0.14.

Spontaneous and evoked rotations. At 16 weeks post unilateral viral injection in the SN, VMAT2 lox/lox mice were placed in a cylindrical open field (40 cm diameter). Ipsilateral and contralateral rotations, defined as 360° turns containing no turn of more than 90° in the opposite direction, were recorded for 5 minutes under baseline conditions and after cocaine injection (10 mg/kg; recording started 10 minutes after injection); a simple counting scheme implementing this definition is sketched after this subsection. Cocaine hydrochloride (Sigma Aldrich) diluted in 0.9% NaCl was administered intraperitoneally.

Locomotor activity and motor coordination. Spontaneous locomotion was measured before the viral injection and every 4 weeks thereafter in an Omnitech Digiscan activity monitor. Plexiglas open-field chambers (40 cm²) with photocells placed on the bottom and lateral surfaces allowed the total distance travelled to be measured at 5-minute intervals for 2 hours.
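The rotation-scoring rule quoted in the "Spontaneous and evoked rotations" paragraph is algorithmic enough to be worth spelling out. The following Python function is a hypothetical implementation, not the authors' actual scoring software: it counts a full turn only when 360° accumulate without an intervening reversal of more than 90°.

```python
import numpy as np

# Hypothetical implementation of the rotation definition above: a rotation is
# each accumulated 360-degree turn containing no reversal of more than
# 90 degrees in the opposite direction. Input is a head-direction time series.

def count_rotations(heading_deg):
    """Count full turns per direction from a time series of headings (deg)."""
    steps = np.diff(np.unwrap(np.radians(heading_deg)))  # signed turn increments
    cw = ccw = 0
    progress = 0.0      # signed angle accumulated since the current turn began
    peak = 0.0          # furthest signed angle reached in the current turn
    for step in steps:
        progress += step
        # remember the furthest point reached in the current direction
        if abs(progress) > abs(peak) and progress * peak >= 0:
            peak = progress
        # a reversal of more than 90 deg from the peak invalidates the turn
        if abs(peak - progress) > np.pi / 2:
            progress, peak = 0.0, 0.0
        if progress >= 2 * np.pi:                 # full clockwise turn
            cw += 1
            progress, peak = 0.0, 0.0
        elif progress <= -2 * np.pi:              # full counter-clockwise turn
            ccw += 1
            progress, peak = 0.0, 0.0
    return cw, ccw

# Example: a synthetic trace of three uninterrupted clockwise turns.
print(count_rotations(np.linspace(0, 3 * 360, 500)))   # (3, 0)
```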
Motor coordination was assessed with an accelerating rotarod (ROTO-ROD, Series 8, IITC Life Sciences) before the viral injections and every 4 weeks thereafter. The latency of the mice to fall was recorded over 4 consecutive trials at speeds accelerating from 4 to 25 rpm, for a maximum of 5 minutes.

Gait analysis. At 8 and 16 weeks post-viral injection, the gait of VMAT2 lox/lox mice during spontaneous walk/trot locomotion was analyzed to identify specific paw-step parameters (stride length, stride width, and stride angle).

Grip test. Sixteen weeks post viral injection, VMAT2 lox/lox mice were placed in the center of a wire mesh screen (12 mm squares of 1 mm diameter wire), and the screen was rotated to an inverted position over 2 s, with the mouse's head declining first. The screen was held steadily 40-50 cm above a padded surface. The time at which the mouse fell off was recorded; mice were removed when the criterion time of 300 s was reached.

Statistical analysis. The results are expressed as mean ± SEM (standard error of the mean). No statistical methods were used to pre-determine sample sizes, but our sample sizes were similar to those generally employed in the literature for the same paradigms. Statistical analyses were performed using Statistica software. Since the sample sizes were small (n < 30) and/or the variables did not follow a normal distribution (Shapiro-Wilk test) and/or the variances were not equal among groups (Levene test), we used nonparametric statistical analyses. For 2 × 2 comparisons, we performed the Mann-Whitney U test for two independent samples (AAV2-GFP vs AAV2-CRE-GFP) and the Wilcoxon matched-pairs test for dependent samples (8 weeks vs 16 weeks; ipsilateral vs contralateral). For multiple repeated-measures analyses (total distance travelled and latency to fall on the rotarod), we used the Friedman test followed by the Wilcoxon test for the 2 × 2 comparisons. The survival distributions of the AAV2-GFP and AAV2-CRE-GFP groups were compared using the log-rank (Mantel-Cox) test. Optical density was quantified using MCID for VMAT2, D2, and DAT mRNA labelling, and the number of TH-positive fibres in the striatum (CPu vs NAc) was quantified with ImageJ. A p value < 0.05 was taken to indicate statistically significant differences between groups.
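For readers reproducing this workflow outside Statistica, the same nonparametric battery is available in scipy.stats; the sketch below uses fabricated placeholder numbers, and note that the log-rank test is not part of scipy (a package such as lifelines would be needed for the survival comparison).

```python
import numpy as np
from scipy import stats

# Hedged sketch of the nonparametric workflow described above, using
# scipy.stats equivalents of the Statistica tests. The data arrays are
# fabricated placeholders; substitute the per-animal measurements.

gfp = np.array([102.0, 96.5, 110.2, 98.7, 105.1])       # AAV2-GFP group
cre = np.array([61.3, 70.8, 55.9, 66.4, 59.2])          # AAV2-CRE-GFP group

# Normality and equal-variance checks motivating the nonparametric choice.
print(stats.shapiro(gfp), stats.levene(gfp, cre))

# Two independent samples: Mann-Whitney U test.
print(stats.mannwhitneyu(gfp, cre, alternative='two-sided'))

# Paired samples (e.g., 8 vs 16 weeks in the same animals): Wilcoxon test.
week8, week16 = gfp, gfp * 0.7
print(stats.wilcoxon(week8, week16))

# Repeated measures over more than two time points: Friedman test.
print(stats.friedmanchisquare(gfp, gfp * 0.9, gfp * 0.7))
```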
An Overview of Cotton Gland Development and Its Transcriptional Regulation
Cotton refers to species in the genus Gossypium that bear spinnable seed coat fibers. A total of 50 species in the genus Gossypium have been described to date. Of these, only four species, viz. Gossypium hirsutum, G. barbadense, G. arboreum, and G. herbaceum, are cultivated; the rest are wild. The black dot-like structures on the surfaces of cotton organs or tissues, such as the leaves, stem, calyx, bracts, and boll surface, are called gossypol glands or pigment glands; they store terpenoid aldehydes, including gossypol. The cotton (Gossypium hirsutum) pigment gland is a distinctive structure that stores gossypol and its derivatives. It provides an ideal system for studying cell differentiation and organogenesis. However, only a few genes involved in the process of gland formation have been identified to date, and the molecular mechanisms underlying gland initiation remain unclear. The terpenoid aldehydes in the lysigenous glands of Gossypium species are important secondary phytoalexins (with gossypol being the most important) and one of the main defenses of plants against pests and diseases. Here, we review recent research on the development of gossypol glands in Gossypium species, the regulation of the terpenoid aldehyde biosynthesis pathway, discoveries from genetic engineering studies, and future research directions.

Introduction

Cotton plants in the genus Gossypium possess pigment glands, which are often referred to as "gossypol glands". These gossypol glands appear as black or brownish-red dots in all the tissues of cotton plants, with the exception of pollen and the seed coat [1,2]. Gossypol is a unique secondary metabolite that can enhance the resistance of cotton plants to pests and diseases [3]. The toxicity of cottonseeds to humans and non-ruminant animals due to the presence of gossypol limits their use as a source of protein and oil [4,5]. Several previous studies have examined the genesis and development of gossypol glands in cotton and the production and accumulation of gossypol during gossypol gland development [6,7]. The gossypol glands of cotton plants emerge from a mass of meristematic cells as a group of secretory cells, and mature gossypol glands consist of a subcuticular storage cavity surrounded by one to three layers of secretory cells [8]. Gossypol gland formation involves the dissolution of gossypol gland cells into the gland primordium, followed by the degradation of the gland primordium into the gossypol gland cavity [9]. Several substances produced at different points in this process mediate the degradation of the gossypol gland primordium and the development of the gossypol gland cavity [10]. Previous studies in cotton have shown that programmed cell death (PCD) plays a key role in the production of gossypol glands [11]. The PCD of gossypol gland cells promotes the production and accumulation of gossypol; the products of this PCD might thus contribute to gossypol production. Gossypol and its derivatives, as well as other secondary metabolic products in the gossypol gland, are the main chemicals stored in the gland and confer its color [12,13]. Cotton plants possess internal and external glands.
Nectar glands are external glands that are often present on the plant surface, whereas internal glands (radius of 0.1-0.4 mm) are oval or spherical and are either black, brilliant yellow (orange), yellow-brown (yellowish-brown), green (red-brown), or purple, depending on the species [5,7,14]. The glands of cottonseeds contain mostly gossypol with traces of deoxyhemigossypol, and the glands of cotton leaves contain hemigossypolone, a gossypol derivative [15,16], and heliocides. Gossypol is a terpenoid aldehyde that has been used in medicine for its anti-tumor and anti-carcinogenic activities. It has also been used as an antifertility agent for male patients, as a pesticide to combat insect, fungal, and bacterial pests, and as an ingredient in various cosmetic products [17,18].

Developmental Changes in the Morphology of the Gossypol Gland in Cotton Plants

Gossypol glands are present in all tissues of cotton plants, with the exception of the pollen and seed coat. In older plants, gossypol glands are also present in the phloem rays of the bark [19-21]. The number of gossypol glands is particularly high in Gossypium barbadense. The density and size of gossypol glands vary in different parts of cotton plants, as well as among species and races. The gossypol glands of G. barbadense are darker and more conspicuous than those of the other cultivated species. To characterize developmental changes in the morphology of the gossypol gland in cotton plants, we studied the microstructure of the gossypol gland in cotyledons at 2, 12, 36, and 72 h after germination and in embryos at 22, 25, 29, 35, and 50 days post-anthesis (DPA); samples were collected and fixed in formaldehyde-alcohol-acetic acid from the near-isogenic line GI1 (G. barbadense with gossypol glands in both the seeds and plants). The tissues were dehydrated through a graded ethanol series, embedded in paraffin, and cut into sections. The histological structure of the gossypol glands was characterized using a scanning electron microscope [22,23]. Several changes in the gossypol glands on the surface of the GI1 cotyledon occurred during embryo formation. At 22 and 25 DPA, the gossypol gland was absent from the surface of the cotyledon (Figure 1A,B); at 29 DPA, the gossypol gland on the surface of the cotyledon was black (Figure 1C); at 30 DPA, the gossypol gland was light brownish-red (Figure 1D); and from 35 to 50 DPA, the amount of gossypol in the pigment gland cavity continued to grow, nearly filling the entire cavity (Figure 1E,F). The morphological changes of the gossypol glands during embryonic development were also characterized (Figure 1G-I). At 22 DPA, the cotyledon cells were closely arranged, and no specialized cells were present (Figure 1G). The gossypol gland primordium began to form around 25 DPA; it consisted of a dozen cells with a black hue and a thick protoplasm that formed a ring around the gland. The gland primordium was spherical, and its cells were densely packed together in two to three layers on the surface. There were typically two to three large central cells with a visible cell wall and nucleus [2,6]. There were usually one to two peripheral cells, which were smaller than the central cells. The peripheral cells were compressed into long ovals, and the central cells were round (Figure 1H). Signs of differentiation appeared at 29 DPA, and a portion of the gland primordium tissue disintegrated into the gossypol gland cavity. Irregularities were observed in the core cells of the gland primordium that would later dissolve.
As they began to dissolve, the distinction between the cell wall and the nucleus became less pronounced. The peripheral cells were compressed into a thin strip, and a small cavity began to form (Figure 1I). The fuzzy appearance of the cells in the gossypol gland cavity stemmed from the destruction of the nuclei of the gland primordium cells. Staining revealed a deep, thick wall, which might have developed from the cell debris that accumulated following the degradation of the gland primordium tissue. Filamentous material was present in the cavity because the peripheral cells had been compressed into long strips (Figure 1J). The disintegration of the core cells continued between 30 and 50 DPA, and the width of the gossypol gland cavity increased during this period (Figure 1K). At 50 DPA, a developing secretory cavity was visible in the center of the gossypol gland [19,21,24], surrounded by several disintegrating secretory cells (Figure 1L). The microstructure of the gossypol gland during seed germination was also characterized. Two hours after seed germination, the gossypol gland cavity on the surface of the cotyledon was visible, consisting of two to three layers of cells. The peripheral cells resembled long strips due to the extrusion process (Figure 1M). The long, extended morphology of the peripheral and central cells of the gossypol gland cavity was maintained from 12 to 36 h after seed germination. The boundary between the cell nucleus and the cell wall was blurry, and some filamentous material was present in the gossypol gland cavity (Figure 1N,O). The quantity of filamentous material in the gossypol gland cavity was higher 72 h after seed germination, and the color of this material darkened. The diameter of the gossypol gland cavity changed only slightly during the germination stage (Figure 1P). The major cause of the color change in the gossypol gland between seed germination and embryo formation was the difference in staining time between these two developmental stages [6,15,19,22].

Functions of the Gossypol Gland in Cotton Plants

The growth of the global human population will require increased production of food, fiber, and feed. However, crop production is limited by climatic conditions, including the increased drought frequency and salinity of agricultural lands [5,25,26]. Cotton is an important crop for the production of fiber and food, and a total of 49 species belong to the genus Gossypium L. Cottonseed, soybeans, and rapeseed account for 6.9% of the world's production of protein meal [27]. Cottonseed production on a global scale could potentially provide the yearly protein requirements of half a billion people, as cotton plants generate 1.65 kg of seed for every kg of fiber produced [28]. However, all cotton species possess lysigenous glands that produce terpenoid aldehydes, the most dangerous of which is the sesquiterpenoid gossypol, which is toxic to both non-ruminant animals and humans [29]. These gossypol glands store terpenoid aldehydes, such as gossypol, on the surfaces of cotton organs and tissues [2,28,30]. The structure of gossypol, a phytoalexin made by cotton plants, was first identified by Adams et al. (1938). These glands increase the resistance of plants to pests and diseases [31,32]. Gossypol has been shown to have anti-carcinogenic, anti-HIV, and antibacterial activities and to reduce male fertility in vitro [33,34].
However, the full nutritional potential of the protein and oil of cottonseed has not been exploited owing to gossypol's toxicity; gossypol also discolors cottonseed oil [35]. Cotton varieties whose seeds and plants lack gossypol glands have been developed, and these varieties either possess no gossypol or have extremely low gossypol levels [29]. Although the protein and oil derived from glandless seeds are free of gossypol and thus suitable for direct consumption [25], the resistance of these glandless cotton varieties to pests and their fiber yields are reduced; thus, glandless cotton varieties have not been widely cultivated [7,36]. Cotton has long been thought of as a crop that could provide a valuable source of both fiber and food. Future breeding efforts are needed to develop cotton varieties that possess gossypol glands in the roots, leaves, and stems (which aid the resistance of cotton plants to pests and diseases) but lack glands in the seeds, permitting their safe consumption [37]. Alternatively, an ideal cotton cultivar might be characterized by gossypol production delayed until germination, a trait that has only been observed in a few Australian Gossypium species [7,36,38]. Many molecular biology and genetic engineering studies have examined the relationship between gossypol glands and gossypol production, and this work has led to several new findings that will have a major effect on the breeding and planting of cotton, the industrial processing of cottonseed, and even its use as animal feed and human food. These discoveries will greatly aid the development of agriculture and the economy and help ensure global food security [2,5,31,39].

The Gossypol Biosynthetic Pathway

Cotton plants produce a group of lineage-specific sesquiterpenoids, such as gossypol and hemigossypolone, that have antifungal, antibacterial, or insecticidal activity against a variety of herbivores, including the lepidopterans cotton bollworm and beet armyworm [40-42]. Gossypol is the major, if not the only, sesquiterpene phytoalexin present in cotton seeds, and hemigossypolone is more abundant than gossypol in leaves [43,44]. Terpenoid aldehydes are produced in the lysigenous glands of cotton plants [45,46]. Gossypol is the main substance in the glands of the achlorophyllous parts of Gossypium hirsutum; by contrast, gossypol methyl and dimethyl ethers are the most common substances in the glands of G. barbadense. Hemigossypolone is the major terpenoid aldehyde in the glands of the immature green tissues of G. hirsutum, and a novel quinone, hemigossypolone-7-methyl ether, has been identified in G. barbadense [15,47]. There is substantial variation in terpenoid quinones and their heliocide derivatives among wild Gossypium spp. and allied Gossypieae taxa. Numerous cadinene sesquiterpenoids and heliocides (sesterterpenoids) involved in disease and insect resistance are present in the gossypol glands of cotton plants [23,48,49]. Gossypol was originally thought, on the basis of its structure, to be produced from acetate through the isoprenoid pathway. Previous studies have investigated the incorporation of mevalonate-2-14C into gossypol, a key step in the isoprenoid pathway, as well as the distribution of radioactivity in gossypol (Figure 2) [5,50,51]. Cotton terpenoid aldehydes and cadalene derivatives are sesquiterpenes (C15) generated by terpenoid metabolism in the cytosol through the mevalonate (MVA) pathway [36,52].
Farnesyl diphosphate (FPP) is converted into the linear carbon skeleton of sesquiterpenes in cotton [53]. FPP is cyclized by numerous sesquiterpene synthases to generate the molecular framework for distinct types of sesquiterpenes [54]. In cottonseed, gossypol is the main sesquiterpenoid produced, and the concentrations of desoxyhemigossypol (dHG) and hemigossypol are low. Hemigossypolone is produced from dHG in cotton leaves [48,55]. The cadinene synthase enzyme, a soluble hydrophobic monomer with a molecular mass of 64 to 65 kDa, has been isolated from a glandless cotton mutant [53,56]. Various cadinene sesquiterpenoids and heliocides (sesterterpenoids) are present in the gossypol glands of cotton and contribute to the resistance of cotton plants to disease and insect pests [39]. Figure 2 illustrates a possible mechanism by which these chemicals are produced. Infection of cotton stele tissue with Verticillium dahliae conidia increases the abundance of 3-hydroxy-3-methylglutaryl-CoA reductase (HMGR) mRNA and HMGR activity, suggesting that HMGR plays a key role in the production of sesquiterpenoids [33]. The enzymatic product of (E,E)-FPP cyclization in cotton extracts was later shown to be (+)-δ-cadinene (CDN) [53,57,58]. The enzymatic mechanism by which CDN synthase generates the cadinene skeleton of cotton sesquiterpenoids has been shown to involve the isomerization of FPP to a nerolidyl intermediate; cyclization to a cis-germacradienyl cation; a 1,3-hydride shift; cyclization to a cadinanyl cation; and deprotonation to form CDN [33]. At a branch point in the MVA pathway, CDN synthase catalyzes the first committed step in the production of cadinene sesquiterpenoids from FPP. The cycloaddition of myrcene or ocimene to hemigossypolone to generate heliocides then occurs through a Diels-Alder reaction; monoterpenes and their precursors are produced in plastids through the 1-deoxy-D-xylulose 5-phosphate pathway (Figure 2) [5,15,50]. The biosynthesis of gossypol and its derivatives has been studied extensively in recent years. One of the precursors to hemigossypol in G. hirsutum is 8-hydroxy-(+)-δ-cadinene [59]. The cytochrome P450 monooxygenase CYP706B1, a (+)-δ-cadinene 8-hydroxylase involved in cotton sesquiterpene biosynthesis, is expressed in the aerial tissues of glanded cotton cultivars but is absent, or expressed at extremely low levels, in the aerial tissues of a glandless cultivar. The expression pattern of CYP706B1 and the site at which it hydroxylates (+)-δ-cadinene indicate that CYP706B1 functions at an early stage of gossypol biosynthesis and could thus be a target for the genetic engineering of gossypol levels in cotton [34,60,61]. Desoxyhemigossypol plays an important role in the production of these chemicals. A methyltransferase (S-adenosyl-L-methionine:desoxyhemigossypol-6-O-methyltransferase) has been identified, purified, and characterized in cotton stele tissue infected with V. dahliae. Desoxyhemigossypol-6-methyl ether is used to synthesize methylated hemigossypol, gossypol, hemigossypolone, or heliocides (Figure 2) [5,48,51].

Molecular Cloning of Genes Associated with Gossypol Synthesis and Gossypol Glands

Various genes involved in the terpenoid biosynthesis pathway have been cloned.
Cadinene synthase genes were initially cloned and functionally characterized from the A genome of the diploid cotton Gossypium arboreum; they are now known to form a large multigene family in cotton [33,34], similar to the terpene cyclase genes identified in other plants [51,62]. Numerous allelic and gene-family variants of cadinene genes have been identified in both G. arboreum [33,49,63] and the allotetraploid (A + D genomes) G. hirsutum. The activity of cadinene synthase enzymes and the expression of their transcripts are induced in cotton stems inoculated with V. dahliae [34,56], as well as in suspension cultures of cotton treated with V. dahliae elicitors [57]. Cadinene synthesis appears to be controlled by the same genes during the development of these two cotton species. The expression of these genes increases during seed development and is linked to the production and deposition of gossypol in the lysigenous glands of the embryo [48,64,65]. The cadinene gene family in Gossypium has been proposed to comprise two main subfamilies, cdn1-A and cdn1-C, based on sequence similarity and differences in transcriptional control [33]. The structure of cadinene genes, including the number, location, and size of the exons and introns, is highly conserved, consistent with the genomic clones of other terpene cyclase genes, such as the 5-epi-aristolochene synthase gene of tobacco (Nicotiana tabacum L.). Janga et al. studied the genes regulating gland development in cotton plants and sequenced approximately 2 kb of the promoter region from each genomic clone; they found low sequence conservation between the promoter regions, with isolated areas of similarity concentrated around the TATA box and transcription start site [51,63]. Luo et al. cloned and determined the function of (+)-δ-cadinene 8-hydroxylase, a cytochrome P450 monooxygenase of cotton sesquiterpene biosynthesis; they cloned a 1.9-kb P450 cDNA encoding a 522-amino-acid protein with 48% similarity to soybean cytochrome P450 CYP82A3, containing a consensus heme-binding motif and an oxygen-binding pocket sequence [49,53,57]. This P450 is present in the leaves of the glanded cotton species G. hirsutum but not in the leaves of glandless cotton. The gossypol glands, and the terpenoids associated with them, are absent from the leaves of glandless plants, indicating that this P450 enzyme is involved in the biosynthesis of terpenoids in cotton [60]. Suppressive subtractive hybridization and other approaches have been used to generate a subtractive library and a normalized cDNA library of gland morphogenesis-related genes from the cotton mutant Xiangmian 18. Some important genes have been cloned, such as the gene encoding the RanBP2 zinc finger protein in upland cotton [49,66]. Furthermore, the gene encoding G. barbadense desoxyhemigossypol-6-O-methyltransferase and the gene encoding a cytochrome P450 associated with gossypol glands have been cloned and studied [16,53,63,67].

Transcriptional Regulation of Cotton Gland Morphogenesis and Pigmentation by CGP1, GoPGF, and CGFs

Given the importance of cotton as a fiber crop, understanding the development of gossypol glands and the synthesis of secondary metabolites, and how they can be used to improve the production and quality of cotton, has been a major goal of current research [2,31,38,68]. However, the challenges associated with generating novel glandless mutants and cloning the related genes using map-based cloning methods have impeded research progress.
The cloning and characterization of GoPGF [68,69] and other CGF genes in recent studies have provided important new insights into the formation of gossypol glands. GoPGF/CGF3 regulates both gland morphogenesis and gossypol synthesis and production, CGF1 plays a role similar to that of GoPGF/CGF3, and CGF2 regulates the density of gossypol glands (Figure 3). The silencing of GoPGF halts gossypol gland development in cotton, resulting in negligible gossypol levels. Although preliminary observations suggested that the silencing and knockout of CGP1 in glanded cotton produce a phenotype similar to that of the gopgf mutant, detailed analyses have revealed that cgp1 mutants possess normally structured gossypol glands in numbers similar to wild-type plants, indicating that CGP1 does not play a role in gland morphogenesis. The reduced accumulation of gossypol and related metabolites in cgp1 plants is the most likely explanation for their glands' lack of colored pigments. The deletion of CGP1 results in the down-regulation of many gossypol biosynthesis genes as well as a significant decrease in gossypol levels [2,31,45,48]. The development of gossypol glands does not appear to depend on gossypol production, given that transgenic cotton lines with low gossypol levels (obtained by silencing the key gossypol biosynthesis gene CYP706B1) show normal gland growth [68]. Gao et al. showed that GoPGF controls gland morphogenesis and gossypol production independently, by binding to the promoters of WRKYs and terpene synthases (TPSs), respectively. The MYB transcription factor CGP1 regulates gossypol accumulation but not gland development; it possesses transcriptional activity and interacts with GoPGF in the nucleus [45,48,49]. MYB proteins tend to form homodimers and heterodimers, which increases their affinity and specificity for DNA binding [57,69,70]. Thus, CGP1 and GoPGF might form heterodimers to regulate the synthesis of gossypol and other terpenoids; however, they do not form heterodimers to regulate gland growth (Figure 3). Although GoPGF is highly expressed throughout cotton plants, the expression of CGP1 in the roots is low, indicating that GoPGF might form homodimers, or dimers with other transcription factors, in the roots. Yeast one-hybrid assays have shown that the G-box motif in the promoters of several WRKY and TPS genes is a binding site for GoPGF [38,68,71]. Whether the presence of CGP1 increases the affinity of GoPGF for the promoters of WRKY and TPS genes, or enhances the in vivo transcriptional activation of target genes, requires further study. Although the knockout of GoPGF results in the complete absence of gossypol, cgp1 mutants retain some residual gossypol, suggesting that CGP1 plays an important, but not essential, role in the regulation of gossypol synthesis. In addition to gossypol, several secondary metabolites are present exclusively in gossypol glands and confer their characteristic intense color [31,53].

Figure 3. Regulation of gland formation and gossypol biosynthesis. The GoPGF protein, as a master regulator, controls the specification and differentiation of gland cells by regulating the expression of downstream genes. CGF2, a NAC transcription factor, also plays an important role in gland development and terpenoid biosynthesis.
Moreover, GoPGF interacts with CGP1, an R2R3-MYB transcription factor, to regulate the gossypol biosynthesis pathway.

Challenges, Conclusions, and Future Directions

Ensuring food security has long been a major goal for mankind, and it is being challenged by human population growth and the increasing scarcity of arable land. There is thus a pressing need to develop ways to utilize cotton plants more efficiently as a source of fiber and food. Over the past few years, research on gossypol glands and gossypol in cotton has focused on producing plants with glandless seeds and glanded foliage using molecular cloning and genetic engineering techniques, so that cottonseeds can be directly consumed [72,73]. The long-term goal of future research should be to understand the mechanisms by which genes control gland development and gossypol synthesis, as well as to enhance the resistance of cotton plants to pests and pathogens, as this will increase the utility and economic value of cottonseeds. The development of cotton plants lacking glands in the seeds and leaves will have a substantial effect on the breeding, cultivation, and consumption of cotton. In the future, cotton will become a more valuable source of fiber, food, and oil, which will increase the world's food security. Previous research has also enhanced our understanding of the roles of secondary compounds in plant tissues and of the molecular mechanisms that control them. For example, the roles of artemisinin, a substance with antimalarial properties, in Artemisia (wormwood) plants are similar to those of gossypol in cotton: both are stored in specific glands present in various tissues. Clarifying the molecular mechanisms underlying the synthesis of useful secondary compounds will benefit both farmers and the general public.

Funding: This research was supported by the National Natural Science Foundation of China (31670233). We apologize to colleagues whose work could not be cited due to the space limitation of our review.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: All data supporting the findings of this study are available within the paper published online.
Conflicts of Interest: The authors declare no conflict of interest.
Lipid Composition but Not Curvature Is the Determinant Factor for the Low Molecular Mobility Observed on the Membrane of Virus-Like Vesicles
Human Immunodeficiency Virus type-1 (HIV-1) acquires its lipid membrane from the plasma membrane of the infected cell from which it buds out. Previous studies have shown that the HIV-1 envelope is an environment of very low mobility, with the diffusion of incorporated proteins two orders of magnitude slower than in the plasma membrane. One of the reasons for this difference is thought to be the HIV-1 membrane composition, which is characterised by a high degree of rigidity and lipid packing and has, until now, been difficult to assess experimentally. To further refine the model of molecular mobility on the HIV-1 surface, we herein investigated the relative importance of membrane composition and curvature in simplified model membrane systems, large unilamellar vesicles (LUVs) of different lipid compositions and sizes (0.1-1 µm), using super-resolution stimulated emission depletion (STED) microscopy-based fluorescence correlation spectroscopy (STED-FCS). Establishing an approach that is also applicable to measurements of molecular dynamics in virus-sized particles, we found, at least for the 0.1-1 µm sized vesicles, that the lipid composition, and thus membrane rigidity, but not the curvature, plays an important role in the decreased molecular mobility on the vesicles' surface. This observation suggests that the composition of the envelope, rather than the particle geometry, contributes to the previously described low mobility of proteins on the HIV-1 surface. Our vesicle-based study thus provides further insight into the dynamic properties of the surface of individual HIV-1 particles, and paves the methodological way towards better characterisation of the properties and function of viral lipid envelopes in general.

Introduction

Human Immunodeficiency Virus Type-1 (HIV-1) is an enveloped retrovirus. It acquires its lipid membrane from the plasma membrane of the infected cell during the budding process driven by the assembly of the viral structural protein Gag [1]. In the budded, morphologically mature HIV-1 particle (Figure 1a), this combination of lipids, viral structural proteins, the membrane-incorporated viral fusion protein Env, and other cellular proteins creates a unique lipid/protein surface environment that is highly curved owing to the size (<140 nm) of the virus particle. Lipidomic studies of isolated viral lipids have shown that, in comparison to the plasma membrane, the HIV-1 membrane is enriched in sphingomyelins (SMs), glycosphingolipids, cholesterol (Chol), and phosphoinositides such as phosphatidylinositol 4,5-bisphosphate (PIP2) [2,3]. Such an environment is characterised by a high degree of lipid packing and therefore low polarity within the lipid bilayer. When such a membrane is studied with polarity-sensing probes such as Laurdan, this results in a blue-shifted emission spectrum of the probe [4]. Such spectral changes are often compacted into the General Polarization (GP) parameter, spanning values between −1 and 1: blue-shifted fluorescence results in high GP values, indicating rigid, highly packed membranes, whereas red-shifted fluorescence results in low GP, signifying fluid, loosely packed lipid environments.
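Because the GP parameter carries much of the packing argument in this paper, a minimal sketch of the calculation may be useful. The formula is the standard Laurdan GP definition; the intensity values and channel naming are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the General Polarization calculation described above.
# "Blue" and "red" denote the ordered- and disordered-phase emission bands of
# (C-)Laurdan; exact band edges vary between studies and are not given here.

def general_polarization(i_blue, i_red):
    """GP = (I_blue - I_red) / (I_blue + I_red), bounded in [-1, 1]."""
    i_blue = np.asarray(i_blue, dtype=float)
    i_red = np.asarray(i_red, dtype=float)
    return (i_blue - i_red) / (i_blue + i_red)

# A rigid, tightly packed membrane gives blue-shifted emission and high GP:
print(general_polarization(750, 250))   # 0.5, as reported for HIV-1 membranes
# A fluid, loosely packed membrane gives red-shifted emission and low GP:
print(general_polarization(300, 700))   # -0.4
```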
Laurdan-based spectrophotometric studies of bulk purified particles [5] and spectral scanning microscopy-based analyses of individual particles [6] have indeed shown that HIV-1 membranes have a very high GP value of 0.5. This indicates a very high level of rigidity, which has also been reported in model membranes highly enriched in cholesterol and sphingomyelin [7] and in giant plasma membrane vesicles [8]. All these data support the idea that HIV-1 may acquire its lipids and bud from pre-existing domains of highly packed lipid environment, the so-called "lipid rafts". On the other hand, a recent study on supported lipid bilayers suggested that rather than budding from existing lipid rafts, Gag may create its own specialised lipid environment by selectively trapping cholesterol and PIP2 at virus assembly sites [9]. Despite this knowledge of the overall characteristics of the HIV-1 envelope, very little is known about the dynamic characteristics of this environment, such as the mobility of individual molecules on the surface of individual particles. This is because such measurements require a high spatial and temporal resolution to capture the diffusion of molecules confined within the <140 nm diameter of the virus particle, which is well below the diffraction limit of a conventional optical microscope. Notably, this precludes the application of high-temporal-resolution spectroscopic techniques such as (confocal) fluorescence correlation spectroscopy (FCS), which relies on evaluating intensity fluctuations arising from fluorophores' transits through the focused excitation laser beam. However, this shortcoming has been addressed thanks to recent advances in the field of super-resolution microscopy. Techniques such as stimulated emission depletion (STED) microscopy can now reach the desired sub-100 nm resolution required to image subviral structures [10]. Furthermore, the combination of STED with FCS allows the determination of molecular diffusion coefficients in sub-diffraction-sized observation spots [11,12]. In the field of virology, this technique represents a promising opportunity to study molecular dynamics in the context of individual viruses. Recently, STED-FCS in combination with fast line beam-scanning (scanning STED-FCS, sSTED-FCS) was applied to study the mobility of Env proteins on the surface of HIV-1 particles [6]. This study established that the HIV-1 envelope is an intrinsically highly immobile environment, in which Env and other molecules, such as major histocompatibility complex class-I (MHC-I) and glycophosphatidylinositol (GPI)-anchored proteins, all exhibit a very low diffusion coefficient (D ≈ 0.002 µm²/s). Of note, this mobility is two orders of magnitude slower than that of the same proteins in the cellular plasma membrane [6]. Such low mobility appears to be due to the combination of the highly ordered and highly curved nature of the viral lipid envelope, the tight packing of internal membrane-interacting virus proteins (the MA domain of Gag), and the passive incorporation of many types of cellular proteins during virus budding. Interestingly, the mobility is very low in both immature and mature viruses, despite the differences in Gag organisation (in the case of Env, mobility is even further decreased in immature viruses due to its tight links with the immature Gag shell) [6].
Figure 1(b,c). (b) Representative STED microscopy image of the extruded LUV-Ld preparation. Imaging was used to locate LUVs of comparable size and brightness, followed by the acquisition of the fluorescence fluctuation data. The white star marks the LUV used for analysis. Lateral fluorescence intensity profiles of representative LUVs are shown in Figure S1. Scale bar: 500 nm. (c) Representative raw (grey) and gated & bleaching-corrected (black) autocorrelation curves obtained from fluorescence fluctuation data for an extruded LUV. Gated & bleaching-corrected autocorrelation curves were fitted using a generic 2D diffusion model (red).

While these findings have provided new insights into previously inaccessible and thus unexplored dynamic aspects of the HIV-1 envelope, there are still many unanswered questions, such as whether, similarly to surface proteins, lipids in the HIV-1 envelope also display reduced mobility, and what the main factors are that may affect their behaviour in highly curved, sub-diffraction-sized HIV-1 membranes. Here, we used synthetic lipid vesicles of 100 nm to 1 µm in diameter as a model to examine two such factors: membrane curvature and lipid composition. However, the faster diffusion of lipids compared to membrane proteins required a substantial modification of the experimental approach: instead of the scanning variant [6], we herein applied single-point STED-FCS with a higher temporal resolution. Though this is an established method for measuring lipid diffusion in large membrane structures, such as giant vesicles or cells [11,13,14], it is, to our knowledge, the first application to small virus-like membrane particles, and the experiments involved the development of a non-standard acquisition and analysis pipeline to combat artefacts resulting, e.g., from significant photobleaching. By measuring the mobility of lipids on the surface of vesicles of different sizes and chemical compositions, characterised by different degrees of lipid packing, we find that lipid composition and packing, but not membrane curvature, play an important role in lowering the mobility of lipid molecules, at least for 0.1-1 µm sized vesicles. This effect may thus contribute to the low protein mobility on the HIV-1 surface. Furthermore, the STED-FCS approach employed to observe lipid dynamics in small vesicles opens the way for future studies on bona fide HIV-1 and other membrane-enveloped virus particles.

Preparation of Large Unilamellar Vesicles (LUVs)

LUVs were prepared from the desired lipid mixture: POPC (LUV-Ld), POPC:Chol (67:33 molar ratio; LUV-Lo), or DOPC:Chol:SM (37:46:17 molar ratio; LUV-HIV-like). For immobilisation on the glass surface and lipid diffusion measurements, the lipid derivatives DSPE-PEG-biotin and Atto647N-DPPE were included, each at approx. 1 molecule per 1000 lipids, whereas for GP experiments, C-Laurdan was added at the same concentration. All lipid stock solutions were prepared in chloroform and stored at −20 °C.
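As a practical aside, converting the molar ratios above into pipetting volumes is a small calculation; the helper below is hypothetical (the stock concentrations and total lipid amount are assumptions, not values from the paper).

```python
# Hypothetical helper for preparing the lipid mixtures listed above: converts
# target molar ratios into volumes of chloroform stock to pipette. The stock
# concentrations and total lipid amount are illustrative assumptions.

STOCKS_MM = {"DOPC": 10.0, "Chol": 10.0, "SM": 5.0}      # stock conc. (mM = nmol/uL)

def stock_volumes_ul(molar_ratio, total_nmol=500.0):
    """Return microlitres of each stock for the requested molar ratio."""
    total_parts = sum(molar_ratio.values())
    volumes = {}
    for lipid, parts in molar_ratio.items():
        nmol = total_nmol * parts / total_parts          # nmol of this lipid
        volumes[lipid] = nmol / STOCKS_MM[lipid]         # nmol / (nmol/uL) = uL
    return volumes

# LUV-HIV-like mixture: DOPC:Chol:SM at 37:46:17.
print(stock_volumes_ul({"DOPC": 37, "Chol": 46, "SM": 17}))
```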
LUVs of heterogeneous sizes (unextruded LUVs) were formed by drying the lipid mixture, i.e., evaporating the organic solvent in a vacuum desiccator for 1 h, and rehydrating it in PBS while vortexing vigorously for 10 min, yielding a suspension of approx. 0.5 mM. To obtain LUVs approx. 200 nm in diameter, part of the heterogeneous vesicle suspension was passed through 200-nm pores (Whatman, Maidstone, UK) 20 times using a manual mini-extruder (Avanti Polar Lipids) preheated to 45 °C. Vesicles were stored at 4 °C and used in experiments within two days. Preparation of Supported Lipid Bilayers (SLBs) SLBs were prepared by spin-coating, as previously described [15], using the same lipid mixtures as for LUV preparations, but without the biotinylated lipid and at a lower concentration of the fluorescent lipid probe (1 molecule per 10⁴ lipids). Each solution of lipids in chloroform and methanol (1:1 volume ratio, 1 mg lipids/mL) was spin-coated onto a piranha solution-cleaned round 25-mm coverslip (thickness #1.5 by VWR, Lutterworth, UK) for 45 s at 3200 rpm and rehydrated with SLB buffer (10 mM HEPES by Sigma Aldrich, 150 mM NaCl, pH 7.4). Prepared SLBs were kept hydrated in the SLB buffer and used immediately for the measurements. LUV Immobilisation for STED-FCS Measurements For microscopy and FCS experiments, LUVs containing the biotinylated lipid were immobilised in eight-well glass-bottomed chambers by ibidi (Martinsried, Germany; glass thickness #1.5), exploiting a biotin-streptavidin-biotin sandwich linker. This approach prevents vesicle flattening and rupture and does not influence their lipid membrane mobility [16]. For this purpose, the chambers were coated with a mixture of BSA and biotinylated BSA (5:1 molar ratio, 1 mg/mL) for 1 h, washed several times with PBS, incubated with streptavidin (500 ng/mL) for 1 h, and then washed with PBS again multiple times. Thereafter, a ten-fold diluted PBS suspension of LUVs was added to the prepared chambers and incubated for approximately 30 min. Finally, non-adhered LUVs were removed by carefully washing the chambers with PBS prior to measurements. Acquisition of TCSPC Data Experiments were performed at room temperature using a Leica SP8 STED instrument (Mannheim, Germany) equipped with a 100×/1.4 NA oil immersion STED objective. The lipid probe Atto647N-DPPE was excited by the 633-nm line from the white light laser pulsing at 80 MHz (average power 0.2 or 0.6 µW), depleted with a donut-shaped 775-nm pulsed STED laser (average power 55 mW), and recorded with a hybrid detector in the wavelength range of 640-730 nm. Under these conditions, a 3.2-fold improvement in lateral resolution with respect to confocal images was achieved (estimated from the ratio of transit times of freely diffusing fluorescent lipids in SLBs in confocal vs. STED mode) [15,17,18], resulting in an effective observation spot of approx. 75 nm in diameter (full-width-at-half-maximum, FWHM; see supplementary materials for details). Within each sample, vesicles of comparable brightness and size (around 0.1-0.2 µm in diameter for the extruded and 0.3-1 µm for the non-extruded LUVs, respectively; Figure S1 in supplementary materials) were selected in confocal images for further measurements of lipid diffusion. Time-correlated single photon-counting (TCSPC) streams from the sites of selected LUVs were acquired for 10-30 s using HydraHarp 400 electronics and SymPhoTime software (both by PicoQuant, Berlin, Germany).
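The conversion from the measured transit-time ratio to an effective spot size can be illustrated with a short calculation. The snippet below is a minimal sketch assuming a confocal FWHM of roughly 240 nm and free 2D diffusion (for which the mean transit time scales with the squared spot diameter); the transit times are hypothetical, not the calibration values used in this study.

```python
import numpy as np

# Assumed confocal spot size (FWHM); the true value depends on wavelength and objective.
fwhm_confocal_nm = 240.0

# Hypothetical mean transit times of a freely diffusing lipid in an SLB.
tau_confocal_ms = 10.0   # confocal mode
tau_sted_ms = 1.0        # STED mode

# For free 2D diffusion the transit time scales with the spot area (FWHM^2),
# so the resolution improvement factor is the square root of the transit-time ratio.
improvement = np.sqrt(tau_confocal_ms / tau_sted_ms)
fwhm_sted_nm = fwhm_confocal_nm / improvement

print(f"resolution improvement: {improvement:.1f}-fold")
print(f"effective STED spot FWHM: {fwhm_sted_nm:.0f} nm")
```

With these illustrative numbers the calculation reproduces the roughly 3-fold improvement and ~75 nm effective spot quoted above.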
Analysis of STED-FCS Data The acquired TCSPC data were analysed using the FoCuS-point [19] and FoCuS-scan [20] software (Figure S2). First, time-gating was applied to minimise the effects of residual confocal fluorescence and scattered laser light, which would deteriorate the spatial resolution and the quality of the FCS curves. Next, the resulting heavily decaying time traces were corrected for photobleaching by the local averaging method [20,21]. The calculated correlation functions were then fitted with a 2D diffusion model. From the obtained lateral transit times, diffusion coefficients were calculated using the known diameter of the STED-FCS observation spot (75 nm, see above). Time traces with artefacts due to vesicle movement, and FCS curves with a low signal-to-noise ratio or high fit parameter errors, were discarded based on single-datapoint evaluation prior to any comparison to avoid bias. Please see the supplementary materials for a detailed description of the analysis procedure, together with an in-depth discussion of various technical aspects, such as possible effects of size and curvature in acquisition [22,23] and analysis [6]. Results To demonstrate the strength of the STED-FCS approach in the virological context, we aimed at determining the effect of the membrane curvature and lipid composition on the molecular mobility on a virus-like surface, utilising a synthetic lipid vesicle system. LUVs were generated using either POPC only (LUV-Ld), POPC:Chol (LUV-Lo), or DOPC:Chol:SM (LUV-HIV-like), the latter with a GP value and molar ratio of Chol and SM comparable to those found in the real virus [7]. For measurements of lipid mobility in membranes with different curvatures, LUVs of a similar brightness and size were manually selected in images of heterogeneous LUV populations, resulting in two LUV size classes: those with diameters around 0.3-1 µm for non-extruded LUVs and 0.1-0.2 µm for extruded LUVs, respectively (Figure S1 in the supplementary materials). The membrane curvatures of the latter were comparable to those of the HIV-1 viruses (Figure 1a). To analyse the lipid mobility in each of these conditions, LUVs were doped with the fluorescent lipid analogue Atto647N-DPPE and immobilised on BSA-coated glass coverslips (Figure 1b) using a biotin-streptavidin-biotin sandwich linker, which also prevented direct contact of the lipids with the glass surface and thereby preserved the diffusion rates of lipids in free-standing membranes [16]. In this study, we used fluorescence correlation spectroscopy (FCS) to investigate lipid diffusion. In FCS, diffusion coefficients of fluorescently labelled molecules are determined by analysing fluctuations in the fluorescence signal arising as those molecules diffuse in and out of the microscope's observation spot [24,25]. To realise such measurements on vesicular structures smaller than 250 nm (i.e., smaller than the observation spot of a conventional confocal microscope), we employed FCS on a super-resolution STED microscope generating observation spots <100 nm in diameter (STED-FCS) [11,12]. In our current STED-FCS measurements, the STED microscope was tuned to yield an effective observation spot diameter of around 75 nm (see Materials and Methods), thus well below the size of the smallest measured vesicle.
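To make the fitting step concrete, the sketch below fits a correlation curve with a generic one-component 2D diffusion model and converts the transit time into a diffusion coefficient using the 75 nm spot size. It is an illustrative reimplementation in Python/SciPy, not the FoCuS-point code itself; the conversion factor assumes a Gaussian observation spot, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def g2d(tau_ms, n, tau_d_ms, offset):
    """One-component 2D diffusion model: G(tau) = 1/N * 1/(1 + tau/tau_D) + offset."""
    return 1.0 / n / (1.0 + tau_ms / tau_d_ms) + offset

def diffusion_coefficient_um2_s(tau_d_ms, fwhm_nm=75.0):
    """D = w0^2 / (4 tau_D), with w0 = FWHM / sqrt(2 ln 2) for an assumed Gaussian spot."""
    w0_um = fwhm_nm / np.sqrt(2.0 * np.log(2.0)) * 1e-3   # 1/e^2 radius in µm
    return w0_um ** 2 / (4.0 * tau_d_ms * 1e-3)           # µm^2/s

# Synthetic stand-in for a gated, bleaching-corrected autocorrelation curve (lag times in ms).
lags_ms = np.logspace(-2, 3, 200)
rng = np.random.default_rng(1)
curve = g2d(lags_ms, n=2.0, tau_d_ms=1.5, offset=0.0) + rng.normal(0, 0.005, lags_ms.size)

popt, _ = curve_fit(g2d, lags_ms, curve, p0=[1.0, 1.0, 0.0])
tau_d_fit = popt[1]
print(f"transit time: {tau_d_fit:.2f} ms -> D = {diffusion_coefficient_um2_s(tau_d_fit):.2f} µm²/s")
```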
Due to the low copy number of Env proteins per individual virus, our previous study of Env mobility on the HIV-1 surface utilised scanning STED-FCS (sSTED-FCS) [6], which minimized photobleaching and enabled the accurate recovery of diffusion coefficients for slowly moving molecules only (such as Env or other proteins in the viral membrane). The number of fluorescent lipid analogues per individual LUV was much higher for our current measurements, thus making photobleaching a less critical issue. Consequently, we used point STED-FCS in this case (with a fixed instead of moving beam during acquisition), offering the high temporal resolution needed to follow the 1000-fold faster diffusion of lipids on the LUV surface compared to protein diffusion in viral membranes. Photobleaching still had to be minimized by using a very low excitation power (10-25-fold lower compared to the sSTED-FCS study [6]). Lipid mobility data was acquired for individual fluorescent vesicles of diameters of 100-200 nm (extruded) or 0.5-1 µm (non-extruded) using point STED-FCS in the time-correlated single photon-counting mode (TCSPC). TCSPC mode allowed us to remove non-depleted confocal contribution and residual laser scattering by fluorescence lifetime-based filtering, thus increasing the spatial resolution and signal-to-noise ratio of the acquired signal [26,27]. In addition, photobleaching correction was also applied using a local-averaging method ( Figures S2 and S3) [20,21]. The resulting autocorrelation curves (Figure 1c) were fitted with a generic two-dimensional (2D) diffusion model to obtain the average transit times of fluorescent lipids through the sub-diffraction-sized observation spot and to derive the diffusion coefficient for each sample. Firstly, we compared LUV lipid mobility with the behaviour of the same lipid mixes in supported lipid bilayers (SLB, a lipid bilayer spin-coated onto the microscope cover glass)-a standard model for the measurements of lipid mobility (Figure 2). The results showed that in all LUV formulations, the recorded lipid diffusion coefficients were consistently higher (~2-3 fold) than in the corresponding SLBs with the same lipid composition, as was reported previously [28]. This reduction in lipid mobility observed in supported SLBs compared to free-standing vesicular membranes is caused by Van der Waals interactions of lipid headgroups with the glass surface [29] and not by the incomplete modelling of the molecular diffusion on spherical lipid surfaces of LUVs by a simple 2D diffusion equation (discussed in more detail in the supplementary materials) [6]. SLBs thus do not represent the true zero-curvature control samples in terms of absolute values of diffusion coefficients. Nevertheless, the extracted diffusion coefficients are in good agreement with the values from other independent studies on SLBs and giant unilamellar vesicles (GUVs) [30,31]. Moreover, consistent relative differences between LUV and SLB samples of different lipid compositions indicated that the STED-FCS approach also reliably estimated diffusion coefficients for samples as challenging as LUVs. To further confirm the importance of increased lipid packing, but not the curvature of the vesicle membranes in the modulation of lipid envelope mobility, we measured the emission spectra of a polarity-sensitive dye C-Laurdan [32] in the membranes of all tested LUV compositions and sizes (Figure 3a). The extracted GP values for POPC vesicles were similar to values reported previously [5]. 
Moreover, extruded LUVs consistently showed red-shifted spectra and thus lower GP values (~0.07 unit difference) than the corresponding larger unextruded LUVs, indicating, on average, less dense lipid packing in smaller vesicles. However, the differences in GP values were much larger for different lipid compositions than for different sizes within each composition. In fact, increasing GP values (indicative of denser lipid packing) correlated well with the decrease in LUV lipid diffusion coefficients (Figure 3b). These results suggest the lipid composition of HIV-1 membranes, rather than their curvature, as one of the major factors responsible for the reduction of mobility of the molecules on the HIV-1 surface. Discussion Little is known about the dynamic characteristics of virus surfaces, which is, to a large extent, due to the fact that experimental techniques capable of probing the properties of molecular diffusion with a sufficient spatiotemporal resolution have only recently emerged. Similarly to previous work [6], we herein exploited the combination of the high temporal resolution of FCS with the sub-diffraction spatial scale of the observation spot provided by STED microscopy (STED-FCS). Even if STED-FCS has become a well-established method to study the diffusion properties of lipids and proteins in model and cellular membranes, its application to tiny structures such as virus(-like) particles still presents a considerable experimental challenge, mainly due to low signal levels and rapid photobleaching. For the slow diffusion of proteins, we have previously used scanning STED-FCS [6], which minimised photobleaching at sufficient temporal sampling. Herein, we complemented the toolbox by establishing an efficient workflow to measure the faster diffusion of lipids in virus-like membrane particles by single-point STED-FCS. This study validated the pipeline on a simpler model system and refined the findings of the previous work that described the low mobility nature of proteins within the HIV-1 lipid envelope [6].
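For readers unfamiliar with the generalised polarisation (GP) readout used in the Results above, the value is computed from the relative intensities of the ordered- and disordered-phase emission bands of (C-)Laurdan. The snippet below is a minimal sketch; the integration windows (here 400-460 nm and 470-530 nm) are common choices for C-Laurdan but may differ from the exact bands used in this study, and the spectrum is synthetic.

```python
import numpy as np

def generalized_polarization(wavelengths_nm, intensities,
                             ordered_band=(400, 460), disordered_band=(470, 530)):
    """GP = (I_ordered - I_disordered) / (I_ordered + I_disordered)."""
    wavelengths_nm = np.asarray(wavelengths_nm)
    intensities = np.asarray(intensities, dtype=float)

    def band_intensity(lo, hi):
        mask = (wavelengths_nm >= lo) & (wavelengths_nm <= hi)
        return intensities[mask].sum()

    i_o = band_intensity(*ordered_band)
    i_d = band_intensity(*disordered_band)
    return (i_o - i_d) / (i_o + i_d)

# Toy spectrum: a red-shifted emission gives a lower (more fluid-like) GP value.
wl = np.arange(380, 561)
spectrum = np.exp(-0.5 * ((wl - 490) / 30.0) ** 2)   # hypothetical, disorder-dominated spectrum
print(f"GP = {generalized_polarization(wl, spectrum):+.2f}")
```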
By measuring diffusion coefficients of lipids and the membrane polarity of LUVs of different sizes and compositions, we herein established that the tightly packed nature of the HIV-1 lipid envelope, rather than its high curvature, plays a major role in the creation of a very low mobility environment on the virus particle surface. Considering that the membrane curvature of these particles is, in comparison to the membrane thickness of approx. 4 nm, still relatively mild (see insets in Figure 1a), stronger effects of the curvature are expected, and have indeed been observed, at even smaller sizes of membrane structures, typically below 50 nm [33][34][35], but were nearly absent in less curved membranes [35][36][37]. Though some other methods, such as micropipette aspiration, might be better suited to study diffusion properties in highly curved model membranes, they can induce curvature-driven lipid and protein segregation [38], and, most importantly, cannot be directly applied to native virus-like small membrane structures. To this end, we herein demonstrated that the careful application of STED-FCS can yield meaningful information in a virologically relevant setting. However, despite observing a four-fold reduction in lipid mobility between the most fluid and most rigid LUV compositions, it is clear that this factor is only part of the reason for the even larger, roughly 700-fold lower molecular mobility of proteins previously observed in bona fide HIV-1 particles (exemplified by a dashed line and a star in Figures 2 and 3b, respectively; unfortunately, we were unable to measure the diffusion of fluorescent lipid analogues in HIV-1 particles, also due to their low efficiency of incorporation). Even accounting for the fact that proteins diffuse approximately 10-fold slower in cell membranes than lipids [6,11], this difference alone is insufficient to explain the extremely low mobility of Env observed in the HIV-1 membrane (D ≈ 0.002 µm²/s). This discrepancy may be explained by other features of HIV-1 particles that are not present in LUVs, such as tightly packed internal virus structures [39], lipid-MA protein interactions, and the incorporation of Env, as well as a large variety of cellular proteins, on the virus surface during the budding process [40]. Such crowded environments are known to slow down the diffusion rates of both proteins [41][42][43] and lipids [44], and specific membrane-deforming lipid-protein interactions can further enhance this effect [45]. The importance of the above factors for HIV-1 surface lipid mobility can be investigated in future studies employing the STED-FCS approach described here while utilising more sophisticated vesicle models or bona fide HIV-1 particles. By investigating lipid mobility in small-sized synthetic vesicles, our current study has provided additional insights into the dynamic properties of the HIV-1 surface. It further highlighted the importance of a rigid and ordered lipid composition as a contributing factor to the dynamic behaviour of molecules on the HIV-1 virus surface [6], which, in turn, was previously shown to underpin the ability of HIV-1 to successfully fuse with the target cell [10]. Future studies of this still relatively unexplored aspect of the HIV-1 replication cycle may provide a novel therapeutic approach that could potentially prevent virus entry by subtle alterations in the lipid packing of the HIV-1 envelope. Author Contributions: J.C., I.U., and C.E. conceived the study and designed the experiments. I.U., J.B., and D.S.
performed the experiments. I.U. analysed the data. D.W. provided technical support for the analysis of the data. I.U., J.C., and C.E. wrote the paper.
5,745.6
2018-08-01T00:00:00.000
[ "Biology", "Chemistry" ]
Detemplated and Pillared 2-Dimensional Zeolite ZSM-55 with Ferrierite Layer Topology as a Carrier for Drugs The present studies were conducted to show the potential of 2D zeolites as effective and non-toxic carriers of drugs. Layered zeolites exhibit adjustable interlayer porosity which can be exploited for controlled drug delivery, allowing detailed investigation of the drug release because the structure of the carrier is known exactly. This study was conducted with the model drugs ciprofloxacin and piracetam, and ZSM-55 with ca. 1 nm thick layers, in detemplated and pillared forms. The release profiles differed from the commercial, crystalline forms of the drugs—the release rate increased for ciprofloxacin and decreased for piracetam. To understand the dissolution mechanisms, the release data were fitted to the Korsmeyer-Peppas equation, showing Fickian (for the pillared) and anomalous (for the detemplated sample) transport. FT-IR studies showed that a strong carrier-drug interaction may be responsible for the modified, slowed down release of piracetam, while the better solubility and faster release of ciprofloxacin was attributed to formation of the protonated form, resulting in a weaker interaction with the zeolite than in the pure crystalline form. Two independent tests on L929 mouse fibroblasts (ToxiLight and PrestoBlue) showed that ZSM-55, in moderate concentrations, may be safely used as a carrier of drug molecules, having no negative effect on cell viability or proliferation rate. Introduction One of the principal challenges of modern pharmaceutical technology is drug formulation aimed not only at the convenience of use, but also at providing the optimal concentration of a drug at the site of action [1][2][3][4][5][6][7]. In an ideal situation, the concentration should rapidly increase right after the application/ingestion and maintain a constant level over the time period required to achieve the desired therapeutic effect; afterwards the drug should be eliminated from the body [8,9]. Inert supports, carriers, are used to deliver therapeutic substances and then release them at the right moment and over a period of time sufficient to achieve the therapeutic goal. There is an ongoing effort in the area of materials to generate new carrier solids with high drug capacity and selectivity depending on the properties of the molecules being adsorbed, i.e., guests [1,3,10,11]. The release of therapeutic molecules from porous matrices may occur by different mechanisms. The extreme cases are, at one end, slow degradation of the drug-carrier composite initiated by surface erosion or, at the other end, simple diffusion of drug molecules out of the host matrix depending on its porous structure [12]. The most common mechanism combines both cases [13], and consequently the process of drug release is difficult to characterize. Considering this, we wanted to examine modes of interaction between the carrier and drug molecules, since the nature of their interplay influences the subsequent process of drug release. The model carrier materials chosen for this study are 2-dimensional (2D) zeolites, a relatively new class of layered materials that is especially suited for this purpose. They consist of layers with thicknesses not greater than a few nanometers with internal structures (topologies) of zeolites. The typical forms of 2D zeolites are analogous to other classes of layered materials, namely stacks of layers equally spaced with separations depending on intercalated guest compounds.
These structures have been visualized by TEM as illustrated by many examples in the literature [14][15][16]. 2D zeolites possess interlayer spaces large enough to accept bulky guest molecules and well-defined structures and surfaces allowing examination of the drugcarrier interactions [17][18][19]. The reports of 2D zeolites investigations as carriers for therapeutic substances are practically non-existent in the literature, and to the best of our knowledge, there have been no articles dealing directly with this problem so far. The carrier chosen for the testing of storage and release of drugs, is designated ZSM-55. It is a borosilicate composed of layers with FER (zeolite ferrierite) topology with embedded choline cations serving as a template, organic structure directing agent, during the synthesis. ZSM-55 condenses upon calcination to produce the 3D zeolite framework designated CDO with unidimensional 8-ring pores, thus is formally a layered precursor to zeolite CDO [20]. Its interlayer space and pore structure can be modified by detemplation (interlayer template extraction with an acid), swelling and then pillaring [21,22]. According to the literature one of the most important factors influencing the drugcarrier interactions is the surface topography of the carrier [23]. There were two main reasons for choosing ZSM-55 zeolite as a carrier. Firstly, the ferrierite layers with thickness 0.9 nm are nonporous, thus the interaction with a drug can only take place in the interlayer spaces, which simplifies interpretation of the adsorption and release processes. Secondly, we wanted to compare different types of supports based on the same layer: detemplated and pillared. In the case of detemplated material, the interlayer space may be flexible as the ferrierite layers are linked by hydrogen bonds which have to be broken to accommodate drug molecules. Upon intercalation, the layers are separated with the interlayer distance depending on the size and amount of introduced molecules (drug). For the pillared form, the interlayer distance is fixed and should not change upon drug introduction or release, as long as the zeolite matrix is not damaged, which is the most likely scenario -the pillaring is complete upon calcination at over 500 °C leading to a very robust porous material. Two drugs of low bioavailability but differing in their solubility in water and body fluids were chosen: ciprofloxacin, representing poorly soluble substance and piracetam, representing highly soluble drug. In both cases, formation of composites based on a zeolite support would allow the drug to gradually pass into solution, increasing its bioavailability. Ciprofloxacin ( Figure 1a) is a representative of the fluoroquinolones, which are broad-spectrum antibiotics for both Gram-positive and Gram-negative bacteria, hence used to combat many types of infections and inflammation. Antibiotics of this type are used in the treatment of lower respiratory tract diseases and skin inflammation [24]. Ciprofloxacin (molecular mass 331.35 g/mol) is practically insoluble in water (<1 mg/mL) [25,26] but its solubility is improved by formation of salts (citrates, tartrates, malonates, succinates), since ionic compounds are more easily solvated (hydrated) compared to neutral forms [25][26][27]. Alternatively, as proposed here, ciprofloxacin solubility may be increased by formation of alternative interactions between a high-surface carrier and ciprofloxacin molecules. 
Ciprofloxacin is available in forms of oral tablets, ophthalmic (eye drops), otic (ear drops) or oral and intravenous suspensions. The oral or intravenous routes of administration, although very simple, have serious limitations, the most important being the first-pass effect and numerous side effects (vomiting, headache, dizziness, hallucinations, convulsions) which preclude administration to a substantial number of patients. Ciprofloxacin is also a well-established broad-spectrum antibiotic indicated for the treatment of exacerbations of respiratory tract infection, especially in Chronic Obstructive Pulmonary Disease (COPD), cystic fibrosis and bronchiectasis [28]. Unfortunately, the classic ways of administration, due to inter-individual variability in ciprofloxacin pharmacokinetics, lead to inadequate drug levels and suboptimal pharmacodynamic exposure, which usually is prevented by an increase of the daily dose, escalating the frequency and severity of the side effects. Inhalable application allows for individualization of the dosing to optimize efficacy and to prevent development of resistance [29]. Piracetam (2-(2-oxopyrrolidinyl) acetamide, molecular mass 142.16 g/mol, Figure 1b) is a nootropic drug from the pyrrolidone group [10], with neuroprotective and anticonvulsant properties. Its efficacy is documented in cognitive disorders and dementia [30] and it is also used to aid with learning difficulties. It exhibits linear and time-dependent pharmacokinetic properties with low inter-individual variability over a wide dose range and is absorbed quickly and extensively after oral administration [31,32]. However, approximately 80-100% of the total piracetam dose is excreted in the urine, 90% of which is unchanged (not metabolized) [33]. Piracetam is a model molecule for drugs with relatively small sizes, a fast release profile and relatively good solubility (72 mg/mL) [34]. The present studies were conducted to show the potential of 2D zeolites as effective and non-toxic carriers of drugs, both of hydrophilic and hydrophobic character. Results and Discussion Piracetam and ciprofloxacin solutions were contacted with two forms of ZSM-55: the acid-treated detemplated material with layers connected by weak hydrogen bonds and the expanded pillared form with interlayer pores generated by silica props (pillars) introduced by the standard pillaring with TEOS (tetraethylorthosilicate) and calcination [14]. The scheme of ZSM-55 modification is presented in Figure 2. The extent and quality of the ZSM-55 modifications were evaluated by XRD and nitrogen adsorption (Figure 3 and Table 1). ZSM-55 was detemplated by reaction with 1 M HCl in methanol at 50 °C overnight. The obtained sample had a very low specific surface area (71 m²/g measured after calcination) and sorption capacity (0.05 cm³/g), as expected for multilamellar stacking of non-porous (ferrierite) layers. The low-angle line position in XRD varied depending on drying conditions, indicating variable interlayer space with the degree of hydration. The 002 peak position was variable between 8.3 and 10.5° 2θ (Cu Kα radiation throughout), d-spacing 1.06 to 0.84 nm, and could be shifted to d-spacing values lower than in the complete framework FER/CDO structures (d = 0.91 nm). The pillared material showed interlayer distances expanded to ca. 3.8 nm, with a high adsorption capacity of 0.69 cm³/g and specific surface area of 1194 m²/g. In the XRD patterns for the pillared and detemplated materials, the maxima corresponding to intralayer reflections (hk0) are clearly visible, confirming the preservation of the internal structure of the layers. These reflections are identified as: (020) at 12.6°, (011) at 13.5°, (031) at 22.5°, (040) at 25.4° 2θ. For the pillared form, they have reduced intensity and are broadened, which may be interpreted as due to the presence of additional silica (pillars) and reduction of the interlayer stacking order in the mesoscale [14]. Table 1. Basic textural parameters for detemplated and pillared ZSM-55 (columns: S_BET, m²/g; S_ext, m²/g; V_T-plot, cm³/g; V_tot, cm³/g). External surface area (S_ext) and pore volume (V_T-plot) were calculated by the t-plot method, combining micropores and small mesopores, while total volume (V_tot) was calculated at p/p0 = 0.95. The observed efficiency of drug intercalation by the applied straightforward solution-solid interaction depended more on the type of drug than on the layered zeolite structure, whether detemplated or pillared. On the other hand, the latter influenced the apparent mechanism of release. Piracetam, with a small size and high solubility, was easily incorporated in substantial quantity in both detemplated and pillared ZSM-55 (loading: 21 and 26%, respectively). The sorption of bulky, poorly soluble ciprofloxacin was significantly smaller, 4 and 8% for the detemplated and pillared ZSM-55, respectively. Ciprofloxacin was introduced in acidic conditions, which should make it cationic and favorable to interact with negatively charged zeolite layers [35], but this nevertheless did not improve its loading. It is, however, important that for both types of composites the interactions between the drug and the carrier were significant, as determined by FT-IR, and they influenced the molecular structure of the drug. XRDs of the solids isolated after reactions of piracetam and ciprofloxacin with the detemplated sample did not show increased d-spacing, suggesting little or no intercalation between layers (Figure 3). The presumed drug molecule location is on the surface of the outer layers. Loading of the drugs inside the pillared ZSM-55 also did not change the XRD pattern, but since the structure is rigid, drug molecules just fill the available spaces (as indicated by the increased amount in the sample). In the XRD patterns for the composites no reflections characteristic for piracetam or ciprofloxacin are observed, indicating that no drug crystallites with sizes exceeding the standard XRD detection limit, i.e., 2-2.5 nm [36][37][38][39], are present, even with high loading, i.e., 26% by weight of piracetam. The release of both drugs to PBS solutions in Franz cells was tested for both detemplated and pillared ZSM-55 samples (Figure 4). The release profiles were different from those of the commercial, crystalline forms. In agreement with the literature reports [25,26,34,40], crystalline piracetam dissolved in the PBS solution immediately, while ciprofloxacin was released slowly, with full solubilization achieved after 50 h. The piracetam release from both forms of ZSM-55, detemplated and pillared, was slowed down. The half-time of release was achieved after 1.2 and 0.5 h, respectively (Table 2). Table 2 (caption fragment): … (Equation (3)) and coefficient of determination R² for the double-logarithmic fitting presented in Figure 4. Drug content was determined by elemental analysis and UV/Vis experiments of total release of the drug from the composites (see Section 3 for details). Loading capacity (LC) and loading efficiency (LE%) were calculated using the formulas presented in Section 3 (Equations (1) and (2)). For both forms of ZSM-55 the release of piracetam was complete and no drug was irreversibly trapped inside the support. The situation was opposite for ciprofloxacin: only about one-third of the drug was released from the pillared sample; the detemplated sample was not tested due to the very low loading of ciprofloxacin. Notably, the ciprofloxacin release rate was accelerated compared to the pure drug.
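Since the half-times quoted above come from cumulative release curves sampled at discrete time points, a simple linear interpolation between the two measurements bracketing 50% release is enough to estimate t50. The snippet below is a generic sketch with made-up data points, not the values from Figure 4.

```python
import numpy as np

def release_half_time(times_h, cumulative_pct):
    """Interpolate the time at which cumulative release crosses 50% of the total drug."""
    times_h = np.asarray(times_h, dtype=float)
    cumulative_pct = np.asarray(cumulative_pct, dtype=float)
    return float(np.interp(50.0, cumulative_pct, times_h))

# Hypothetical cumulative release profile (% of total drug vs. time in hours).
t = [0.25, 0.5, 1.0, 1.5, 2.0, 4.0, 6.0]
released = [22, 35, 47, 55, 63, 80, 90]
print(f"t50 ≈ {release_half_time(t, released):.2f} h")
```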
To understand the dissolution mechanisms from the composites, the release data were fitted using the empirical equation proposed by Korsmeyer and Peppas [41,42]. It contains the following two parameters: K, a constant incorporating structural and geometric characteristics of the system, and n, the release exponent, indicative of the mechanism of drug release. The release of piracetam may be interpreted in terms of Fickian (n = 0.50) and anomalous transport (0.50 < n < 1). The latter case is a superposition of two mechanisms, diffusion and degradation, including not only erosion of the matrix but also its relaxation. For piracetam in the pillared sample, the n value is equal to 0.43, suggesting almost pure diffusion; the pillared zeolite matrix is rigid and does not change (degrade) in the PBS solution, which also holds true for the ciprofloxacin release. Piracetam (and probably also ciprofloxacin) is loaded mainly into the pores formed by silica pillars between layers. For ciprofloxacin the n value is very low (n = 0.2), characteristic for sponge-like materials for which the penetration of solvent is easy [42], and may be due to the low concentration of the trapped ciprofloxacin leaving more space for the solvent. In the case of the detemplated sample with piracetam, the n value is much higher, 0.53, suggesting anomalous transport, combining diffusion with matrix flexibility. In the case of this material, piracetam was not intercalated between layers but accommodated in the (meso)pores formed between packs of layers. Such a structural arrangement may undergo spatial change when drug molecules are released, which is reflected in the n value, suggesting anomalous transport. The K value is also different for the two forms of ZSM-55, equal to 46 and 66 for piracetam release from detemplated and pillared ZSM-55, respectively. Since K depends on structural and geometric characteristics of the system, it also reflects the role of the different layer arrangement in the mechanism of piracetam release. The rate of drug release should also depend on the interaction between intercalated molecules and the framework of the matrix. FT-IR was used to determine such interactions. Pure zeolites and the corresponding composites were activated in vacuum at 110 °C, i.e., at a temperature high enough to remove most of the adsorbed water (small intensity of the 1630 cm⁻¹ band of H-O-H bending vibrations, Figures 5 and 6) but not the drug. Desorption of water made it possible to observe direct interactions between the zeolite and the adsorbed drugs. Two separate vibrations, which in crystalline piracetam are attributed to C=O stretching (1695 cm⁻¹) and N-H bending (1650 cm⁻¹) vibrations, appear in the spectra of the composites as a single, wide band at 1675 cm⁻¹ (Figure 5). This is consistent with the formation of hydrogen bonds resulting in a red-shift of the former and a blue-shift of the latter. It has been proven that upon hydrogen bond formation (both at the C=O and N-H functionalities) the amide I band (predominantly C=O stretching vibrations) is red-shifted, while the amide II band (predominantly N-H bending vibrations) is blue-shifted [43]; the two bands may merge together, as in our case. This behavior suggests a strong interaction between the carrier and the drug and may be responsible for the modified, slowed down release of piracetam from the ZSM-55 matrix.
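As a concrete illustration of how n and K are obtained, the Korsmeyer-Peppas power law M_i/M_∞ = K·t^n becomes linear in double-logarithmic coordinates, so a straight-line fit of log(release fraction) against log(time) over the early part of the curve yields n as the slope and K from the intercept. The sketch below uses hypothetical data points and NumPy only; it is not the authors' Origin workflow.

```python
import numpy as np

# Hypothetical early-time release data: time (h) and fraction released (M_i / M_inf).
t_h = np.array([0.25, 0.5, 1.0, 1.5, 2.0])
frac = np.array([0.28, 0.38, 0.50, 0.58, 0.64])

# Korsmeyer-Peppas: frac = K * t**n  ->  log(frac) = log(K) + n * log(t)
slope, intercept = np.polyfit(np.log10(t_h), np.log10(frac), deg=1)
n_exp = slope
K = 10.0 ** intercept

print(f"release exponent n ≈ {n_exp:.2f}")
print(f"K ≈ {K:.2f} (fraction released at t = 1 h, for these made-up units)")
```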
The carboxylic acid group located in the quinoline ring of ciprofloxacin can be protonated reversibly. The band at 1715 cm⁻¹ (Figure 6) is characteristic of the protonated, H-bonded form, while the band at 1590 cm⁻¹ is due to the deprotonated -COO⁻ group [44]. In the case of crystalline ciprofloxacin the molecules form dimers, the -COOH group is deprotonated and the proton is transferred to the -NH group of the adjacent ciprofloxacin molecule [45]. Thus, in the spectrum of the crystalline drug the band at 1590 cm⁻¹ is present, while the one at 1715 cm⁻¹ is absent. After incorporation of the drug into pillared ZSM-55, the carboxyl group is protonated, probably due to interaction with the weakly acidic Si-OH groups present at the layer surface. Ciprofloxacin is therefore hydrogen-bonded to the zeolite matrix, and the strength of this interaction is lower than in the crystalline form, where proton transfer takes place. The better solubility and faster release of ciprofloxacin may therefore be caused by (i) amorphization of the drug, (ii) formation of its protonated form, and (iii) weaker interaction of the ciprofloxacin molecule with the zeolite than with other drug molecules in the crystalline form. A viable drug carrier, besides being capable of encapsulating drug molecules and unloading the cargo in the patient's body, should be non-toxic and biodegradable or not metabolized [46,47]. It is thus imperative to test the toxicity of potential carriers and their influence on cell morphology, proliferation and viability. To determine the influence of ZSM-55 on the morphology of the model cells, mouse fibroblasts, microscopic images were taken using an inverted microscope (Figure 7). For these tests the calcined, parent ZSM-55 zeolite was chosen. Previously fixed cells were stained with hematoxylin and eosin. With incubation time increasing from 24 to 72 h, the number of cells increased, retaining the elongated shape characteristic of mouse fibroblasts, which indicates the lack of a toxic effect of the tested carrier. Microscopic observations are only qualitative, while quantitative data are supplied by the ToxiLight™ test (toxicity effect) and the PrestoBlue test (providing proliferation rate and viability of cells). After 24 h of incubation the relative viability of the cells depends almost linearly on the ZSM-55 concentration (Figure 8, upper panes). For the highest ZSM-55 concentration (1.20 mg/mL) the cell viability was reduced to ca. 65%. After extension of the incubation time to 72 h, the viability decreased to ca. 45% for the highest zeolite concentration, compared to untreated cells. For the control, fluorescence intensity (a measure of the concentration of living cells) measured after 24 and 72 h of incubation increased by a factor of two, while for the highest ZSM-55 concentration the fluorescence intensity increased by a factor of 1.4. This means that even at the highest concentration, ZSM-55 only inhibited cell viability but did not cause complete inhibition of proliferation.
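The viability and proliferation figures quoted above follow from straightforward normalisations of the raw plate-reader signals. The sketch below illustrates these two calculations for a PrestoBlue-style readout; the numbers are illustrative, not the measured values, and the exact normalisation prescribed by the assay kits may differ.

```python
# PrestoBlue-style readout: fluorescence is proportional to the number of metabolically active cells.
fluorescence = {
    ("control", 24): 1000.0, ("control", 72): 2000.0,                # hypothetical values
    ("ZSM-55 1.2 mg/mL", 24): 650.0, ("ZSM-55 1.2 mg/mL", 72): 910.0,
}

def relative_viability(sample, hours):
    """Viability of treated cells as % of the untreated control at the same time point."""
    return 100.0 * fluorescence[(sample, hours)] / fluorescence[("control", hours)]

def proliferation_factor(sample):
    """Signal increase between 24 h and 72 h; a value > 1 means the cells kept dividing."""
    return fluorescence[(sample, 72)] / fluorescence[(sample, 24)]

for s in ("control", "ZSM-55 1.2 mg/mL"):
    print(s,
          f"viability(24 h) = {relative_viability(s, 24):.0f}%",
          f"proliferation 24->72 h = x{proliferation_factor(s):.1f}")
```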
Analysis of the data obtained using the ToxiLight test (Figure 8, lower panes) confirmed the results obtained with the PrestoBlue test. After 24 h, for the highest applied concentration of the tested material (1.2 mg/mL), a reduced number of cells and an increased level of cytotoxicity were observed (an increase by ca. 30% between the control and the highest ZSM-55 concentration). After 72 h of incubation, the toxicity of the ZSM-55 sample increased only slightly compared to the level reached after 24 h (41 vs. 49% for the highest ZSM-55 concentration); however, the number of cells after 72 h was higher than after 24 h, which means that even a high material concentration, equal to 1.2 mg/mL, did not stop proliferation, only slowed it down. For the control, luminescence (a measure of the concentration of living cells) measured after 24 and 72 h of incubation increased by a factor of two, while for the highest ZSM-55 concentration it increased by a factor of 1.7. The results obtained in the two independent tests show that ZSM-55, in moderate concentrations, may be safely used as a carrier of drug molecules, having no negative effect on cell viability or proliferation rate. Sample Preparation ZSM-55 was prepared and transformed into detemplated, swollen and pillared forms as described earlier [14]. In a typical preparation, the mixture of 1.36 g boric acid in 120 g of water, 10 g of 50% NaOH, 40 g choline chloride and 65 g of colloidal silica Ludox LS30 was heated in a Teflon-lined autoclave at 150 °C for 90 h with rotation. For template extraction (detemplation), 5 g of ZSM-55 was contacted with 100-150 mL of the 1:10 mixture of concentrated HCl and methanol at 50 °C, overnight. To obtain the pillared product, a detemplated sample was swollen at room temperature with a 50/50 mixture of solutions of 25% hexadecyltrimethylammonium chloride and hydroxide (solid to solution ratio 1:20 w/w). After swelling, the solid was centrifuged, washed with water and air-dried at 65 °C. Pillaring was carried out by mixing the swollen material with TEOS (1:100 w/w), stirring overnight at room temperature and isolating the solid by centrifugation and air-drying. Calcinations were carried out for 6 h at 550 °C with a heating ramp of 2 °C/min. All chemicals were obtained from Sigma Aldrich Poland (Wrocław, Poland). Drugs Intercalation Ciprofloxacin intercalation: 0.5 g of the ZSM-55 derivative, detemplated or pillared, was stirred with 50 cm³ of 0.1 M HCl containing 0.5 g of ciprofloxacin on a magnetic stirrer for 24 h at room temperature, washed with a small amount of 0.1 M HCl and air-dried at room temperature.
Piracetam intercalation: 0.5 g of the ZSM-55 derivative, detemplated or pillared, was mixed with 10 cm³ of a 10% aqueous solution of the drug on a magnetic stirrer for 24 h at room temperature, washed with a small amount of water and air-dried at room temperature. Loading capacity (LC%) and loading efficiency (LE%) were calculated on the basis of the following formulas: LC = (total entrapped drug mass / zeolite mass) · 100% (1) and LE = [(total drug added − free drug in supernatant) / total drug added] · 100% (2). Drugs Release The drug release study was carried out in Franz cells [15]. The donor part of the cells (containing the composite) was separated from the acceptor part (solution) by a 0.8 µm cellulose acetate filter (Sartorius, Göttingen, Germany), simulating a contact layer. Approximately 5 mg of the test material were placed on the filter, and the acceptor part was filled with 5 mL of phosphate buffered saline (PBS) at pH 7.4 [16]. The experiment was carried out at a constant temperature of 37 °C (simulating human body temperature) with continuous mixing at 200 rpm. Samples were taken from a cell using a 0.2 mL syringe at various time intervals (0.25; 0.5; 1; 1.5; 2; 4; 6; 8; 10; 24; 48; 72 h). To maintain constant volume, the system was supplemented each time with the same amount (0.2 mL) of fresh PBS solution. Three tests were carried out for each composite in order to obtain reliable and reproducible results. The control was performed by releasing the pristine crystalline drug in the amount corresponding to the drug content in the tested materials. The total content of piracetam and ciprofloxacin in the tested materials was determined by mixing the composites (5 mg) in a PBS solution (20 mL) for 72 h at 37 °C. The resulting solution was then centrifuged to separate the solid from the solution. A 2 mL sample was taken, diluted, and the concentration of the drug released into the solution was measured using UV/Vis spectroscopy (Lambda spectrometer, Perkin-Elmer, Waltham, MA, USA). The percentage of the drug obtained that way was considered as the baseline (100%) of the content of the drug in a given carrier. Release curves were fitted to the experimental points with Origin software (Origin 2018, Northampton, MA, USA) using the Korsmeyer-Peppas law [48] to determine the mechanism of molecule release, estimating the exponent n in Equation (3): M_i/M_∞ = K·t^n (3), where M_∞ is the amount of drug at the equilibrium state, M_i is the amount of drug released over time t, K is related to the release velocity constant, and n is the exponent of release (related to the drug release mechanism) as a function of time t. The exponent was determined from the portion of the release curve up to the deflection point in order to provide a sufficient number of measured points, although it exceeded the cutoff at 60% of the released drug (by weight) recommended for the application of this law. The enhancement in the number of points did not affect the value of the exponent n [49], which is best visualized by the linear course of the release curves in double logarithmic scale (Figure 4). Physicochemical Characterization The structure and crystallinity of the obtained samples were determined by X-ray powder diffraction (XRD) using a MiniFlex diffractometer (Rigaku, The Woodlands, TX, USA) in reflection mode, using CuKα radiation (λ = 0.154 nm). The XRD patterns were collected with steps of 0.02°.
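Returning to the Franz-cell sampling scheme described above (0.2 mL aliquots withdrawn from a 5 mL acceptor compartment and replaced with fresh PBS), the raw concentrations have to be corrected for the drug removed in earlier aliquots before cumulative release curves such as those in Figure 4 can be drawn. The snippet below is a generic sketch of that bookkeeping with made-up concentrations; it is not the authors' own calculation sheet and assumes perfect mixing in the acceptor.

```python
import numpy as np

V_ACCEPTOR_ML = 5.0
V_SAMPLE_ML = 0.2

def cumulative_release_ug(concentrations_ug_per_ml):
    """Cumulative released mass, correcting for drug withdrawn with each replaced aliquot."""
    released = []
    removed_so_far = 0.0
    for c in concentrations_ug_per_ml:
        total_now = c * V_ACCEPTOR_ML + removed_so_far   # mass in the cell + mass already withdrawn
        released.append(total_now)
        removed_so_far += c * V_SAMPLE_ML                # this aliquot leaves the cell
    return np.array(released)

# Hypothetical measured concentrations (µg/mL) at successive sampling times.
conc = [2.0, 3.5, 5.0, 6.0, 6.5]
print(cumulative_release_ug(conc))
```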
Nitrogen adsorption-desorption isotherms were determined by the standard method at −196 °C (liquid nitrogen temperature) using an ASAP 2020 (Micromeritics, Norcross, GA, USA) static volumetric apparatus. Before adsorption, the samples (ca. 200 mg) were outgassed at 110 °C overnight using a turbomolecular pump to remove adsorbed water. For FT-IR studies, the samples in the form of a thin layer deposited on silicon wafers (ca. 10 mg) were dehydrated at 110 °C under vacuum in a custom-made IR cell enabling in situ treatment at variable temperatures. IR spectra were recorded on a Tensor 27 spectrometer (Bruker, Ettlingen, Germany) equipped with an MCT detector, working with a spectral resolution of 2 cm⁻¹. Cytotoxicity and Cells Viability Proliferation of cells and the cytotoxic effect of the studied material were determined by the ToxiLight™ assay (Lonza, Greenwood, SC, USA). The test was used to calculate the concentration of adenylate kinase (AK) in the supernatant (representing damaged cells) and lysate (representing intact adherent cells). Viability of cells was examined by the resazurin-based reagent PrestoBlue™ (Invitrogen, Carlsbad, CA, USA). The fluorescent product of the resazurin reaction and the luminescence from luciferase in the presence of AK (ToxiLight™ test) were detected using a POLARstar Omega microplate reader (BMG Labtech, Ortenberg, Germany). Cell morphology was observed under an inverted microscope CKX53 (Olympus, Tokyo, Japan) after eosin/hematoxylin staining. ToxiLight™ tests were performed after 24 and 72 h of incubation of fibroblasts with the tested materials, and the assay procedure was performed according to the producer's protocol.
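For completeness, the specific surface areas quoted earlier (Table 1) derive from the standard BET treatment of the nitrogen adsorption isotherms measured as described above. The sketch below shows the usual linearised BET fit over the low relative-pressure range with made-up isotherm points rather than the measured data; the 0.162 nm² cross-section of N₂ is the conventional value.

```python
import numpy as np

N_A = 6.022e23           # molecules per mole
SIGMA_N2_M2 = 0.162e-18  # cross-sectional area of one adsorbed N2 molecule, m^2
V_MOLAR_STP = 22414.0    # cm^3(STP) per mole of gas

def bet_surface_area(p_rel, v_ads_cm3g):
    """Linearised BET: x/[v(1-x)] = 1/(vm*c) + ((c-1)/(vm*c))*x; returns S_BET in m^2/g."""
    x = np.asarray(p_rel, dtype=float)
    v = np.asarray(v_ads_cm3g, dtype=float)
    y = x / (v * (1.0 - x))
    slope, intercept = np.polyfit(x, y, 1)
    vm = 1.0 / (slope + intercept)               # monolayer capacity, cm^3(STP)/g
    return vm / V_MOLAR_STP * N_A * SIGMA_N2_M2  # m^2/g

# Hypothetical isotherm points in the BET range (p/p0 between ~0.05 and 0.30).
p = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30]
v = [150.0, 175.0, 192.0, 207.0, 221.0, 235.0]   # adsorbed volume, cm^3(STP)/g
print(f"S_BET ≈ {bet_surface_area(p, v):.0f} m^2/g")
```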
8,572.2
2020-07-31T00:00:00.000
[ "Chemistry" ]
Quantum Variational Principle and quantum multiform structure: the case of quadratic Lagrangians A modern notion of integrability is that of multidimensional consistency (MDC), which classically implies the coexistence of (commuting) dynamical flows in several independent variables for one and the same dependent variable. This property holds for both continuous dynamical systems as well as for discrete ones defined in discrete space-time. Possibly the simplest example in the discrete case is that of a linear quadrilateral lattice equation, which can be viewed as a linearised version of the well-known lattice potential Korteweg-de Vries (KdV) equation. In spite of the linearity, the MDC property is non-trivial in terms of the parameters of the system. The Lagrangian aspects of such equations, and their nonlinear analogues, has led to the notion of Lagrangian multiform structures, where the Lagrangians are no longer scalar functions (or volume forms) but genuine forms in a multidimensional space of independent variables. The variational principle involves variations not only with respect to the field variables, but also with respect to the geometry in the space of independent variables. In this paper we consider a quantum analogue of this new variational principle by means of quantum propagators (or equivalently Feynman path integrals). In the case of quadratic Lagrangians these can be evaluated in terms of Gaussian integrals. We study also periodic reductions of the lattice leading to discrete multi-time dynamical commuting mappings, the simplest example of which is the discrete harmonic oscillator, which surprisingly reveals a rich integrable structure behind it. On the basis of this study we propose a new quantum variational principle in terms of multiform path integrals. Introduction Discrete integrable systems [1] have started to play an increasingly important role in deepening the understanding of integrability as a mathematical notion, thereby forging new perspectives in both analysis (e.g. the discovery of difference analogues of the Painlevé equations), geometry (the development of discrete differential geometry, [2]) and algebra (e.g. the development of cluster algebras through the so-called Laurent phenomenon). In physics, at the quantum level, discrete integrable systems appear in connection with random matrix theory and quantum spin models of statistical mechanics, and in aspects of relativistic many-body systems [3], but more directly in approaches to establish integrable quantum field theories on the space-time lattice [4]. Integrable systems are important not only because they can be treated by exact and rigorous methods, but also because they appear to be universal: they have a rare tendency of emerging in a large variety of contexts and physical situations, such as in correlations functions in scaling limits, random matrices and in energy level statistics of even chaotic systems. Furthermore, their intricate underlying structures gave rise to new mathematical theories, such as quantum groups and cluster algebras, revealing novel types of combinatorics. Thus, one could argue, letting these systems "speak for themselves" the stories they tell us will lead us to new principles and insights, even perhaps about the structure of Nature itself. One such story is about their variational description in terms of a least-action principle and its connection to one of the key integrability features, multi-dimensional consistency (MDC). 
The latter is the phenomenon that integrable equations do not come in isolation, but tend to come in combination with whole families of equations, all simultaneously imposable on one and the same field variable (the dependent variable of the equations). Such equations manifest themselves as higher or generalized symmetries, as hierarchies of equations or as compatible systems, their very compatibility being the signature of the integrability. In fact, it is this very feature that forms a powerful tool in the exact solvability of such equations through techniques such as the inverse scattering transform (a nonlinear analogue of the Fourier transform), Lax pairs and Bäcklund transformations. This story about the variational description of integrable systems started with the paper [5], where the Lagrangian structure of a class of 2D quadrilateral lattice equations was studied, which are integrable in the sense of the MDC property. It was shown that for particularly well-chosen discrete Lagrangians for those equations, embedded through the MDC property in higher-dimensional space-time lattice, the Lagrangians obey a closure property, suggesting that these Lagrangians should be viewed as components of a discrete p-form that is closed on solutions of the quad equations. This remarkable property led to the formulation of a novel least-action principle in which the action is supposed to attain a critical point not only w.r.t. variations of the field variables, but also the action being stationary w.r.t. variations of the space-time surfaces in the higher-dimensional lattice of independent discrete variables on which the equations are defined. This allows one to derive from this extended variational principle not one single equation (in the conventional way on a fixed space-time surface) but the full set of compatible equations that possess the MDC property. Furthermore, this property was also shown to extend to corresponding integrable differential equations defined on smooth surfaces in a multidimensional space-time of independent continuous variables, as well as on systems of higher dimension and of higher rank, [6][7][8] as well as to many-body systems [9][10][11]. Further extensions and deepening understanding of these results were obtained in a number of papers, cf. [12][13][14]. A natural question is whether the Lagrangian multiform structure described above extends also to the quantum regime, since, after all, a canonical quantization formalism for reductions of quadrilateral lattice equations and higher-rank systems, using non-ultralocal R matrix structures, was already established some while ago [15,16], as well as for a quantum lattice Hirota type system [17], cf. also [18]. However, the natural setting for a Lagrangian approach in the quantum case is obviously the Feynman path integral [19], which has remained curiously unexplored in the context of integrable systems theory where there has been a predilection for the Hamiltonian point of view. However, when dealing with discrete systems, e.g. systems evolving in discrete time, the Hamiltonian view point is no longer natural, and the Lagrangian point of view may become preferable. The further advantage is that in discrete time, path integrals are no longer marred by the infinite time-slicing limit which causes such objects to be notoriously ill-defined in general. Thus, first steps to set up a path integral approach for integrable quantum mappings 3 , i.e. integrable systems with discrete-time evolution, were undertaken in [21,22]. 
However, the main aim of the present paper is to arrive at an understanding of the Lagrangian multiform structure on the quantum level. In order to achieve that, and to avoid analytical complications arising from the nonlinearities, we restrict ourselves in this initial treatment to the case of quadratic Lagrangians, associated with linear multidimensionally consistent equations. Although this may seem restrictive, the quadratic case is surprisingly rich and exhibits most of the properties of the wider classes of nonlinear models when it comes to the MDC aspects. Those reveal themselves in the way the lattice parameters govern the compatible systems of equations, and it is there where even these linear equations exhibit quite non-trivial features. In fact, an interesting role reversal between discrete independent variables and continuous parameters allows the corresponding quantum propagators to be interpreted at the same time as discrete as well as continuous path integrals. The periodic reductions are particularly noteworthy, since they lead to propagators that can be readily computed, and it is here that the humble quantum harmonic oscillator makes its reappearance in quite a new context. The outline of the paper is as follows. In section 2 we describe the classical quad equation, i.e. a 2-dimensional partial difference equation defined on elementary quadrilaterals, and its Lagrangian 2-form structure. In section 3, we consider its periodic reductions on the classical level, and construct commuting flows for the lowest period cases. The simplest (3-step) reduction leads to the harmonic oscillator, but even this case there is a non-trivial Lagrangian 1-form structure on the classical level. Next, in section 4 we consider the quantization of the reductions through discrete-time step path integrals which at the same time provides a natural discretization of the underlying continuous-time model in terms of the lattice parameters. The MDC property here is reflected in a path-independence property of the propagators. This leads us to suggest a quantum variational principle which we expect may extend to models beyond the quadratic case. In section 5 we return to the quad lattice case, which resembles a quantum field type of situation, and we establish surface-independence of the relevant propagators, suggestive of a quantum variational principle in the field theoretic case. Finally, in section 6 we discuss some possible ramifications of our findings, and how they connect to some ongoing questions regarding quantum mechanics and foundational aspects. Linearised Lattice KdV Equation Our starting point is a 2 dimensional quadrilateral lattice equation, whose dependent variable u(n, m) is defined on lattice points labelled by discrete variables (n, m), which are variables shifting by units, and with lattice parameters p and q, each associated with the n and m directions on the lattice respectively. We adopt the shift notation by accents and , i.e. for u := u(n, m),we have u := u(n + 1, m), u := u(n, m + 1). The equation of interest in this paper is in the linear quadrilateral equation: This quadrilateral equation is supposed to hold on every elementary plaquette across a 2 dimensional lattice; the elementary plaquette is shown in figure 1. This is something of a "universal" linear quad equation, being the natural linearisation of nearly all the integrable quad equations of the ABS list [23]. 
This equation can be derived via discrete Euler-Lagrange equations on the three-point Lagrangian where, for the action, we sum across every plaquette in the lattice: (2) is also the natural linearisation of the Lagrangians for the non-linear quad equations of the ABS list from which (1) can be derived. In fact, the standard variational principle on (2) produces two copies of (1). In order to regain precisely the linearised KdV equation, we must make use of the multiform variational principle introduced by Lobb and Nijhoff [5,12]. (1) can be consistently embedded into a multidimensional lattice, with directions labelled by subscripts i, j, k. Across an elementary plaquette in the i − j plane, (1) takes the form: where u i indicated u shifted once in the i direction on the lattice, and p i is now the lattice parameter associated to the i direction. This equation has multidimensional consistency, which can be checked by establishing closure around the cube [24] -field variables at any point in the multi-dimensional lattice can be calculated via any route in a consistent manner. In the variational principle proposed in [12], the action is defined as the sum of Lagrangians on elementary plaquettes across a 2-dimensional surface σ, embedded in the multidimensional space. To derive the equations of motion, we then demand the action be stationary not only under the variation of the field variables u, but also under the variation of the surface σ itself. For this to hold, we require closure of the Lagrangian: if we consider the combination of oriented Lagrangians on the faces of a cube, we require that on the equations of motion, the Lagrangians sum to zero. In other words, where we have used the shorthand L i j (u) := L(u, u i , u j ; p i , p j ), and the final equality in (5) holds only when we apply (4). According to [12], such a system must be described by a Lagrangian of the form where we require C i j to be antisymmetric under interchange of i and j. Notice that the Lagrangian (2) is already in this form. By using the multidimensional consistency, a set of Euler-Lagrange equations are derived, which simplify on a single plaquette to: This yields precisely the equation (1). This structure allows us to describe the mutliple consistent equations (4) in a single Lagrangian framework -that of the 2-form. This is then the appropriate variational structure to describe multi-dimensionally consistent systems [5]. In fact, the Lagrangian (2) is the almost unique quadratic Lagrangian with a 2-form structure (i.e. exhibiting the closure property). Considering the general form for a three-point Lagrangian 2-form and equation of motion (6), we restrict our attention to quadratic Lagrangians and have the general form: where we require δ ji = −δ i j . Here, subscripts on coefficients indicate dependence on the lattice parameters p i and p j . This Lagrangian yields the equation of motion: This is a quad equation, and as such we require it to be symmetric under the interchange of i and j. This leads to the conditions Noting that the Lagrangian (7) already obeys the closure relation (5) on the equations of motion above, we use our freedom to multiply by an overall constant to let c = 1, and hence the general Lagrangian is given by: We can see this has the same form as (2), but with a more general dynamical, anti-symmteric parameter δ i j , and the free parameter a i that does not effect the equations of motion. 
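Closure around the cube can also be checked numerically. The short Python sketch below assumes equation (1) is the standard linearised lattice potential KdV equation, written here as u_{ij} = u + ((p_i + p_j)/(p_i − p_j))(u_j − u_i); this particular parametrisation is an assumption on our part, but any equivalent form of (1) passes the same closure-around-the-cube test.

```python
# Sketch: consistency around the cube for a linear quad equation, assuming the
# linearised lattice potential KdV form
#     u_ij = u + (p_i + p_j)/(p_i - p_j) * (u_j - u_i).
import itertools

def opposite_corner(u, ui, uj, pi, pj):
    # value at the fourth corner of an (i, j)-plaquette, given the other three
    return u + (pi + pj) / (pi - pj) * (uj - ui)

p = {1: 2.0, 2: 3.0, 3: 5.0}          # three distinct lattice parameters
u0 = 0.7                               # value at the initial corner
u_single = {1: 1.1, 2: -0.4, 3: 2.3}   # values after one shift in each direction

# Doubly shifted values from the three faces touching the initial corner.
u_double = {}
for i, j in itertools.combinations([1, 2, 3], 2):
    u_double[(i, j)] = opposite_corner(u0, u_single[i], u_single[j], p[i], p[j])

# u_123 can be computed on three different faces of the cube; multidimensional
# consistency means all three answers coincide.
def u123_via(i, j, k):
    return opposite_corner(u_single[k], u_double[tuple(sorted((i, k)))],
                           u_double[tuple(sorted((j, k)))], p[i], p[j])

values = [u123_via(1, 2, 3), u123_via(1, 3, 2), u123_via(2, 3, 1)]
print(values)
assert max(values) - min(values) < 1e-12   # closure around the cube
```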
Periodic Reduction Reductions of lattice equations to integrable symplectic mappings have been considered since the early 1990s [25][26][27][28]. Here, we are considering a linearised version of the lattice KdV equation as our starting point, and follow the same reduction procedure as has been considered previously for non-linear quad equations. The reduction is obtained by imposing a periodic initial value problem, where the evolution of the data progresses through the lattice according to a dynamical map, or equivalently a system of ordinary difference equations, which is constructed by implementing the lattice equation (1). We begin with initial data u 0 , u 1 and u 2 , and let u 2 = u 0 , according to figure 2. This unit is then repeated periodically across an infinite staircase in the lattice. This is the simplest meaningful reduction we can perform on the lattice equation. Applying the linear lattice equation (1) to each plaquette, we can write equations for the dynamical mapping (u 0 , u 1 , u 2 ) → ( u 0 , u 1 , u 2 ): This is a finite-dimensional discrete system. We introduce the reduced variables x := u 1 − u 0 , y := u 2 − u 1 and, by eliminating y, write the second order difference equation: where the underhat x indicates a backwards step. This equation can be expressed by a Lagrangian-type generating function, with the equation arising from discrete Euler-Lagrange equations: and so is symplectic, d x ∧ d y = dx ∧ dy. The map also possesses an exact invariant: The equation (10) is a discrete harmonic oscillator. It is not difficult to see that the most general solution to (10) is given by where m is the discrete variable. This has a clear relation to the solution for the continuous time harmonic oscillator. This solution can alternatively be written as By considering derivatives with respect to the parameter b, we can then derive the equations: Eliminating x yields the second order differential equation in b: A remarkable exchange has taken place: the parameter and independent variable of the discrete case, b and m, have exchanged roles to become the independent variable and parameter of a continuous time model. Note that (15) can be simplified by taking µ := cos −1 (−b) as the "time" variable, so that: d 2 x/dµ 2 + m 2 x = 0. This is the equation for the harmonic oscillator, with a quantised frequency ω = m. Commuting Discrete Flow Recall that the linear lattice equation (4) can be embedded in a multidimensional lattice. From the periodic reduction in the plane (figure 2) we consider the embedding within a three dimensional lattice. The third lattice direction has lattice parameter r, and we introduce shifted variables u i , as shown in figure 3. To derive the mapping, we now use the lattice equations (4): which, in terms of the u i , yield where t := p−r p+r , t ′ := q−r q+r . Figure 3: The variables u i extend from the plane in a third direction. Again, we use reduction variables (x, y), which yield the map (x, y) → (x, y). This map can be written in a matrix form, from which it can be shown to be area preserving, dx ∧ dy = dx ∧ dy. Eliminating y again produces a second order difference equation in x: This equation has the same form as (10), that of a discrete harmonic oscillator, along with invariant I a (x, x) = We can write both maps (x, y) → ( x, y) and (x, y) → (x, y) in matrix form: x = S x , x = T x , x := (x, y) T . It is then clear that the two maps commute, ( x, y) = ( x, y) , since we have [S, T] = 0 . 
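To make the reduction concrete, the following small numerical sketch iterates the discrete harmonic oscillator and monitors its invariant. Since the displayed equations are not fully legible in this copy, it assumes equation (10) takes the form x_{m+1} + 2b x_m + x_{m−1} = 0 with b = (P − Q)/(P + Q), which is consistent with the stated solution cos µ = −b, and it takes the conserved quantity to be the symmetric biquadratic used in the code; both are assumptions for illustration.

```python
# Sketch of the discrete harmonic oscillator reduction (assumed forms):
#     x_{m+1} + 2*b*x_m + x_{m-1} = 0,   cos(mu) = -b,
#     I(x, x_next) = x_next**2 + 2*b*x*x_next + x**2   (conserved along the orbit)
import numpy as np

b = 0.3                       # any |b| < 1 gives oscillatory behaviour
mu = np.arccos(-b)

def step(x_prev, x):
    return -2.0 * b * x - x_prev

def invariant(x, x_next):
    return x_next**2 + 2.0 * b * x * x_next + x**2

# Iterate the map and compare with the exact solution x_m = cos(mu*m).
x = np.empty(50)
x[0], x[1] = 1.0, np.cos(mu)
for m in range(1, len(x) - 1):
    x[m + 1] = step(x[m - 1], x[m])

exact = np.cos(mu * np.arange(len(x)))
print("max deviation from exact solution:", np.max(np.abs(x - exact)))

I_vals = invariant(x[:-1], x[1:])
print("spread of the invariant along the orbit:", np.ptp(I_vals))
```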
This last relation relies on the parameter identity s t t′ = s − t + t′, which is easily shown using the definitions of s, t and t′. Our equations are slightly simplified by introducing the parameters P := p^2 + pq, Q := q^2 and R := r^2, in terms of which a = (P − R)/(P + R) and b = (P − Q)/(P + Q). By returning to the earlier evolution equations in terms of x and y and eliminating y in a different manner, we derive the "corner equations" (18) for the evolution, each linking x with its shifts in the two discrete directions. Thus we have multiple equations of motion (10), (17), (18), all holding simultaneously on the same variable x.
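The parameter identity quoted above is easy to confirm symbolically. The sketch below assumes s = (p − q)/(p + q), the analogue for the (p, q) pair of the definitions given for t and t′.

```python
# Sketch: symbolic check of the identity s*t*t' = s - t + t' behind [S, T] = 0.
import sympy as sp

p, q, r = sp.symbols('p q r', positive=True)
s  = (p - q) / (p + q)    # assumed definition, by analogy with t and t'
t  = (p - r) / (p + r)
tp = (q - r) / (q + r)

assert sp.simplify(s * t * tp - (s - t + tp)) == 0
print("identity s*t*t' = s - t + t' holds")
```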
Across each curve, we can define an action, and then a variation with respect to the middle point, which leads to an equation of motion. Figure 5: Simple discrete curves for variables m and n. The action and Euler Lagrance equation for curve 5(i) are which is compatible with equations (18). Similarly, for curve 5(ii): which is equation (17) (i.e. this is a "standard" Euler-Lagrange equation). Curves 5(iii) and (iv) yield similarly (10) and the other part of (18). We therefore have, for the specific choice of Lagrangians described, a consistent 1-form structure, yielding the equations of motion and corner equations, and obeying a Lagrangian closure relation. The discrete harmonic oscillator then, despite its simplicity, nonetheless has an underlying structure of a Lagrangian one-form expressing commuting flows: this is the simplest example yet discovered of such a structure. Recall the invariants, it is straightforward to show using the equations of motion that both invariants are preserved under both evolutions, It is not clear, however, that these invariants are necessarily equal: I b has an apparent dependence on Q, and I a on R, that must be resolved. Taking our special choice of Lagrangians (23), we can then define canonical momenta, and rewrite our invariants in those terms. Writing X a as the mometum conjugate to x in L a , and X b similarly for L b , we find: As a direct consequence of the corner equation (18) we then have precisely that X a = X b =: X . In other words, we can define a common conjugate momentum for both evolutions. If we then write our invariants in terms of x and X we find after multiplication by a constant (which clearly does not change the nature of the invariants) that Note that in this form I a , I b appear Q and R independent, and are nothing other than the Hamiltonian for the continuous harmonic oscillator, with angular frequency ω = 2 √ P. This form is Lagrangian dependent. A different choice of Lagrangian yields different conjugate momenta that are no longer equal, and where the equivalence of the invariants is no longer apparent. Requiring equality of the invariants turns out to be an equivalent condition to demanding Lagrangian closure. The compatibility of the two discrete evolutions and their corner equations (guaranteed by the Lagrangian 1-form structure) allows us to consider a joint solution to the equations x m,n . We allow m to label the hat evolution, and n to label the bar evolution, such that x = x m,n , x = x m+1,n , x = x m,n+1 , and so on. Requiring x m,n to obey (10), (17) and (18), we have the joint solution for the evolutions: In the same way as the parameter b generates a continuous flow compatible with the discrete evolution (15), so we can find a continuous flow in the parameter a: Now the joint solution (28) Using the corner equations (18) these Lagrangians exhibit continuous multiform compatibility, obeying the relations So, by considering the discrete parameters a, b now as continuous variables, we find a continuous-time 1-form structure. As in [31], the harmonic oscillator continues to display surprising new features. On the discrete level, we discover compatible flows that can be expressed through the structure of a Lagrangian form, even for this very simple case. This deeper structure then extends beyond the discrete case also into compatible continuous flows and we have an interplay between these discrete and continuous one-form structures. 
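As an illustration of the commuting flows, the following sketch checks a joint solution numerically. It assumes that both evolutions (10) and (17) are discrete harmonic oscillators with coefficients b and a respectively, and that the joint solution (28) is of the travelling-wave form x_{m,n} = A cos(µm + ηn + φ); neither display is fully legible in this copy, so these forms are assumptions for illustration.

```python
# Sketch: one field x_{m,n} satisfying both assumed discrete flows at once,
#     x_{m+1,n} + 2*b*x_{m,n} + x_{m-1,n} = 0   (hat flow,  cos(mu)  = -b),
#     x_{m,n+1} + 2*a*x_{m,n} + x_{m,n-1} = 0   (bar flow,  cos(eta) = -a).
import numpy as np

a, b = 0.45, 0.30
mu, eta = np.arccos(-b), np.arccos(-a)
A, phi = 1.3, 0.7

m, n = np.meshgrid(np.arange(20), np.arange(20), indexing='ij')
x = A * np.cos(mu * m + eta * n + phi)       # assumed joint solution

res_hat = x[2:, :] + 2 * b * x[1:-1, :] + x[:-2, :]
res_bar = x[:, 2:] + 2 * a * x[:, 1:-1] + x[:, :-2]
print("hat-flow residual:", np.max(np.abs(res_hat)))
print("bar-flow residual:", np.max(np.abs(res_bar)))
```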
Having endowed the harmonic oscillator with these multi-dimensional structures, how are they revealed in the quantum harmonic oscillator case? The P = 2 case yields a 1 dimensional mapping that is entirely equivalent to the case we have considered in section 3.1, except the lattice parameters combine in a slightly different way to give the coefficient of the harmonic oscillator. The P = 3 case is the next case of interest, as here we find a system of coupled harmonic oscillators in x 1 and x 2 , with two commuting invariants and a similar commuting flow structure. In a similar manner to (10) we can derive equations for a discrete flow in variables x 1 and x 2 : As in section 3.2, we can also derive a commuting flow for the evolution: Commutativity of these evolutions can be easily shown from the first order form (with x and y variables) by writing each evolution in matrix form; the resulting matrices commute. The evolution then also possesses corner equations, which can be derived using the eliminated y variables. These allow us to write closed form Lagrangians, such that L = 0 (20) on the equations of motion (32,33,34): recalling the relation of s, t, t ′ . A Lagrangian 1-form structure as in section 3.3 follows. Note that L 2 represents a Bäcklund transform with parameter r. The Lagrangians (35,36) allow us to define the momenta conjugate to with respect to which we have the invariant Poisson structure {x i , X j } = δ i j , preserved under the mappings. We could also write expressions for X i using L 2 , with equality of these expressions producing the corner equations. We can additionally derive two quadratic invariants of the mapping I 1 , I 2 , which are invariant under both maps. The canonical structure of (37) allows us to show the critical integrability property that the two invariants are in involution with each other, with respect to the canonical Poisson bracket: The invariance and involutivity of these can be shown by direct calculation. I 1 and I 2 will thus generate two commuting continuous flows to the mapping. For both the hat and the bar evolutions (32), (33), (34) it is possible to write explicit solutions, and indeed we can find a joint solution to the discrete evolutions: where cos µ ± = −3s/4 ± 1 2 1 − 3s 2 /4 and We find x 1 (m, n) similarly as a linear combination of shifts of x 2 . By considering derivatives with respect to the parameters s and t (recalling t ′ is not independent of s, t), we can therefore derive commuting continuous flows from the solution structure (39). We observe then again the interchange between continuous and discrete parameters and variables, as in the lower periodic case. We expect this will lead to a continuous Lagrangian 1-form structure, but defer further investigation to a later paper. The Quantum Reduction In section 3.3, the discrete harmonic oscillator model, arising as a special reduction from the linearised lattice KdV equation (1), albeit a simple linear model nonetheless displays commuting discrete flows. In the classical case, the Lagrangian 1-form structure captures these commuting flows in a variational principle. A natural question is: what is the quantum analogue for such a structure? Since the harmonic oscillator is well known and understood, it forms a good first toy model for investigating Lagrangian form structures at the quantum level. 
Integrable quantum mappings, arising from the quantisation of mapping reductions from lattice equations, were constructed and studied within the framework of canonical quantization and (non-ultralocal) R-matrix structures in [15,20,[33][34][35]. In a pioneering paper [36] Dirac took the position that the Lagrangian approach to Physics is the more natural one and proposed the first steps towards incorporating the Lagrangian into quantum mechanics, a route that was later pursued by Feynman leading to his concept of the path integral [37]. Concurring with Dirac's point of view, we seek here to understand the extended Lagrangian multiform variational principle on the quantum level, leading naturally to problem of finding a path integral version of that formalism in order to capture its natural quantum analogue. To make first steps in that direction the simple case of the quantum mappings derived in the previous section is a good starting point, exploiting the well-known formal techniques of path integrals, cf. e.g. [19,38,39]. As we will point out later there are some similarities with ideas developed by Rovelli in [40,41] who also uses the harmonic oscillator to develop ideas on reparametrisation invariant discretisations within the path integral framework, in particular the natural emergence of conservation of the energy of the coninuous model within a time-slicing discretisation. Feynman Propagators Beginning from our Lagrangian L b (23) we write the conjugate momenta X := X b (26) and X = ∂L b /∂ x. In canonical quantisation, position x and momentum X become operators x and X, such that [x, X] = i . The momentum equations (26) become operator equations of motion: To understand the discrete time evolution we wish to express the evolution (x, X) → ( x, X), in terms of a time-evolution operator U b , such that This is a canonical approach to discrete quantisation, see for example [15]. Considering (41), it is not hard to see that an appropriate choice of U b is given by: In other words, a separated form for U b exists, but it is required to have three terms. Note that (42) is not a unique form for U b . In discrete time, the one time-step propagator is then given by K b (x, n; x, n + 1) = n+1 x|x n = x|U b |x , where we have moved in the second equality from time-dependent, Heisenberg picture eigenstates to timeindependent, Schrödinger picture eigenstates. Since we have an explicit form for U b , we can calculate this expression by inserting a complete set of momentum eigenstates: The second line results from a Gaussian integral: the linearity of our system justifies taking the integration region over the whole real line (we make some assumptions here on the Hilbert space). The final line recalls the Lagrangian (23). This is what might be expected for a "one-step" path integral (such as in [42,21]) noting that this approach also specifies the normalisation constant. This is sufficient to define the discrete-time path integral. By iterating (43) over N steps, we can write precisely the propagator for our discrete system: In this discrete case, equation (44) gives a precise definition to the path integral notation: Notice in particular that the normalisation associated to the measure is here unambiguous. In our quadratic regime, we can now calculate this explicitly. 
Details are given in Appendix A, but we first expand our quantum variables around the classical path, where the classical action can be evaluated as Evaluating the discrete path integral as a series of N Gaussian integrations, and recalling the normalisation constant in (44), we calculate the propagator: Note that this has the same form as the propagator for the continuous time harmonic oscillator. Dependence on the parameter b is evident through cos µ = −b. We note, then, that the propagator is common to both the discrete flow and to the interpolating continuous time flow. Using the operator equations of motion (41), it is easy to see that we have an operator invariant: This is, of course, simply the operator version of the classical invariant (27), and is precisely the Hamiltonian for the continuous time harmonic oscillator, where 4P = ω 2 . Note that I b is Q independent, and so it is clear that the same process applied to the bar evolution generated by L a will give the same result. In other words, both discrete quantum evolutions share the same invariant, which is the harmonic oscillator. The invariant can also be considered from the perspective of path integrals and the unitary operator following the method of [21]; this is elaborated in Appendix B. We can relate I b (47) to the evolution operator U b (42) in principle by a Campbell-Baker-Hausdorff expansion ( [43,44]); an explicit form is given by algebraic manipulation: So we can see clearly how the discrete quantum evolution relates to a continuous time flow. Path independence of the propagator In equation (46) we have established the propagator for an evolution in one discrete time variable; but we have in the classical case two compatible discrete flows (23). The one-step propagator in the hat direction is given in (43), whilst in the bar direction it is easily deduced by the same method: We remark that, as we have here a second time direction, we might plausibly introduce a second parameter. We ignore such considerations for the time being and allow to be the same in both time directions. In general, if we begin at a time co-ordinate (0, 0) and evolve along integer time co-ordinates to a new time (N, M), the propagator could depend not only on the endpoints, but also on the path Γ taken through the time variables, see figure 4. We associate to the path an action S Γ := S[x(n); Γ] (19). We can then define a propagator for the evolution along the time-path Γ, made up of the one-step elements (43), (49): where we have integrated over all internal points x n,m on the curve Γ. Here N Γ represents the product of normalisation factors from the relevant elements of (43), (49). We begin by considering the simple case of an evolution of one step in each direction. There are two routes to achieve this, as shown in figure 7. Either we evolve first in the hat direction, followed by an evolution in the bar direction, or vice versa. In path (i), we evolve first according to the hat evolution L b , and then according to the bar evolution L a . We evaluate the propagator as: For the alternative path (ii) we evolve first by the bar evolution L a , and then the hat evolution L b : These are both resolved by substituting Lagrangians (23) and evaluating the Gaussian integral. The result is totally symmetric under interchange of the parameters q and r, as are (51) and (52); so that We find the same propagator for either path. 
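The expansion around the classical path can be made explicit with a small numerical check. Assuming the equation of motion generated by L_b is the oscillator recurrence x_{n+1} + 2b x_n + x_{n−1} = 0, the discrete classical path with fixed endpoints is presumably the boundary-value solution written in the sketch below; the code only verifies that this candidate path satisfies the assumed recurrence and the endpoint conditions.

```python
# Sketch: discrete classical path for the boundary value problem x_0 = x_a, x_N = x_b,
# assuming the equation of motion x_{n+1} + 2*b*x_n + x_{n-1} = 0, cos(mu) = -b:
#     x_n = ( x_a*sin((N - n)*mu) + x_b*sin(n*mu) ) / sin(N*mu)
import numpy as np

b_par = 0.3
mu = np.arccos(-b_par)
N, x_a, x_b = 12, 0.8, -1.5

n = np.arange(N + 1)
x_cl = (x_a * np.sin((N - n) * mu) + x_b * np.sin(n * mu)) / np.sin(N * mu)

# endpoint conditions and the discrete equation of motion
assert abs(x_cl[0] - x_a) < 1e-12 and abs(x_cl[-1] - x_b) < 1e-12
residual = x_cl[2:] + 2 * b_par * x_cl[1:-1] + x_cl[:-2]
print("max equation-of-motion residual:", np.max(np.abs(residual)))
```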
It is an obvious corollary of this result that, so long as we take only forward steps in time, the propagator K N,M (x a , x b ) is independent of the path taken in the time variables. We could also consider a path in the time variables allowing backward time steps. As in the classical case, we can construct an action for such a trajectory, using an appropriate orientation for the Lagrangians. In the quantum case we perform a path integral over this action, integrating over all intermediate points. As U b generates a time-step in the b direction (section 4.1), U −1 b generates the backward evolution. x Considering once more the simplest case, we imagine a trajectory around three sides of a square, shown in figure 8. Including the normalisation factors from (43) this is described by the propagator, This is easily calculated by Gaussian integrals, and yields: So we regain exactly our one step propagator from (43). Remarkably, we again achieve Lagrangian closure, but now on the quantum level. Recall that classically Lagrangian closure depended upon the equations of motion: here we have left the equations of motion behind, and yet this key result still holds. We could also consider the possibility of a loop in the discrete variables, illustrated in figure 9(i). We imagine some unspecified incoming and outgoing actions S in (x a , x 1 ) and S out (x 5 , x b ), a simple loop in discrete steps, and five integration variables x 1 , . . . , x 5 . Note that we assign two integration variables to the same vertex, as it is visited twice by the path: the following calculation will justify this choice as the correct one. x 1 We then consider the action for the loop, x 4 ), noting the orientations on the Lagrangians. With normalising factors from (43) and complex conjugations in the backward steps, we then have: The x 2 and x 4 integrals are evaluated as in (51) yielding, The quadratic term in the exponent in x 3 disappears, and so the integral dx 3 yields a Dirac delta function: δ(x 1 − x 5 ). Combined with the integral over x 5 this forces x 5 = x 1 (as expected) and we finally conclude, Diagrammatically, this is equivalent to the disappearance of the loop, shown in figure 9(ii). Loops in the discrete variables therefore "close" and do not effect the overall propagator. The proposition now allows us to calculate the general propagator for N steps in the hat direction and M steps in the bar direction, compare (50). We denote such a propagator from x a to x b by K N,M (x a , x b ). As a consequence of the path independence, it is then clear that we can calculate this as K N,M (x a , x b ) = dxK N,0 (x a , x)K 0,M (x, x b ). In other words, we can consider taking first all the hat-steps, followed by all the bar-steps. Taking our discrete propagator from (46), we can then carry out the integral as another Gaussian, but in fact the result follows immediately from the group property of the propagator, using its shared form with the continuous time case, so: which bears a clear relation to the continuous time case. Uniqueness The time-path independence for the propagator of section 4.2 is a special property of our choice of Lagrangian (23) that does not hold in general. As classically the Lagrangian 1-form obeys the closure condition (20), so in the quantum case we have time-path independence of the propagators as a natural quantum analogue. 
Whilst classically this closure holds only on the equations of motion, in the quantum case the path-independence occurs as we perform the path integral over intermediate variables. It emerges that, for given oscillator parameters a and b, there is a fairly unique choice of Lagrangians exhibiting timepath independence. Consider the generalised oscillator Lagrangians of equation (21) and define propagators around two corners of a square, as in equations (51) and (52). Here we allow a and b to be free oscillator parameters. N and N are undetermined normalisation constants. These paths are illustrated in figure 7. We demand equality of the exponents in these two expressions, once the integral has been carried out; in other words we demand K (x, x) = K (x, x) , up to a normalisation. Calculating these propagators via a Gaussian integral, we then derive conditions for time-path-independence on our coefficients, which can be found in Appendix C. We find the necessary conditions on the coefficients: As in (22) the constant f makes no contribution and we ignore it. The general Lagrangians (21) are therefore restricted to a symmetric form, with a specified overall constant given by the oscillator parameters a, b. Note that taking a = (P − R)/(P + R), b = (P − Q)/(P + Q) leads us to exactly the conditions of (22) and the Lagrangians (23). In conclusion: Proposition 2. For given oscillator parameters a and b, the Lagrangians (23) are the unique Lagrangians, up to constants γ and f (22), such that the multi-time propagator is path independent. In other words, demanding time-path independence of the propagator is the natural quantum analogue of the closure relation on the Lagrangian. Quantum Variational Principle: Lagrangian 1-form case Consider a quantum mechanical evolution from an initial time (0, 0) to a new time (N, M), along a timepath Γ: shown in figure 4. We can consider a propagator for the evolution K Γ (x b ; x a ) defined in (50). We have shown that, in the special case of Lagrangians (23), the propagator defined above is independent of the path Γ (it depends only on the endpoints); but that this is not true in general. For a generic Lagrangian, K Γ will depend on the time-path chosen, as shown in section 4.3. Classically, the system is defined as the critical point for the variation of the action over not only the dependent variables, but also over the independent variables, i.e., it is a critical point with respect to the variation of the time-path. This not only yields all the compatible equations of motion for the system, but also selects certain "permissible" Lagrangians which obey a closure relation (20). This then yields a system of extended EL equations of which the Lagrangian can be considered to be the solution, cf. [9]. In the quantum case, we consider the dependence of the propagator on all possible (discrete) time-paths Γ between fixed initial and final times. In general, there are an infinite number of possible time paths from (0, 0) to (N, M), including shortest time-paths as well as those with long "diversions," or loops, as illustrated in figure 10. For a generic Lagrangian, as we vary the time path, each Γ yields a different propagator (50) viewed as a functional of the path. In the special case of the Lagrangian (23), however, the propagator K Γ is independent of the path taken through the time variables, and so remains unchanged across the variation of the time-path Γ. 
This suggest that this path independence property is the natural quantum analogue of the Lagrangian closure condition (20). Pushing this idea one step further: viewing the propagator as a functional of the Lagrange function, the Lagrangian itself can be thought of as representing a critical point (in a properly chosen function space of Lagrange functions) for the path-dependent propagator, with regard to variations of the time-path. We suppose we can vary the path in such a way that the critical point analysis selects the path independent Lagrangian from the space of possible Lagrangians (this was the point of view put forward in [12] in the classical case). In a quantum setting this principle would be represented by a "sum over all time-paths" scenario, i.e. by means of posing a new quantum object of the form as was proposed in the continuous time-case in [29]. As a functional of the Lagrangian such an object would have a singular point for those Lagrangians which possess the quantum closure condition, i.e., those where the contributions of the pathindependent propagators over which one integrates all contribute the same amount. How to control the singular behaviour of such an object is a matter of ongoing investigation. Quantisation of the Lattice Equation In section 2 we introduced the linear lattice equation (4). Having considered the quantisation of its finite dimensional reduction, we now turn to quantisation of the lattice equation itself. Quantisation of lattice models has been previously considered from a canonical (quantum inverse scattering method) perspective [4,45], but here we will bring a Lagrangian, path integral perspective to bear on this system. Classically, we suppose the equation (4) to hold on all plaquettes in the multidimensional lattice at the same time. The equation is generated by the oriented Lagrangian: The Lagrangian itself is a critical point of the classical variational principle over surfaces: it obeys the closure property on the classical equations of motion, such that the surface can be allowed to freely vary under local moves. Indeed, it is also fairly unique, as seen in (8). How might we proceed to quantise such a system? A canonical approach is to transform (4) into an operator equation of motion, but we are concerned here with a Lagrangian approach. The clear analogy is to quantum field theory: we have a discretised space-time and a Lagrangian in two dimensions over field variables u(n) indexed by a discrete vector n. We imagine some space-time boundary ∂σ enclosing a multidimensional surface σ made up of elementary plaquettes σ i j . We can then construct an action by summing the directed Lagrangians over the surface, as we would classically: where we define the shorthand L i j (u) := L(u, u i , u j ; p i , p j ). We then consider the propagator K σ (∂σ), where all interior field variables on the surface are integrated over. The propagator depends, in principle, on the surface σ and is a function of the field variables on the boundary ∂σ, which form some boundary value problem (see a similar point made in [41]): We will see as we go on that this object is subject to infra-red divergences, as particular surface configurations produce integrations yielding volume factors. Since our main statements involves only the combinatorics of the exponential factors involving the action arising through Gaussian integrals, we tacitly assume K σ can be renormalised by an appropriate choice of normalisation factor N σ . 
K σ (∂σ) describes a propagator in the sense of a surface gluing procedure: two propagators K σ 1 and K σ 2 are combined to a new propagator by multiplication and integration over all variables living on the shared boundary ∂σ 1 ∩ ∂σ 2 . Thus, the one-step surface gluing can be written symbolically as where the integral is over appropriately chosen coordinates of the joined boundary. Iterating the gluing formula is tantamount to setting up a "surface-slicing" procedure for the path integral. Motivation: The pop-up cube Classically, for a Lagrangian 2-form we vary the surface σ so that the Lagrangian and equations of motion sit at a critical point: the action should be invariant under the variation of not only the dependent variables u, but also the variation of the surface itself. As we move to the quantum regime, we then naturally ask what happens to our propagator K σ (∂σ) (65) under variation of the surface σ? We consider the effect of a simple variation of the surface: from a flat surface to a popped-up cube, see figure 11. Now note that S pop [u n,m ] contains no factor of u 123 , so that the integral du 123 produces a volume factor V. Equation (67) can then be written in a matricial form: where u T = (u 3 , u 31 , u 23 ) , B T = (−s 31 u 1 − s 23 u 2 , −u 1 + s 23 u 12 , u 2 + s 31 u 12 ) , and Now, in principle, equation (68) could be solved as a set of three Gaussian integrals, but matrix A is in fact singular. The parameter identity for s i j (63): leads to det A = 0. We therefore resolve (68) by carrying out two Gaussian integrals, knowing for the third integration variable we shall be left with an exponent that is at most linear. Performing Gaussian integrations with respect to u 3 and u 31 , we therefore have: where in the first equality we note that all terms containing u 23 have vanished entirely. This is now exactly the exponent expected from the diagram (a) in figure 11. So, whilst it is clear that there are non-trivial issues to resolve with respect to volume factors and normalisation factors in (71), 5 in the critical issue of the contribution to the action in the exponent between diagrams 11(a) and 11(b), the two pictures make the same contibution. In other words, there is some sense in which the action is unchanged by the local move that transforms the surface σ by the pop-up cube. Inspired by this discovery, we consider a more general situation. Surface Independence of the propagator In the classical case, there are three elementary configurations of Lagrangians in three dimensions, that form the basis of all other possible configurations [12]. We can attach to these configurations three elementary moves in the quantum mechanical case that form the basis for deformations of the surface σ. The first move is shown in figure 12. The action and contribution to the propagator (65) for figure 12(i) are given by: In contrast, for figure 12 (ii): We have some issue in both of these cases with volume factors appearing in the evaluation; but we proceed under the assumption that these can be dealt with through some regularisation and normalisation. As shown in Appendix D, we then find that the exponents in K (ai) and K (aii) are the same. With the correct choice of normalisation and regularisation, we have identical contributions to the propagator. We then consider elementary move (b), shown in figure 13. We have the action and propagator contribution for figure 13(i): Similarly for figure 13(ii): In this case, no volume factors appear and we find K (bii) = K (bi) . 
So the contributions to the propagator are directly identical here. Lastly, consider elementary move (c) shown in figure 14. These bear a clear relation to figure 13: the element L jk (u) has been shifted from one diagram to the other, inducing also a slight change in the integration variables. For 14(i): Similarly, 14(ii) is derived from 13(i) with an additional integral over u. Once more we find that K (cii) = K (ci) (although this time a volume factor is involved on both sides) and the contributions to the propagator are the same. Proof. The combination of elementary moves above, combined with the pop-up of figure 11, allows us to deform any surface σ to another topologically equivalent surface σ ′ by a series of elementary moves, without changing the exponent in the propagator. This free deformation gives us independence from the surface. An obvious consequence is that the propagator (65) depends only on the surface boundary ∂σ, and the field variables specified there -i.e. it is a function only of the boundary value problem. Note that since different topologies are specified by changes of the boundary, we have not considered these explicitly. Uniqueness The Lagrangian (63) has the property that it produces a propagator (65) which is independent of variations of the surface σ. In fact, it turns out that (63) is the unique quadratic Lagrangian 2-form such that this holds. Consider a general, 3-point, quadratic Lagrangian, imposing antisymmetry under interchange of i and j: For coefficients, a subscript i indicates dependence on the lattice parameter p i , with the ordering of subscripts important. The 2-form structure requires a ji = −a i j , d ji = −d i j (a i j and d i j are anti-symmetric under interchange of the parameters). Our interest is in the subset of Lagrangians that display the surface independence property in the propagator. We therefore look for conditions on the Lagrangian such that elementary moves will leave the contribution to the action (i.e. the exponent in the propagator) unchanged. We assume that extenal factors and even volume factors can be resolved by renormalisation, so that we only consider that part of the propagator in the exponent. Consider (78) under elementary move (a) -shown in figure 12. The contributions to the propagator, K (ai) and K (aii) , are calculated according to (72) and (73). For surface independence, we require K (ai) = K (aii) . K (ai) is calculated via an integral du, as in (72). In general, the coefficient of u in the exponent may be either quadratic, linear, or zero: yielding a Gaussian integral, Dirac delta function, or volume factor, respectively. However, a Dirac delta function would force linear dependence of field variables at different lattice points: since this is undesirable, we exclude this possibility. The remaining cases divide on the totally antisymmetric coefficient a i jk := a i j + a jk + a ki (see Appendix E.1 for details). For a i jk 0 we have a Gaussian integral, and: Conversely, for a i jk = 0, we require the integral to reduce to a volume factor (linear coefficients of u in the exponent must disappear) requiring the conditions (the coefficient a i j must separate into a part depending on p i and a part depending on p j and c i j is a function of p i only). Under these conditions, This is a critical point of the variation -a volume factor appears uniquely for this special choice of Lagrangian, which can be written as: with C i j (u i , u j ) antisymmetric under interchange of i and j. 
This is the most general classical Lagrangian 2-form (6) as found in [12], here specialised to the quadratic case. So we have two cases for K (ai) : (81) when a i jk = 0, and (79) when a i jk 0. For K (aii) , as in (73), we have four integrations du i j du jk du ki du i jk . The integral du i jk always produces a volume factor due to the three-point form of the Lagrangian. As for K (ai) , we wish to avoid these integrals reducing to a Dirac delta function, and so we have 2 cases. The remaining integrals are either evaluated as three Gaussian integrations, or one integration reduces to a volume factor. This rests on the value of det A (see Appendix E.2 for details): For det A 0 (equivalently b i j −d i j ) we have three Gaussian integrations, producing: where B T = c jk u i − c ik u j , perm (i jk), perm (k ji) . Alternatively, when det A = 0, evaluating K (aii) requires two Gaussian integrations. We then require linear terms in the third integrand to disappear in order to prohibit the appearance of a Dirac delta function (see Appendix E.2) hence we require the conditions So, b i j is also anti-symmetric, and c i j symmetric. We can then evaluate K (aii) as: where we have introduced the totally symmetric parameter Once more there are two cases. For K (aii) , when det A = 0, we find (86), and when det A 0 we have (84). Comparing now the two configurations of the elementary move, we demand that the exponents from each configuration be the same; i.e. both make the same contribution to the propagator. More details of this comparison are given in Appendix E.3. We find a solution to the problem at the critical point of the system: where some of our integrals become singular. Allowing a i jk = 0 and det A = 0, we compare the exponent in (81) with (86). Recalling that at this critical point we have also the conditions (80), (85), we find that we require c i j = c , constant, Λ i jk = 1 − c 2 , a i j = 0 . Finally, since our Lagrangian is defined only up to an overall multiple, we let c = 1. We therefore find the unique quadratic Lagrangian: along with the condition on d i j that Λ i jk = 0. Comparing (87) with (70) we see that we require precisely d i j = s i j . But then (88) is uniquely the Lagrangian (63)! We already know from section 5.2 that this Lagrangian also exhibits surface independence for the other elementary moves. This principle of surface independence is then sufficient to determine the required Lagrangian uniquely: even more so than in the classical case (8). Proposition 4. The Lagrangian (63) is the unique quadratic Lagrangian 2-form yielding a surface independent propagator (65). Proof. (88), with the restriction Λ i jk = 0 (87), gives us that this is the unique Lagrangian exhibiting surface independence for elementary move (a). We also have from proposition 3 that Lagrangian (63) has surface independence under all other elementary moves. Quantum Variational Principle: Lagrangian 2-form case This result suggests a quantum variational principle in analogy to the one dimensional case of section 4.4. We consider the propagator over a discrete surface σ, K σ (∂σ), defined in (65). We have shown that, for the special choice of Lagrangian (63), the propagator K σ (∂σ) is independent of the surface σ. It depends only on the variables sitting on the boundary, ∂σ. Additionally, this is a very unique choice of Lagrangian: for a generic Lagrangian, K σ (∂σ) will depend also on the surface σ itself. 
Recall that, classically, the Lagrangian 2-form structure arises from a variational principle over surfaces as in [12]. An extended set of Euler-Lagrange equations arise as we vary not only the dependent field variables u n , but also the surface σ. This restricts the class of admissible Lagrangians to those obeying the closure property (5): it is only for such Lagrangians and equations of motion that the classical action remains invariant under variations of the surface. As we move to the quantisation, parallel to what we argued in the 1-form case, we consider the variation over all possible surfaces σ with a fixed boundary ∂σ. For a generic Lagrangian, as we vary the surface σ the propagator K σ (∂σ) (65) changes. However, for the special "integrable" choice of Lagrangians (63) the propagator K σ (∂σ) remains unchanged as we vary the surface. This therefore represents a critical (i.e., singular) point for a new quantum object which we conjecture to be a "sum over all surfaces" of which the surface-dependent propagator forms the summand 6 , viewed as a functional in a well-chosen space of Lagrange functions. Once again, controlling the singular behaviour of such an object, and arriving at mathematically concise definition is the subject of ongoing investigation. Nonetheless, we conjecture that critical/singular point analysis of such an object, leading to the selection of Lagrangians whose propagator are surface-independent, would form a key ingredient for understanding the path integral quantisation of discrete field theories that are integrable in the sense of multidimensional consistency. Discussion In his seminal paper of 1933, [36], Paul Dirac expressed his credo that the Lagrangian formulation of classical dynamics, in comparison to the Hamiltonian one, was more fundamental, and he posed the question of a Lagrangian approach to quantum mechanics. In this important precursor to Feynman's development of the path integral [37] the analogy between classical and quantum mechanics was emphasized, cf. also [48]. In this context, the related question of what would constitute a variational point of view in quantum mechanics was partly, but not fully, answered by those approaches. In the present paper we have attempted to arrive to a more complete answer to these questions in the context of integrable systems in the sense of multidimensional consistency. This is pursued by setting up a quantum analogue of the Lagrangian multiform approach. The main result, obtained within the context of quadratic Lagrangians, is that there is a quantum analogue of the closure property of [5] which underlies the classical multiform theory. The quantum analogue is formulated in terms of the multi-time propagators for these models, cf. eq. (50). There are a number of points to make in connection with the results obtained in this study. First, although the results were obtained by restricting ourselves to only quadratic Lagrangians, the multidimensional consistency aspects do not essentially rely on the linearity of the equations. In fact, most of the combinatorics at the classical level carries through for all Lagrangians associated with nonlinear quad equations in the ABS list, cf. [8]. Due to the suspected close analogy between classical theory and quantum theory in the integrable case, it is therefore to be expected that some quantization procedure for those models would exist such that the results obtained here also carry through to the quantum level for those nonlinear models. 
This may, however, require non-conventional quantization prescriptions in terms of suitable integrals replacing the Gaussian integrals used in the quadratic case. Initial results along this direction were obtained in [22] and [21]. The choice of Hilbert space (in the canonical quantization picture), and of integration measure (in the path integral picture) may be driven by the integrable combinatorics of those models. Second, another general feature of the models in question is the role-reversal interplay between parameters and independent variables and between the discrete and continuous models. Thus, the continuous models do not only appear as continuum limits, but more intrinsically as additional commuting flows: the classical equations hold simultaneously on a common set of solutions. On the quantum level this property extends in the fact that there is a common propagator of the underlying continuous and discrete quadratic models. If this feature is general enough to extend to the nonlinear case (which it does in the classical case) there is scope that this property can eventually be used to extract information on the time-sliced path integral from the discrete finite-step path integral. Third, turning things around and imposing the path and surface independence of the propagator for a general parameter class of quadratic Lagrangians, we have shown that this quantum MDC property leads uniquely to the Lagrangians that arise from the integrable case, in the same spirit as in [12]. In fact, the point made in that paper is that the Lagrangians themselves should be viewed as solutions of an extended set of Euler-Lagrange equations, which incorporates the stationarity under variations with respect to both the field (i.e., dependent) variables as well as the geometry in the independent variables. This poses a new paradigm in variational calculus, as it signifies a departure from the conventional point of view of most physical theories, namely that Lagrangians have to be chosen based on tertiary considerations. In this new point of view, the Lagrangians are not necessarily given in advance, but follow from the variational principle itself. We finish by making a few general remarks on further ramifications. In general it is not known how to derive a path integral formalism for non-conventional, i.e. non-Newtonian models, through a time-slicing procedure when Gaussian integrals no longer apply. Nonetheless, in integrable systems theories such non-Newtonian models do abundantly appear and often can also be readily quantized through the canonical formalism, e.g. the relativistic many-body systems of Ruijsenaars-Schneider type, [3]. This poses, in our view, a lacuna in the theory which is imperative to rectify as such integrable quantum systems cannot be simply discarded as potentially physical models. Thus, integrable systems can play a role of a litmus test for the completeness of a theory, which most reasonably should be applicable to those models for which in principle exact and rigorous computations can be performed. However, one may speculate that there is a deeper significance for those systems, since they have proved their merit in forming a fruitful breeding ground for new concepts and new understandings on a fundamental level. In fact, the ideas exposed in the present paper, based on simple toy prolems, have some interesting resemblances to proposals that that in recent years have been put forward on the quantization of scaling invariant theories [40,41,49]. 
A particular parallel may be drawn between path and surface independence of propagators in our examples, and certain formulations of loop quantum gravity and "sum over surfaces", [46,47]. Furthermore, the interplay between discrete and continuous, which is prominent in our examples, may perhaps feed into views that G.'t Hooft has been promoting with regard to the quantum nature of the universe, cf. [50]. The action along the classical path is then: where we have used the identities: We note two things about this result. First, there is no explicit Q dependence: all Q dependence is contained within the parameter µ, which only appears as µN. Second, we can easily extend this result to the L a (bar evolution) case, by a change of parameter. We replace µ by η, such that cos η = −a. Appendix A.2. The discrete propagator It is left for us to evaluate the discrete path integral: In the discrete case, we can consider this via a time slicing procedure without needing to worry about the problematic shrinking to zero. So we consider: (P + Q)y n y n+1 + 1 2 (P − Q)(y 2 n + y 2 n+1 ) where N is the normalising factor appearing in (44) and y 0 = y N = 0. This expression is quadratic in all y n variables, and so can be evaluated as N − 1 Gaussian integrals. This is most easily achieved by writing the equation in a matrix form (as in [39], for example). We define y T = (y 1 , . . . , y N−1 ), in order to writẽ with σ the symmetric, tri-diagonal matrix: (A.7) Hence it remains to calculate det σ. The determinant for a tri-diagonal matrix can be found by forming a recursion relation on the size of the matrix, and solving as a discrete equation. Let with initial conditions X 1 = a and X 2 = a 2 − b 2 . The solution is thus given by Now, in the case of σ, recall that a = −(P − Q)/(P + Q) = cos µ and b = −1/2, so that √ a 2 − 4b 2 = i sin µ: this leads to significant simplifications of the above expression. Working through these calculations, we then find: and therefore Appendix B. Quantum Invariants In [21], the authors investigated quantum systems possessing invariants under a one time-step path integral evolution. Begin by considering the evolution in the hat direction, generated by L b (x, x) (23). A wavefunction ψ n (x) evolves under this tranformation according to and to look for an invariant we desire ψ n and ψ n+1 to be solutions of the same eigenvalue problem, with the same eigenvalue: M x is a differential operator, and we restrict to considering the second order case: where M x is an adjoint to M x constructed under integrations by parts, and S is the resulting surface term. If we assume ψ n and ψ ′ n to vanish at infinity (a reasonable physical assumption) then the surface term S vanishes. We can also write, So the condition we require is for M x) . Following the analysis in [21], and using the given Lagrangian, we find this can only hold under the restrictions: This is precisely the quantum invariant (47). where u T = (u i j , u jk , u ki ) , Critically, we note that det A = 0, so again we have a singular integral. Carrying out two integrals in turn, so that the third integration produces a volume factor, we therefore have: Thus, the exponents in K (ai) and K (aii) are the same. With the correct choice of normalisation and regularisation, we have identical contributions to the propagator.
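The discrete propagator of Appendix A.2 reduces to N − 1 Gaussian integrations whose normalisation requires det σ for a constant symmetric tridiagonal matrix, obtained from the three-term recursion described above. A minimal numerical check of that recursion (in Python, with the illustrative values a = cos μ and b = −1/2 mentioned in the text; the size n and the value of μ below are arbitrary):

```python
import numpy as np

def tridiag_det(a, b, n):
    """Determinant of the n x n symmetric tridiagonal matrix with constant
    diagonal a and constant off-diagonal b, via the three-term recursion
    X_k = a*X_{k-1} - b**2 * X_{k-2},  X_0 = 1, X_1 = a,
    as used for the discrete path integral normalisation."""
    x_prev, x = 1.0, a
    for _ in range(2, n + 1):
        x_prev, x = x, a * x - b**2 * x_prev
    return x

# cross-check against a dense determinant for a = cos(mu), b = -1/2
mu, n = 0.7, 8
a, b = np.cos(mu), -0.5
sigma = (np.diag([a] * n)
         + np.diag([b] * (n - 1), 1)
         + np.diag([b] * (n - 1), -1))
print(tridiag_det(a, b, n), np.linalg.det(sigma))   # should agree
```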
15,681.4
2017-02-28T00:00:00.000
[ "Physics" ]
Triangulations and Error Estimates for Interpolating Lyapunov Functions The CPA method to compute Lyapunov functions depends on a triangulation of the relevant part of the state space. In more detail, a CPA (Continuous and Piecewise Affine) function is affine on each simplex of a given triangulation and is determined by the values at the vertices of the triangulation. Two important aspects in the proof that the CPA method is always able to generate a CPA Lyapunov function if the triangulation is sufficiently fine, are (a) the geometry of the simplices of the triangulation and (b) error estimates of CPA interpolations of functions. In this paper the aspect (a) is tackled by extending the notion of (h, d)-boundedness, which so far has depended on the order of the vertices in each simplex, and it is shown that it is essentially independent of the order and can be expressed in terms of the condition number of the shape matrix. Concerning (b), existing error estimates are generalised to other norms to increase the flexibility of the CPA method. In particular, when the CPA method is used to verify Lyapunov function candidates generated by other methods. Parts of the results were presented in Giesl and Hafstein (Uniformly regular triangulations for parameterizing Lyapunov functions. In: Proceedings of the 18th International Conference on Informatics in Control, Automation and Robotics (ICINCO), 549–557, 2021). Introduction This paper is concerned with dynamical systems, whose dynamics are defined by an ordinary differential equation (ODE) and in particular with the stability of equilibria of systems. Lyapunov stability theory is of essential importance in dynamical systems and control theory and is studied in practically all textbooks and monographs on linear and nonlinear systems, cf. e.g. [2][3][4] or [5][6][7] for a more modern treatment. The canonical candidate for a Lyapunov function for a physical system is its (free) energy. In particular, a dissipative physical system must approach the state of a local minimum of the energy. For general dynamical systems, however, there is no analytical method to obtain a Lyapunov function. For this reason, various methods for the numerical generation of Lyapunov functions have emerged. To name a few, in [8,9] the numerical generation of rational Lyapunov functions was studied, in [10][11][12] sum-of-squared (SOS) polynomial Lyapunov functions were parameterized using semi-definite optimization, see also [13,14] for other approaches using polynomials, and in [15] a Zubov type PDE was approximately solved using collocation. For more numerical approaches cf. the review [16]. This article is part of the topical collection "Informatics in Control, Automation and Robotics" guest edited by Kurosh Madani, Oleg Gusikhin and Henk Nijmeijer. In [17,18] linear programming was used to parameterize continuous and piecewise affine (CPA) Lyapunov functions. In this approach, a subset of the state space is first triangulated, i.e. subdivided into simplices, and then a number of constraints are derived for a given nonlinear system, such that a feasible solution to the resulting linear programming problem allows for the parametrization of a CPA Lyapunov function for the system. In [19][20][21] it was proved that this approach, referred to as the CPA method, always succeeds in computing a Lyapunov function for a general nonlinear system with an exponentially stable equilibrium, if the simplices are sufficiently small and non-degenerate. 
The main advantages of the CPA method, apart from the fact that it generates true Lyapunov functions and not approximations, are that that it can be combined with faster methods to verify Lyapunov function candidates, see, e.g. [22][23][24][25][26], and that is easily adaptable to different kinds of systems, e.g. to differential inclusions [27,28] or time-discrete systems [29]. The CPA framework can even be extended to compute or verify so-called contraction metrics [30][31][32], see also the recent review [33]. The proof that the CPA method always succeeds in generating a true Lyapunov function for a system with an exponentially stable equilibrium used the concept of (h, d)bounded triangulations, see Definition 10, where h > 0 is an upper bound on the diameters of the simplices and d > 0 quantifies the degeneracy of the simplices. For the definition of (h, d)-bounded triangulations one must consider triangulations, of which the order of the vertices of each simplex has been fixed. The first contribution of this paper is to show that if T is an (h, d)-bounded triangulation in ℝ n , then any triangulation consisting of the same simplices as T , but with a different ordering of the vertices, is an (h, d * )-bounded triangulation with Thus, the property that a triangulation is (h, d)-bounded depends essentially on the simplices of the triangulation T , and not the ordering of the vertices of the simplices. The second contribution is a characterization of (h, d)bounded triangulations using the condition number of the shape-matrices of the simplices, cf. Definition 15. The advantage of this characterization is that the condition number of a matrix is a more familiar concept than the degeneracy as defined in Definition 10. The third contribution is a systematic study of the error estimates used in the CPA algorithm with respect to the norms used. The paper is organized as follows. In Section" Preliminaries", after introducing some notations, triangulations, CPA functions, shape-matrices of simplices, and (h, d)-bounded triangulations are presented. In Section "Construction of CPA Lyapunov functions" the algorithm to compute CPA Lyapunov functions is outlined and the relevance of (h, d)-bounded triangulations for the algorithm is shown. Further, the main results of the paper are proved. In Section "Error estimates in the CPA algorithm" error estimates for general norms in the CPA algorithm are derived and in Section "Conclusions and Future Work" some concluding remarks and ideas for future research are discussed. Preliminaries In this section the notation for the rest of the paper is introduced and some useful facts from linear algebra that will be used to derive the results of the paper recall are presented. Further, triangulations and CPA functions are defined. Notation Vectors in ℝ n are written in bold face and are assumed to be column vectors, e.g. With p = 1 , p = 2 , and p = ∞ there are simple formulas for the induced matrix norms: where a i are the column vectors of A, and ‖A‖ 2 = max ‖x‖ 2 =1 √ x T A T Ax is the square root of the largest eigenvalue of the symmetric and positive-semidefinite matrix A T A . For A ∈ ℝ n×n the norm equivalences for p ∈ {1, ∞} and will be useful in the following. The condition number ‖⋅‖ of a nonsingular matrix A ∈ ℝ n×n with respect to the norm ‖ ⋅ ‖ is defined as The identity matrix in ℝ n×n is denoted by I and its column vectors by e 1 , e 2 , … , e n , i.e. the standard orthonormal basis of ℝ n . 
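Since the induced p-norms and the condition number defined here are used throughout, a quick numerical illustration may help; the matrix below is arbitrary. It checks the max-column-sum and max-row-sum formulas for p = 1 and p = ∞, the largest-singular-value characterisation for p = 2, and evaluates κ_p(A) = ‖A‖_p ‖A⁻¹‖_p.

```python
import numpy as np

def cond_p(A, p):
    """Condition number kappa_p(A) = ||A||_p * ||A^{-1}||_p of a nonsingular A."""
    return np.linalg.norm(A, p) * np.linalg.norm(np.linalg.inv(A), p)

A = np.array([[3.0, 1.0],
              [0.5, 2.0]])

# p = 1: maximum absolute column sum;  p = inf: maximum absolute row sum
print(np.abs(A).sum(axis=0).max(), np.linalg.norm(A, 1))
print(np.abs(A).sum(axis=1).max(), np.linalg.norm(A, np.inf))
# p = 2: largest singular value, i.e. sqrt of the largest eigenvalue of A^T A
print(np.linalg.svd(A, compute_uv=False)[0], np.linalg.norm(A, 2))

print([cond_p(A, p) for p in (1, 2, np.inf)])
```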
The set of the permutations of a set C is denoted by Sym(C) , i.e. Sym(C) is the set of bijective mappings C → C . For an ∈ Sym({1, 2, … , n}) the permutation matrix P ∈ ℝ n×n is defined through One easily verifies that P −1 = P T and ‖P ‖ p = ‖P −1 ‖ p = 1 for p ∈ {1, 2, ∞} . Note that left-multiplication by P permutes the rows of a vector or a matrix and right-multiplication by P permutes the columns of a vector or a matrix. For example, P x = x and , has continuous derivatives of all orders up to and including k. In particular, if G is compact, then they are all bounded. If the set G and m are clear from the context or not important, one also writes g ∈ C k or says g is a C k function. Note that if n > 1 and m = 1 the vector ∇g(x) is assumed to be a column vector. Preliminaries Let us start with a simple result, which will be used later. Lemma 1 Let X ∈ ℝ n×n and let ‖ ⋅ ‖ be any sub-multiplicative matrix norm on ℝ n×n . Assume P, Q ∈ ℝ n×n are (nonsingular) matrices such that Then In particular, if ‖ ⋅ ‖ = ‖ ⋅ ‖ p with p ∈ {1, 2, ∞} and P = P and Q = Q are permutation matrices, then Proof The first statement follows immediately from and the second statement is a direct consequence of the comments in Section "Notation". ◻ P e k = e (k) for k = 1, 2, … , n. A well known useful result on rank 1 corrections of matrices is given by the next lemma from [34]. Lemma 2 (Sherman-Morrison) Let A ∈ ℝ n×n be nonsingular and u, v ∈ ℝ n be such that 1 + v T A −1 u ≠ 0 . Then and Proof (sketch) The formula for the inverse can be easily verified noting that v T A −1 u ∈ ℝ. For the determinant formula, first note that Moreover, for a vector x such that v T x ≠ 0 , one has that w 1 ∶= x and any basis w 2 , … , w n of ker(v T ) = {y ∈ ℝ n ∶ v T y = 0} are linearly independent eigenvectors of I + xv T with eigenvalues 1 + v T x (once) and 1 ( n − 1 times). Since the determinant is the multiple of all the eigenvalues it follows that det(I + xv T ) = 1 + v T x and with x = A −1 u one gets ◻ The convex combination of the vectors x 0 , x 1 , … , x m ∈ ℝ n is defined as the set The vectors x 0 , x 1 , … , x m ∈ ℝ n are said to be affinely independent, if and only if This condition is equivalent to the linear independence of the augmented vectors For affinely independent vectors x 0 , x 1 , … , x m ∈ ℝ n the set S ∶= co{x 0 , x 1 , … , x m } is called an m-simplex and the vectors x i are said to be its vertices and veS ∶= {x 0 , x 1 , … , x m } is the vertex set of S. For an m-simplex S, its diameter is defined as An n-simplex in ℝ n is often referred to simply as simplex. The so-called shape-matrix of the vertices of a simplex, i.e. an n-tuple of affinely independent vectors, is very important for the CPA algorithm, because it is used to measure the (geometrical) regularity of a simplex. It is defined in terms of an n-tuple containing its vertices and is, therefore, dependent of their order in the tuple. The goal of this paper is to show that it is essentially only dependent on the set of vectors and not on the order. Hence, for an n-tuple C = x 0 , x 1 , … , x n , x i ∈ ℝ n , set(C) is defined as the set containing the elements in C, i.e. set(C) ∶= {x 0 , x 1 , … , x m }. Definition 3 Let x 0 , x 1 , … , x n ∈ ℝ n be affinely independent vectors and C = x 0 , x 1 , … , x n an n-tuple. The shape-matrix of C is defined by That is, (x i − x 0 ) T is the i-th row vector of X C . For S = coset(C) the matrix X C is said to be the shape-matrix of the simplex S. Note that an n-simplex S has (n + 1)! 
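Lemma 1 states in particular that pre- and post-multiplication by permutation matrices leaves the p-norms (p = 1, 2, ∞) of a matrix and of its inverse unchanged. A short numerical confirmation (Python; the test matrix, size and random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
X = rng.normal(size=(n, n)) + n * np.eye(n)      # nonsingular test matrix

P = np.eye(n)[rng.permutation(n)]                # row-permutation matrix
Q = np.eye(n)[:, rng.permutation(n)]             # column-permutation matrix

for p in (1, 2, np.inf):
    same_norm = np.isclose(np.linalg.norm(P @ X @ Q, p), np.linalg.norm(X, p))
    same_inv = np.isclose(np.linalg.norm(np.linalg.inv(P @ X @ Q), p),
                          np.linalg.norm(np.linalg.inv(X), p))
    print(p, same_norm, same_inv)                # True, True for each p
```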
potentially different shape-matrices, corresponding to the permutations of its vertices. In the next lemma certain quantities of different permutations are related to each other. Proof Proof of the second statement: If (0) = 0 , then the restriction of to {1, 2, … , n} is in Sym({1, 2, … , n}) . Define the permutation matrix P ∈ ℝ n×n using this restriction. Then X C = P X C and the statement follows immediately by Lemma 1. Proof of the first statement: Let Then and Laplace expansion of the first column of AX a C gives Then holds tr ue for the augmented matr ix. Since | det(P )| = 1 = det(A) it follows that ◻ Triangulations and CPA Functions In this section triangulations and the associated CPA functions, that are the motivation for this paper, are introduced. For our purposes it is sometimes advantageous to have the order of the vertices of every simplex in a triangulation fixed, similar as in [25]. . SN Computer Science for all , ∈ L . The domain of T is defined as and its complete set of vertices is denoted by Further, the diameter of T is defined as If every simplex S ∈ T is uniquely associated to a corresponding n-tuple C = x 0 , x 1 , … , x n of its vertices, T is said to be a triangulation with ordered vertices. Example 1 An example of a triangulation of ℝ n is the standard triangulation T std , see Fig. 1 and e.g. [35], which consists of the simplices fo r a l l z ∈ ℕ n 0 , a l l J ⊂ {1, 2, … , n} , a n d a l l ∈ Sym({1, 2, … , n}) . The functions R J ∶ ℝ n → ℝ n , defined for every J ⊂ {1, 2, … , n} are given by where J (i) denotes the characteristic function equal to one if i ∈ J and equal to zero if i ∉ J . It is easy to see that D T std = ℝ n and V T std = ℤ n . Given a triangulation T , a continuous and piecewise affine function, i.e. CPA function, can be defined by fixing its values at the vertices of the simplices, i.e. V T . Definition 6 (CPA function) Let T be a triangulation in ℝ n . Denote by CPA[T] the set of all continuous functions that are affine on each simplex S ∈ T , i.e. for each S ∈ T there exists a vector w ∈ ℝ n and a number a ∈ ℝ such that Define ∇V ∶= w . Further, x has a unique representation as the convex combination of the vertices of S , i.e. there are unique numbers It is not difficult to see that and from the condition (3) it immediately follows, that even though x ∈ S ∩ S for S ∈ T , the representation (4) is unique. Hence, each V ∈ CPA[T] is completely determined by its values in the vertex set V T . Further, from V(x 0 ) = ∇V ⋅ x 0 + a and one obtains With the n-tuple C = x 0 , x 1 , … , x n and the a corresponding shape-matrix X C it follows that This section is concluded by showing an example of how the value of a CPA function at a point is determined. Furthermore, Construction of CPA Lyapunov Functions Let us elaborate why ‖X −1 C ‖ p for shape-matrices X C is of so much interest for computing CPA Lyapunov functions. Our reference is [21]. To prove that the CPA method always succeeds in computing a CPA Lyapunov functions for the system (1) with f ∈ C 2 (ℝ n , ℝ n ) , one uses the fact that by converse theorems there exists a C 2 Lyapunov function W for the system, cf. e.g. [36][37][38][39] or [4][5][6][7] for a more accessible discussion. Such a Lyapunov function W is interpolated over a triangulation T by a CPA function V ∈ CPA[T] by fixing The function V is said to be the CPA interpolation of W on T . 
Thus, on a simplex S ∶= coset(C) ∈ T , C = x 0 , x 1 , … , x n , the unique convex combination of the vertices for an x ∈ S is used to set Then For every triangulation T with D T ⊂ K , where K ⊂ ℝ n is compact, one has It follows that V approximates W arbitrarily well if T , D T ⊂ K , is a triangulation consisting of simplices with small enough diameters. However, this is not sufficient if one additionally wants ∇V to closely approximate ∇W , i.e. convergence in C 1 (D T ;ℝ) and not only in C(D T ;ℝ) , cf. e.g. the proof of [21,Theorem 5]. This is demonstrated in the next example, where some shape-matrices for a simplex in the plane (triangle) are computed and the gradients of two potential Lyapunov functions are compared to the gradient of their CPA interpolations. Example 9 D e f i n e t h e v e c t o r s The triangle is equilateral if 3 k . Now consider two different orderings of the vertices. For C 0 ∶= (x 0 , x 1 , x 2 ) one has the shape-matrix with inverse Then which has the eigenvalues Now swap the first two vertices and set C 1 ∶= (x 1 , x 0 , x 2 ) , describing the same simplex, but with a different order of the vertices. The shape-matrix becomes SN Computer Science with inverse and has the eigenvalues Thus One clearly sees that ‖X −1 0 ‖ 2 ≠ ‖X −1 1 ‖ 2 . Later it will be shown that a convenient sufficient condition for the convergence ‖∇V − ∇W‖ 2 → 0 for any W ∈ C 2 (S;ℝ) and its CPA interpolation V is that for a constant d > 0 one has Lemma 13 will show that if (7) . In our example That is, depending on h and k: From the formula for d 0 , one sees that condition (7) translates with X ∶= X 0 to for some constants L, H. Alternatively, one can use norm equivalence to see that which delivers again the same condition (8) for (7) with X ∶= X 1 . Let us link these considerations to estimates on the gradient of a Lyapunov function and its CPA interpolation. Consider two potential Lyapunov functions. As the first potential Lyapunov consider W 1 (x, y) = x 2 + y 2 with gradient ∇W 1 (x, y) = (2x, 2y) T . By formula (5) for the gradient ∇V 1 of its CPA interpolation V 1 on S is from which follows. S i m i l a r l y , f o r t h e p o t e n t i a l L y a p un ov f u n c t i o n W 2 (x, y) = (x + y) 2 w i t h g r a d i e n t ∇W 2 (x, y) = (2x + 2y, 2x + 2y) T one obtained for the gradient ∇V 2 of its CPA interpolation V 2 on S that i.e. In both cases one sees that a sufficient condition for ‖∇V i − ∇W i (x, y)‖ 2 → 0 , i = 1, 2 , when h → 0 and k → 0 , is that kh −1 is bounded, which corresponds to the condition L > 0 in (8). To prove that the gradient ∇V of the CPA interpolation V of the C 2 Lyapunov function W approximates ∇W arbitrarily well for appropriate small simplices, let us first consider a fixed simplex S ∶= coset(C) ∈ T , C = x 0 , x 1 , … , x n . In the CPA algorithm certain linear constraints are to be fulfilled at the vertices x i of the simplex S. Note the estimate for every i = 0, 1, … , n . Let us first consider the term ‖∇W(x i ) − ∇W(x 0 )‖ p . Consider the continuously differentiable function g(t) ∶= ∇W(t(x i − x 0 ) + x 0 ) − ∇W(x 0 ) . Then, by the Mean Value Theorem one obtains for some s ∈ (0, 1) and where H W ∶ ℝ n → ℝ n×n the Hessian matrix of W, that Thus, ‖∇W(x i ) − ∇W(x 0 )‖ p can be made arbitrarily small by using a simplex S with small enough diameter. The term ‖∇V − ∇W(x 0 )‖ p is more problematic and is not necessarily small for a small simplex S as was demonstrated in Example 9. Let us analyse this in more detail and derive sufficient conditions. 
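The following sketch (Python) assembles the objects used above: the shape-matrix of an ordered vertex tuple, the gradient of the CPA interpolation obtained by solving X_C ∇V = (V(x_i) − V(x_0))_{i=1..n}, and the observation of Example 9 that ‖X_C⁻¹‖₂ depends on which vertex plays the role of x₀. The triangle, the test function W(x, y) = x² + y², and the thin-triangle parameters h, k are illustrative choices, since the explicit vectors of Example 9 are not legible in the extracted text.

```python
import numpy as np
from itertools import permutations

def shape_matrix(vertices):
    """Shape-matrix of the ordered tuple C = (x0, ..., xn):
    the i-th row is (x_i - x0)^T."""
    V = np.asarray(vertices, dtype=float)
    return V[1:] - V[0]

def cpa_gradient(vertices, values):
    """Gradient of the affine interpolant fixed by the vertex values,
    from X_C * grad = (V(x_i) - V(x_0))_{i=1..n}."""
    w = np.asarray(values, dtype=float)
    return np.linalg.solve(shape_matrix(vertices), w[1:] - w[0])

# CPA interpolation of W(x, y) = x^2 + y^2 on a small triangle
tri = [np.array([0.0, 0.0]), np.array([0.1, 0.0]), np.array([0.0, 0.1])]
vals = [float(v @ v) for v in tri]
print("grad of CPA interpolant:", cpa_gradient(tri, vals))

# ||X_C^{-1}||_2 for all vertex orderings of a thin triangle (base h, height k):
# the value changes with the ordering, but (Lemma 13) only by a bounded factor.
h, k = 1.0, 0.05
thin = [np.array([0.0, 0.0]), np.array([h, 0.0]), np.array([h / 2, k])]
for p in permutations(range(3)):
    X = shape_matrix([thin[i] for i in p])
    print(p, np.linalg.norm(np.linalg.inv(X), 2))
```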
By Taylor's Theorem one has for every j = 1, 2, … , n that for some z j on the line segment between x 0 and x j . Thus Now with one gets Note that the j-th component of the vector v − X C ∇W(x 0 ) , see (10), is given by and can thus be bounded using (9), Thus and Above it was shown that the Lyapunov function W is approximated arbitrary well in the C 1 norm on the simplex S given that diam(S) and ‖X −1 C ‖ p diam(S) 2 are small. Note in particular, that on a compact set K ⊂ ℝ n one has and the CPA interpolation V ∈ CPA[T] of the Lyapunov function W is arbitrarily close to W in the C 1 norm for any triangulation with ordered vertices T with sufficiently small diam(T) and max S ∈T ‖X C ‖ p diam(T) 2 . Because of this, the proof in [21] that the CPA method always succeeds in computing a Lyapunov function if one exists, uses a sequence of finite triangulations T k where the simplices become smaller, i.e. diam(T k ) → 0 as k → ∞ , but also such that max S ∈T k ‖X C ‖ p diam(T k ) 2 → 0 as k → ∞ , or, as a sufficient condition, that Now note that when the simplex S ∶= coset(C) is scaled down, i.e. the vertices of C (or S) are multiplied with a number 0 < s < 1 , then This leads to the following strategy of obtaining a suitable sequence of triangulations T k for proving that the algorithm in [21] succeeds in computing a Lyapunov function on any compact set C , that is contained in the basin of attraction of the equilibrium at the origin. For simplicity some adaptations that have to be made close to the equilibrium are ignored, cf. [21] for the details. The starting point is a triangulation T 0 with D T 0 = ℝ n that is uniformly regular as defined in Definition 10 below. Then an adequate sequence of triangulations T k is generated from the uniformly regular triangulation T 0 . For this fix a constant s fulfilling 0 < s < 1 and define Then by (11) one has diam(T k ) ≤ s k diam(T k ) and with d * as in Definition 10 the degeneracy of T k is upper bounded by d * . It follows that V and ∇V approximate W and ∇W arbitrarily close on C with increasing k. Definition 10 Let T = S ∈L be a triangulation with ordered vertices, S = coset(C ) , where the C are the associated n-tuples of vertices. 1. The degeneracy of T is defined to be the value 2. The triangulation with ordered vertices T is said to be (h, d)-bounded for constants 0 < h, d < ∞ , if diam(T) < h and the degeneracy of T is bounded from above by d. The triangulation with ordered vertices T is said to be uniformly regular if it is (h, d)-bounded for some constants 0 < h, d < ∞. 4. Let T * be a triangulation (not with ordered vertices) that consists of the same simplices as T and assume that T is uniformly regular. Then T * is said to be uniformly regular. Remark 11 Some comments on the last definition are in order: 1. p = 2 was used in the definition of degeneracy to fix the numerical value, but in principle any 1 ≤ p ≤ ∞ can be used to the same effect because of norm equivalences in ℝ n×n . 2. Obviously all triangulations consisting of a finite number of simplices are uniformly regular and this concept is only interesting for infinite triangulations. The triangulation in (12) is a triangulation of a compact set and therefore finite, however, the algorithm to compute CPA Lyapunov functions uses a sequence of finer and finer triangulations arising from scaling down an infinite triangulation and this infinite triangulation should be uniformly regular. 3. 
An equivalent condition for a uniformly regular triangulation with ordered vertices T is that there exist constants 0 < h * , d * < ∞ such that where X C is the shape-matrix corresponding to the simplex S ∈ T . This is shown in Lemma 12 below. 4. A priori it is not obvious that uniformly regular is properly defined for triangulations that don't have ordered vertices. This however, is proved in Lemma 13, and indeed if T is a triangulation with ordered vertices that has degeneracy d, then all triangulations with ordered vertices consisting of the simplices of T will have degeneracy no larger than d * ∶= d(1 + d √ n − 1). 5. An example of uniformly regular triangulation is the standard triangulation from Example 6; see e.g. [35], where it is shown that it is uniformly regular. However, there are many more examples of uniformly regular triangulations with useful approximate symmetries that can be adapted to the system (1) at hand [40,41]. As explained above, the success of the CPA algorithm to compute Lyapunov functions is shown using a sequence of triangulations T k such that each triangulation T k is (h k , d) -bounded, where h k → 0 as k → ∞ and d > 0 is a constant independent of k. Our first main result is Lemma 12, which shows that the concept of (h, d)-bounded triangulations can equivalently be formulated in terms of the norm and the condition number of the shape-matrices of the triangulation. Lemma 12 Let C = (x 0 , x 1 , … , x n ) be an n-tuple of affinely independent vectors in ℝ n , X C be its corresponding shapematrix, and S = coset(C ) the corresponding simplex. Then and and ◻ The next lemma shows that the concept of a uniformly regular triangulation is properly defined for a triangulation (not with ordered vertices). This is shown by demonstrating that if a triangulation with ordered vertices T in ℝ n is (h, d)-bounded for some particular ordering of the vertices of the simplices, then it is (h, d * )-bounded for any ordering with d * = d(1 + d √ n − 1). Lemma 13 Let T = S ∈L be a triangulation with ordered vertices, S = coset(C ) , where the C are the associated n-tuples of vertices. Assume T * = S ∈L is a triangulation consisting of the same simplices as T , but with a (possibly) different ordering of the vertices, i.e. a different set of n-tuples C * associated to the simplices. Then T * is (h, d * ) Proof The case n = 1 is trivial. Thus assume in the following that n ≥ 2. Let C ∶= (x 0 , x 1 , … , x n ) be the n-tuple of vertices associated to the simplex S ∈ T . Its shapem a t r i x i s (1) , … , x (n) ) is the n-tuple of vertices associated to the simplex S in T * . If (0) = 0 , then the shapematrix X C * has the same rows as the shape-matrix X C , just in a (possibly) different order. Then it follows immediately by Lemma 1 that ‖X −1 C * ‖ 2 = ‖X −1 C ‖ 2 and thus If (0) ≠ 0 , then there is an i ∈ {1, 2, … , n} such that (i) = 0 . Define ∈ Sym({1, 2, … , n}) through (i) = (0) and (k) = (k) for k = 1, 2, … , i − 1, i + 1, … , n , i.e. k ≠ i , and denote by P the permutation matrix defined through P e k = e (k) . Then which shows (13). Now and by Lemma 4 | det which is a contradiction because X C * and A −1 are invertible and u ≠ 0. Thus one obtains by Lemma 2 that Further, again by Lemma 2, It is easy to see that Note that R −1 i = R i = R T i and recall that P −1 = P T , from which and follows. Thus ‖R i P ‖ 2 = ‖(R i P ) −1 ‖ 2 = 1 and it follows by Lemma 1 that Hence, and then follows. 
Since the simplex S ∈ T * was arbitrary, it has been shown that T * is (h, d * )-bounded. ◻ The following proposition is a direct consequence of Lemma 13. Proposition 14 Assume T k , k ∈ ℕ 0 , is a sequence of triangulations with ordered vertices in ℝ n , such that T k is (h k , d k ) -bounded, h k → 0 as k → ∞ , and d k ≤ d for all k ∈ ℕ 0 . Let T * k , k ∈ ℕ 0 , be a sequence of triangulations with ordered vertices such that T * k consists of the simplices of T k for every k ∈ ℕ 0 , but with a (possibly) different ordering of the vertices of the simplices. Then there are constants d * By Lemma 13 one can talk about an (h, d)-bounded triangulation T = {S } ∈L even though the vertices of the simplices are not ordered. The understanding is then that no matter how the vertices of the simplices are ordered, the resulting triangulation with ordered vertices in ℝ n is (h, d)bounded in the sense of Definition 10. Thus, one can define uniformly regular for triangulations, of which the vertices of the simplices are not necessarily ordered. Let us put this in a formal definition. Definition 15 (Uniformly regular triangulations) A triangulation T in ℝ n (not with ordered vertices) is said to be uniformly regular if any, and then all, triangulation with ordered vertices that consists of the same simplices as T is uniformly regular. Error Estimates in the CPA Algorithm In this section, some important error estimates for CPA Lyapunov functions V ∶ D T → ℝ when using linear constraints in the CPA algorithm are discussed. Here T is a triangulation and D T its domain. Let c ∶ D T → ℝ be a function that is convex on every simplex S ∈ T ; a sufficient condition, but not necessary, is that c is a convex function on D T . The essential idea of the CPA algorithm is to state constraints at the vertices of a simplex S = coset(C) ∈ T , C = (x 0 , x 1 , … , x n ) , that are linear in the values of a function V ∈ CPA[T] , such that R e c a l l t h a t Further recall that x ∈ S can be written uniquely as a convex combination of the vertices of S, i.e. ∑ n i=0 i x i where i ≥ 0 and ∑ n i=0 i = 1. Because c is convex on S one gets and by writing one sees that a sufficient condition for (14) is that Assume that for a q ∈ [1, ∞] upper bounds functions for nonlinear systems, see e.g. [21,42]. The values E f i,∞ obviously depend on the simplex S ∈ T and are chosen individually for each simplex S ∈ T . Additionally, the E f i,∞ for a given simplex S = co(x 0 , x 1 , … , x n ) ∈ T depend on the vertex x 0 , which one can choose freely among the vertices of the simplex; the inequality (18) will imply (14) for any choice. This is important because often one is interested in an equilibrium at the origin, i.e. f(0) = 0 , and c(x) ≥ 0 with c(x) = 0 , if and only ifx = 0 . In this case 0 ∈ S must imply that 0 is a vertex of S and one must choose x 0 = 0 . Note that if x 0 = 0 , then E f 0,1 = E f 0,∞ = 0 and (18) with i = 0 is trivially fulfilled. Conclusions and Future Work The computation of Lyapunov functions using CPA (continuous and piecewise affine functions) fixes a triangulation of the phase space and determines the values of the function at the vertices. If those values satisfy certain inequalities depending on the system (1) in question, then the (unique) CPA interpolation of these values is a Lyapunov function [19][20][21]. This method can be used in two ways: either the values are determined by solving a linear optimisation problem, or the method is used to verify values that have been found with a different method. 
This paper addressed two aspects of the method: the method requires a sequence of simplices that are not degenerate. The degeneracy so far was dependent on the ordering of the vertices in the simplices. The first contribution of this paper is to eliminated the dependence of the degeneracy on the ordering of the vertices of the simplices in the triangulation. Thus, the degeneracy can be defined for the simplices as geometrical objects. Further, a characterization of the degeneracy in terms of the condition number of the shapematrices was provided. The second contribution is to generalise the error estimates used in the CPA method to general p-norms. While the cases p = 1 and p = ∞ are particularly useful as they result in linear optimisation problems, any case p ∈ (1, ∞) can be useful to verify Lyapunov function candidates that have been computed with a different method. For future work, it would be very interesting to investigate if one can use a Lyapunov function candidate computed by a non-exact method, i.e. a numerical approximation to a Lyapunov function that might fail to fulfill the conditions for a Lyapunov function in some areas, as a starting point in solving the linear program to generate a true CPA Lyapunov function for the system in question. Additionally, the localization of the area where the decrease condition of a Lyapunov function holds true for a complete Lyapunov functions candidate, generated as in [43][44][45][46], would constitute an important step in algorithmically localizing chain-recurrent sets [47][48][49] in dynamical systems. Declarations Conflict of interest The authors declare that they have no conflict of interest. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.
8,520.4
2023-04-28T00:00:00.000
[ "Mathematics" ]
Time domain Green's function for an infinite sequentially excited periodic planar array of dipoles Periodic arrays of radiating or scattering elements play an important role in phased array antennas, frequency selective surfaces and related applications. To gain an understanding of the sparsely explored time domain (TD) behavior of such structures, we have initiated a systematic investigation of relevant canonical TD dipole-excited Green's functions (GF), which so far include those for infinite and truncated line periodic arrays, parameterized in terms of TD-Floquet waves (FW) and truncation-induced TD-FW-modulated tip diffractions. Such waves on semi-infinite and finite square arrays of dipoles have been investigated in the frequency domain, and shown to be useful in practical array applications. This article extends our TD studies to an infinite periodic sequentially pulsed planar array. Like the predecessor GFs, this canonical prototype is simple enough to admit closed form exact solutions, whose interpretation is discussed phenomenologically, aided by asymptotic parameterization in terms of instantaneous frequencies. Preliminary numerical results demonstrate the efficiency of the TD-FW algorithms. I. Introduction Periodic arrays of radiating or scattering elements play an important role in phased array antennas, frequency selective surfaces and related applications. To gain an understanding of the sparsely explored time domain (TD) behavior of such structures, we have initiated a systematic investigation of relevant canonical TD dipole-excited Green's functions (GF), which so far include those for infinite and truncated line periodic arrays [1], [2], parameterized in terms of TD-Floquet waves (FW) and truncation-induced TD-FW-modulated tip diffractions. Such waves on semi-infinite and finite square arrays of dipoles have been investigated in the frequency domain (FD) [3], [4], and shown to be useful in practical array applications [5]. The present contribution extends our TD studies to an infinite periodic sequentially pulsed planar array. Like the predecessor GFs [1], [2], this canonical prototype is simple enough to admit closed form exact solutions, whose interpretation is discussed phenomenologically, aided by asymptotic parameterization in terms of instantaneous frequencies. Preliminary numerical results demonstrate the efficiency of the TD-FW algorithms. 
Statement of the Problem The geometry of the planar array of dipoles oriented along the Jo direction and excited by transient currents in free space is shown in Figla.The period of the array is dl and dz in the z1 and zz directions, respectively.The E field component is simply related to the Jo-directed magnetic scalar potential A which shall be used throughout.A caret A tags timedependent quantities; bold face symbols define vector quantities; izl, iz2 and i, denote unit vectors along zl, 22, and z, respectively FD and T4quantities are related by the Fourier transform pair A(w) = IFm A(t)e-jwtdt, A(t) = & J_", A(w)eJwtdw.The phased array FD and TD dipole currents J ( w ) and j ( t ) , respectively, are given by with 7 = 71 i,,+yz i,,, xm, = mdl i,,+ndz i,,, and 6(x'xm,) = 6(ximdl)b(zLndz).In the m, n-dependent element current amplitudes multiplying the delta function in (1) the FD portions wyl& and wyzdz account for an assumed (linear) phase difference between adjacent elements in the xi and xz directions, respectively.Combined in the vector 7, y1 and yz denote interelement phase gradients normalized with respect to w.The TD portion identifies sequentially pulsed dipole elements, with the element at x = x, , turned on at time t,, = The wavenumber k,,p,(w) = kz1,,izl + kzl,,iz2 is given by the two Floquet-type dispersion relations with p , q = 0, fl, f2, ....The vector apq = al,,iZl + ( Y Z , ~~~~, represents the part of kt,pq = w-y + apq that does not depend on w , will be extensively used throughout the formulation.Thus, in the frequency domain, Poisson summation converts the effect of the infinite periodic array of individual phased m, windexed dipole radiations collectively into an infinite superposition of linearly smoothly phased p , q-indexed equivalent planar aperture distributions that furnish the initial conditions for propagating (PFW) and evanescent (EFW) Floquet-type waves.In the TD, the m, n-indexed sequentially pulsed dipoles are converted collectively into smoothly phased, p , q-indexed impulsive source distributions b(t -7 .x') which travel with phase speed y-' (y = 1 7 1 = d-) in the 7 direction. On the right side of (2), this yields the collective FW-phased plane waves Here, k,"," = kt,pq + kZ& denotes the total FW,, propagation vector, and k,,,(w) = (IC2 -kZl2, -k&,-'/', where IC = w / c , with k the ambient wavenumber and c the ambient wave speed.The square root function is defined so that S'mk,,, 5 0 in the top Riemann sheet, consistent with the radiation condition at p = CO.In (4), Floquet waves with transversedomain propagation constants kt,p, < k or kt,pq > k, with kt,pq = (ki1,, + kzz,p)-1/2, characterize PFW or EFW, respectively, in the z-direction.Owing to the exponential attenuation of EFW, along z, the EFW portion of E&, A,"," converges rapidly away from the array plane and a few terms may suffice for an adequate approximation of the total radiated field. The same operations applied to the right hand side of ( 2 ) , or direct FD inversion of (4), yields the TD-FW The integrand in (5) contributes only for those real (z;,z!J-values which satisfy T + 7 .(xx') + e-'R(x') = 0 , r = t -7 .x.For the radiating case (y < e-'), this condition defines time dependent "equal delay" ellipses with major axis along the phasing direction 7 (see Fig. 
la).For the nonradiating case (y < e-l), with Bey, 2 0 and %my, 5 0, the ellipses are replaced by single branch hyperbolas.The integral in ( 5) is evaluated using first the change of coordinates (x'x) '7 = u l y , ( x ' -x ) .( i + x ~) = uzy.For the radiatingcase, the resulting ul-inner integral has been reduced in [ l ] for a line array of dipoles, and the u2 integral is then evaluated via the formula J ! l e-j'" cos [ b m ] / m d u = Jo(d-), leading to the exact expression with b, , = 7 .a,,, TO = yzz, and U ( T ) = 0 or 1 for T < 0 or r > 0, respectively. The phenomenology and interpretation of ( 6) is directly analogous to that for the line dipole array [l], where we explain the complex-valued TD-field in (6) and define-a p , q-paired "physically observable" TD-FW, yielding the real field A : ; : + AT;; = 2 Re A ; : " .Note that all the zkp, f q contributions arrive simultaneously at a stationary observer.Asymptotic Inversion. This phase contributes to the inverse Fourier integral through the a:ymptotic local frequencies wpg(r, t ) which satisfy the saddle point condition $$(U) = 0, and parameterize the TD-FW wave dynamics.The solutions are real in the causal domain t > to = 7 .x + r , ( T > TO).As in [l], [2], the pqth TD-FW obtained via FD inversion asymptotics is parameterized by these instantaneous frequencies, and its form agrees with the large argument (for Jo) asymptotic approximation of (6). VI. Band Limited Pulse Excitation When each dipole in (1) radiates a practically useful band-limited (BL) pulse G(t -7 .xmn), the corresponding band-limited TD-FW 2gwsBL for p , q # 0 can be evaluated by including the pulse spectrum G(w) in the impulsive inversion integral.For wideband (short duration) pulses, G(w) can be considered slowly varying with respect to the phase &U) [7], and can therefore be approximated by its value at the saddle point frequencies wpq,,(t), i = 1,2.The asymptotic BL-TD field A F y is found by multiplying the ordinary asymptotic AFqy by G(wpq,j). For p = q =, O, the FD-FW is not inverteble by w-asymptotics and is calculated convolving G ( t ) with the TD-FW Akw(t). Fig. 1 . Fig. 1. a) Infinite periodic planar array of electric dipoles.Problem-matched coordinates: ( u I , ~)with SI in the direction of the phasing 7.For t > to =(&st arrival time), contributions arrive at the observer simultaneously from time dependent expanding "equal delay" ellipses.b) Radiated field; parameters in Sec.VI.
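The classification into propagating and evanescent Floquet waves follows directly from the dispersion relation above. The helper below (Python) assumes the standard planar-array Floquet relation k_{x1,p} = ωγ₁ + 2πp/d₁ and k_{x2,q} = ωγ₂ + 2πq/d₂ (the explicit form of α_pq is not legible in the extracted equations) and selects the square-root branch with Im k_z ≤ 0, consistent with the radiation condition; the numerical values at the bottom are illustrative, not those of Sec. VI.

```python
import numpy as np

def floquet_modes(freq, d1, d2, gamma1, gamma2, c=3e8, pmax=1, qmax=1):
    """Classify the FW_pq of a sequentially excited planar dipole array.

    Assumed transverse Floquet wavenumbers (standard periodic-array form):
        k_x1,p = w*gamma1 + 2*pi*p/d1,   k_x2,q = w*gamma2 + 2*pi*q/d2.
    k_z,pq = sqrt(k^2 - kt^2) on the branch with Im(k_z) <= 0;
    kt < k gives a propagating FW (PFW), kt > k an evanescent FW (EFW)
    that decays away from the array plane."""
    w = 2 * np.pi * freq
    k = w / c
    modes = []
    for p in range(-pmax, pmax + 1):
        for q in range(-qmax, qmax + 1):
            kx1 = w * gamma1 + 2 * np.pi * p / d1
            kx2 = w * gamma2 + 2 * np.pi * q / d2
            kt2 = kx1**2 + kx2**2
            kz = -1j * np.sqrt(complex(kt2 - k**2))   # Im(kz) <= 0 branch
            modes.append((p, q, "PFW" if kt2 < k**2 else "EFW", kz))
    return modes

# illustrative numbers: 10 GHz, half-wavelength spacing, radiating case gamma < 1/c
for mode in floquet_modes(10e9, 0.015, 0.015, 0.5 / 3e8, 0.0):
    print(mode)
```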
1,887.2
2000-01-01T00:00:00.000
[ "Engineering", "Physics" ]
Method to improve the noise figure and saturation power in multi-contact semiconductor optical amplifiers: simulation and experiment The consequences of tailoring the longitudinal carrier density along the active layer of a multi-contact bulk semiconductor optical amplifier (SOA) are investigated using a rate equation model. It is shown that both the noise figure and output power saturation can be optimized for a fixed total injected bias current. The simulation results are validated by comparison with experiment using a multi-contact SOA. The inter-contact resistance is increased using a focused ion beam in order to optimize the carrier density control. A chip noise figure of 3.8 dB and a saturation output power of 9 dBm are measured experimentally for a total bias current of 150 mA. ©2013 Optical Society of America OCIS codes: (250.5980) Semiconductor optical amplifiers; (060.4510) Optical communications. References and links 1. J. Mørk, M. L. Nielsen, and T. W. Berg, “The dynamics of semiconductor optical amplifiers: modeling and applications,” Opt. Photonics News 14(7), 42–48 (2003). 2. Y. Liu, E. Tangdiongga, Z. Li, H. de Waardt, A. M. J. Koonen, G. D. Khoe, X. Shu, I. Bennion, and H. J. S. Dorren, “Error free 320Gb/s all-optical wavelength conversion using a single semiconductor optical amplifier,” J. Lightwave Technol. 25(1), 103–108 (2007). 3. A. Borghesani, “Semiconductor optical amplifiers for advanced optical applications,” in International Conference on Transparent Optical Networks, ICTON 2006, 119–122. 4. L. H. Spiekman, “Ubiquitous amplification: applications of the semiconductor optical amplifier,” in the Joint International Conference on Optical Internet and Next Generation Network (COIN-NGNCON 2006), 292–294. 5. D. F. Welch, F. A. Kish, S. Melle, R. Nagarajan, M. Kato, C. H. Joyner, J. L. Pleumeekers, R. P. Schneider, J. Back, A. G. Dentai, V. G. Dominic, P. W. Evans, M. Kauffman, D. J. H. Lambert, S. K. Hurtt, A. Mathur, M. L. Mitchell, M. Missey, S. Murthy, A. C. Nilsson, R. A. Salvatore, M. F. Van Leeuwen, J. Webjorn, M. Ziari, S. G. Grubb, D. Perkins, M. Reffle, and D. G. Mehuys, “Large-scale InP photonic integrated circuits: enabling efficient scaling of optical transport networks,” IEEE J. Sel. Top. Quantum Electron. 13(1), 22–31 (2007). 6. H. J. Kim and J. I. Song, “All-optical frequency downconversion technique utilizing a four-wave mixing effect in a single semiconductor optical amplifier for wavelength division multiplexing radio-over-fiber applications,” Opt. Express 20(7), 8047–8054 (2012). 7. R. Bonk, G. Huber, T. Vallaitis, S. Koenig, R. Schmogrow, D. Hillerkuss, R. Brenot, F. Lelarge, G. H. Duan, S. Sygletos, C. Koos, W. Freude, and J. Leuthold, “Linear semiconductor optical amplifiers for amplification of advanced modulation formats,” Opt. Express 20(9), 9657–9672 (2012). 8. F. Crottini, P. Salleras, M. A. Moreno, B. Dupertuis, B. Deveaud, and R. Brenot, “Noise figure improvement in semiconductor optical amplifiers by holding beam at transparency scheme,” IEEE Photon. Technol. Lett. 17(5), 977–979 (2005). 9. R. Brenot, F. Pommereau, O. Le Gouez, J. Landreau, F. Poingt, L. Le Gouezigou, B. Rousseau, F. Lelarge, F. Martin, and G. H. Duan, “Experimental study of the impact of optical confinement on saturation effects in SOA,” in Optical Fiber Communications Conference (OFC 2005) paper OME50. 10. E. Staffan Bjorlin and J. E. Bowers, “Noise figure of vertial-cavity semiconductor optical amplifiers,” IEEE J. Quantum Electron. 38(1), 61–66 (2002). 11. K. 
Morito, S. Tanaka, S. Tomabechi, and A. Kuramata, “A broadband MQW semiconductor optical amplifier with high saturation output power and low noise figure,” IEEE Photon. Technol. Lett. 17(5), 974–976 (2005). 12. K. Carney, R. Lennox, R. Maldonado-Basilio, S. Philippe, A. L. Bradley, and P. Landais, “Noise controlled semiconductor optical amplifier based on lateral cavity laser,” Electron. Lett. 46(18), 1288–1289 (2010). #180286 $15.00 USD Received 21 Nov 2012; revised 15 Feb 2013; accepted 4 Mar 2013; published 14 Mar 2013 (C) 2013 OSA 25 March 2013 / Vol. 21, No. 6 / OPTICS EXPRESS 7180 13. G. Bendelli, K. Komori, S. Arai, and Y. Suematsu, “A new structure for high-power TW-SLA,” IEEE Photon. Technol. Lett. 3(1), 42–44 (1991). 14. G. Giuliani and D. D’Alessandro, “Noise analysis of conventional and gain-clamped semiconductor optical amplifiers,” J. Lightwave Technol. 18(9), 1256–1263 (2000). 15. M. Yoshino and K. Inoue, “Improvement of saturation output power in a semiconductor laser amplifier through pumping light injection,” IEEE Photon. Technol. Lett. 8(1), 58–59 (1996). 16. S. S. Saini, J. Bowser, R. Enke, V. Luciani, P. J. S. Heim, and M. Dagenais, “A semiconductor optical amplifier with high saturation power, low noise figure and low polarization dependent gain over the C-band,” in Lasers and Electro-Optics Society (LEOS 2004), 102–103. 17. R. Lennox, K. Carney, R. Maldonado-Basilio, S. Philippe, A. L. Bradley, and P. Landais, “Impact of bias current distribution on the noise figure and power saturation of a multicontact semiconductor optical amplifier,” Opt. Lett. 36(13), 2521–2523 (2011). 18. T. Mukai and Y. Yamamoto, “Noise in an AlGaAs semiconductor laser amplifier,” IEEE J. Quantum Electron. 18(4), 564–575 (1982). 19. E. Desurvire, “On the physical origin of the 3dB noise figure limit in laser and parametric optical amplifiers,” Opt. Fiber Technol. 5(1), 40–61 (1999). 20. M. Shtaif, B. Tromborg, and G. Eisenstein, “Noise spectra of semiconductor optical amplifiers: relation between semiclassical and quantum descriptions,” IEEE J. Quantum Electron. 34(5), 869–878 (1998). 21. H. A. Haus, “The noise figure of optical smplifiers,” IEEE Photon. Technol. Lett. 10(11), 1602–1604 (1998). 22. T. Briant, P. Grangier, R. Tualle-Brouri, A. Bellemain, R. Brenot, and B. Thedrez, “Accurate determination of the noise figure of polarization dependent optical amplifiers: theory and experiment,” J. Lightwave Technol. 24(3), 1499–1503 (2006). 23. D. M. Baney, P. Gallion, and R. S. Tucker, “Theory and measurement techniques for the noise figure of optical amplifiers,” Opt. Fiber Technol. 6(2), 122–154 (2000). 24. M. J. Connolly, Semiconductor Optical Amplifiers (Kluwer Academic Publishers, 2002), Chap. 3. 25. C. Gallep, A. Rieznik, H. Fragnito, N. Frateschi, and E. Conforti, “Black-box model for the complete characterization of the spectral gain and noise in semiconductor optical amplifiers,” Opt. Express 14(4), 1626– 1631 (2006). 26. J. Park and Y. Kawakami, “Time-domain models for the performance simulation of semiconductor optical amplifiers,” Opt. Express 14(7), 2956–2968 (2006). 27. M. J. Adams, J. V. Collins, and I. D. Henning, “Analysis of semiconductor laser optical amplifiers,” IEE Proc-J 132, 58–63 (1985). 28. T. Durhuus, B. Mikkelsen, and K. E. Stubkjaer, “Detailed dynamic model for semiconductor optical amplifiers and their crosstalk and intermodulation distortion,” J. Lightwave Technol. 10(8), 1056–1069 (1992). 29. M. J. 
Connelly, “Wideband semiconductor optical amplifier steady-state numerical model,” IEEE J. Quantum Electron. 37(3), 439–447 (2001). 30. H. T. Friis, “Noise figures of radio receivers,” Proc. IRE 32, 419–422 (1944). 31. Y. Yamamoto and K. Inoue, “Noise in amplifiers,” J. Lightwave Technol. 21(11), 2895–2915 (2003). 32. F. Surre and P. Landais, “A semiconductor optical amplifier with a reduced noise figure,” UK patent GB0821602.0, Feb. 9, 2011. Introduction The potential of semiconductor optical amplifiers (SOA) as non-linear entities within optical communication systems has resulted in a surge in experimental and theoretical investigations over the last two decades [1]. SOAs have garnered interest as candidates to replace current electrical methods of signal processing within optical networks, with the capacity to work at bit rates of 320Gb/s [2]. The benefits when operated in saturation have also been demonstrated as high speed wavelength converters and remote modulators [3][4][5]. In addition SOA present economic advantages, particularly pertaining to fabrication, low power consumption and suitability for integrated photonics [6]. Despite these attractive features, the role of SOAs in linear amplification systems is limited. While there is some use of SOAs in such applications [7], the Erbium doped fiber amplifier (EDFA) is the commonly chosen option. One of the major limitations of SOAs as linear components is a high noise figure (NF). To date, a number of methods for NF reduction have been proposed. An improvement in steady state NF was realized through optical injection of a holding beam in both co-and counter-propagation geometries, essentially resulting in a modification of the carrier density profile [8]. More recent works include alterations to the confinement factor [9], vertical cavity SOAs (which have the added advantage of low modal loss) [10], and various device structures ranging from unique waveguide termination to low internal loss structures [11]. The use of a lateral lasing cavity to clamp a portion of the SOA waveguide, in order to shape the carrier density profile, has also been considered [12]. The results of many of these methods have been largely positive in terms of NF but have presented other disadvantages. For example, solutions involving a holding beam necessitate additional equipment and expense. Reducing the confinement factor to decrease NF also decreases the optical gain, and vertical-cavity SOAs suffer from a limited gain bandwidth due to the use of a resonant cavity. An additional drawback of SOAs is that the saturation output power (P sat ) can be low, limiting the dynamic range of input powers over which the signals can be amplified cleanly without distortion. This can be a significant penalty when the SOA is being used as a multichannel amplifier, due to the effects of channel crosstalk. As with the NF, researchers have investigated ways to increase P sat such that the range of operation is increased without undermining other device characteristics. Techniques to increase saturation output power include reducing the confinement factor [9], the use of flared waveguides [13], gain clamping [14], pump beams [15] and varying the contact resistance in order to increase the current density along the waveguide [16]. 
These techniques have various disadvantages, such as reduced gain when reducing the confinement factor, increased cost of fabrication for gain clamped devices and, similar to attempts to reduce the NF, the additional cost and system complexities associated with pump beam schemes. Previously, it was experimentally demonstrated that varying the carrier density profile in a multi-contact SOA could influence the chip-NF and P sat [17]. The focus of this paper is to present a rate equation model for the numerical investigation of the physics underlying this phenomenon. Simulation results for different carrier density profiles are discussed. It will be shown that the carrier density profile can be tailored to minimize the chip NF or to optimize P sat for a fixed total bias current. The model is validated by comparison with experimental results obtained using a multi-contact bulk SOA. Additionally, it is shown that good control of the inter-contact resistance is important in order to realize the potential for NF reduction. The inter-contact resistance of the fabricated devices is increased using a Focused Ion Beam (FIB) technique and yields closer agreement between the experimental and simulation results. Noise figure As a signal is amplified it experiences degradation due to additive noise, an unavoidable consequence of spontaneous emission. The fundamental limits on information systems are governed by this noise, and so mitigation is of the utmost importance. Many publications have dealt with the theoretical determination of optical noise and its origins in great detail [18][19][20][21][22][23]. A widely accepted definition exists to facilitate consistent measurement in the laboratory [23]. The noise is quantified in the optical and electronic domains by the NF, defined as the degradation of the signal-to-noise ratio (SNR) of a signal by propagation through active and passive elements. In linear units this is expressed as, where SNR in and SNR out represent the SNR of the input and output signals, respectively. For clarity, when nf is given in lower case, it is calculated in linear scale, and in upper case, NF, denotes a decibel (dB) scale. The resultant NF formula, which takes into account the dominant individual sources of noise, is given in terms of experimentally obtained optical parameters [23], 10 2 1 10 log . The first term in parenthesis represents the noise associated with beating between the signal and the co-polarized amplified spontaneous emission (ASE) within the measurement bandwidth where G is the single pass gain, ν is the optical frequency of the injected signal, and ρ ASE is the ASE power spectral density within this bandwidth. This is the dominant source of noise in SOAs of high gain. The second term in brackets relates to the shot noise of the signal itself. For appreciable gain the latter may be neglected, as the shot noise is relatively insignificant at higher signal powers. Saturation output power As the signal power injected into an SOA is increased, or as amplified spontaneous emission increases with gain, the carriers in the active region used for amplification are depleted, causing the gain to decrease. The saturation output power is defined as the output power emitted by the SOA at the point at which the gain has reduced by 3dB. 
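The extracted text garbles Eq. (2); in the form described (signal-spontaneous beat noise plus signal shot noise, cf. [23]) it reads NF = 10 log₁₀( 2ρ_ASE/(hνG) + 1/G ). A small helper implementing this relation (Python; the gain and ASE spectral densities at the bottom are illustrative numbers, not measurements from this paper):

```python
import numpy as np

H = 6.626e-34                     # Planck constant [J*s]

def noise_figure_db(gain_db, rho_ase, wavelength_m=1550e-9):
    """Chip noise figure from gain and co-polarised ASE power spectral
    density rho_ase [W/Hz]:
        nf = 2*rho_ASE/(h*nu*G) + 1/G   (sig-spont beat + shot noise),
        NF = 10*log10(nf)."""
    g = 10 ** (gain_db / 10)
    nu = 3e8 / wavelength_m
    return 10 * np.log10(2 * rho_ase / (H * nu * g) + 1 / g)

# illustrative: 20 dB gain, ASE density equal to 1x, 2x, 4x (h*nu*G)/2 ... etc.
nu = 3e8 / 1550e-9
for rho in (1 * H * nu * 100, 2 * H * nu * 100, 4 * H * nu * 100):
    print(round(noise_figure_db(20.0, rho), 2), "dB")   # ~3, ~6, ~9 dB
```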
The saturation output power is given by [24]

P_sat,out = (A / Γ) I_sat,out,   (4)

where A is the active region area in the plane perpendicular to propagation and Γ is the confinement factor. The saturation output intensity I_sat,out is given by

I_sat,out = [G_0 ln 2 / (G_0 − 2)] I_sat,   (5)

where G_0 is the unsaturated gain. The saturation intensity I_sat is inversely proportional to the spontaneous carrier lifetime τ. Thus a decrease in confinement factor or spontaneous carrier lifetime, or an increase in waveguide area, can give rise to an increased saturation power [9]. These approaches are not incompatible with our solution, which is described below.

Structure of simulation

A variety of approaches to modelling SOAs is found in the literature [25,26]. The simulation tool presented herein has been developed based on a travelling wave model [27,28]. The SOA is modelled as n subsections, each of them electrically isolated from its neighbours. The values of all variables are calculated in each subsection. The carrier density is set at an initial value and is updated from the determined values of the ASE and signal fields. This process continues over a defined number of iterations: the values for the carrier density as well as the ASE and signal fields are used as initial conditions for the next iteration. The model has been developed to investigate the effects of tailoring the longitudinal carrier density and to see how it may provide control of the NF and power saturation. The noise model used in the simulation is deterministic. In this case the level of ASE is determined for the signal wavelength only, using the material gain approximation from [29]. This noise model is less accurate for very low input powers, where the effect of wide-band ASE on the carrier density is proportionally greater. However, the powers used in the simulation study are appropriate to the deterministic model and, as will be shown, there is good agreement between the trends of the simulation and of the measurements. Furthermore, the model considers the ideal case in which the carrier density profile is determined by the injected bias current only, i.e., each subsection is perfectly isolated and carrier diffusion is not taken into account. An overview of the concept of the simulation is given in Fig. 1, which depicts the sectioning of the SOA from subsection 1 to subsection n, including the level of carrier density and the forward and backward travelling electric fields. The ASE intensity and the signal intensity at position z along the waveguide are calculated as functions of angular frequency, ω, by slowly varying envelope functions. The calculation uses the set initial value of the carrier density and the material gain, which is determined from the physical properties of the SOA specified in the simulation. These physical properties are based on typical values found in the literature. The values of the ASE intensity, I_m, for each successive subsection are determined by the values in the previous subsection according to boundary conditions. These boundary conditions govern how the facet reflectivity affects the signal and the ASE: they relate the ASE intensity travelling in the forward (z+) and backward (z−) directions at the subsection boundaries and at the facets, where m indicates the subsection number and r_1,2 is the reflectivity of facet 1 or 2. A similar set of relations determines the behaviour of the signal intensity with respect to time, t, and position, z.
The boundary conditions for the signal envelope functions are analogous, with ω_p0 and ω_0 denoting the gain peak angular frequency and the signal angular frequency, respectively. Using the values of the ASE envelope function, modified by the boundary conditions above, the spontaneous emission photon density is obtained, where N_m is the carrier density in subsection m, G_m is the single pass gain (expressed in linear scale) in subsection m calculated from the carrier density and material gain [29], R_r(N_m) is the radiative recombination rate, α is the internal loss coefficient and β is the effective spontaneous emission factor, a measure of the spontaneous emission coupled to the travelling mode. The photon density for the signal is obtained in a similar way from the signal envelope function. The values of the spontaneous emission photon density and the signal photon density are then used to solve the carrier density rate equation, where i_m is the bias current injected at an individual subsection m, q is the charge of the carriers and V is the volume of the active region in subsection m. R(N_m) represents the total recombination rate, equal to AN_m + BN_m² + CN_m³, where A is the non-radiative recombination coefficient, B the radiative recombination coefficient, and C the Auger recombination coefficient. This entire process comprises a single iteration of the model. The solved value of the carrier density is used to calculate the ASE and signal fields for the next iteration. Once convergence is reached, the gain and the NF are obtained. The NF of each subsection in the SOA is determined from the population inversion factor, n_sp, which is integral to the concept of reducing the NF, as detailed below.

Calculation of the noise figure from carrier density

The NF is, in general, proportional to the population inversion factor n_sp, which is defined as

n_sp = γ / (γ − α),

where γ and α represent the stimulated emission and absorption rates, respectively. These rates depend on the level of carrier density in the SOA. Expressing the optical NF of Eq. (2) in terms of n_sp therefore allows the qualitative argument that increasing the conduction band population has the effect of decreasing the NF. The additive noise power of an amplifier is given by

P_ASE = n_sp hν (G − 1) B_0,   (11)

where B_0 is the measurement bandwidth and G is the single pass chip gain. As ρ_ASE is equal to the noise power per unit bandwidth, Eq. (11) can be re-written as

ρ_ASE = n_sp hν (G − 1).   (12)

Replacing ρ_ASE in Eq. (2) with Eq. (12) yields an expression for the noise figure in terms of n_sp and gain, expressed in linear units,

nf = 2 n_sp (G − 1)/G + 1/G.   (13)

For the situation where G >> 1, this expression reduces to nf = 2 n_sp, which, in the case of total population inversion (n_sp = 1), leads to a quantum limit of the NF of 3 dB [19].
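To make the link between inversion factor, per-subsection gain and overall NF more tangible, the following Python sketch evaluates Eq. (13) for a short chain of subsections and accumulates a total noise figure with a Friis-type cascade rule [30], the mechanism exploited in the next section. The subsection values are invented for illustration, and the cascade expression used here is the standard amplifier-chain form rather than necessarily the exact formulation used in the model.

```python
import numpy as np

def nf_subsection(n_sp, g):
    """Linear noise figure of one subsection, Eq. (13)."""
    return 2.0 * n_sp * (g - 1.0) / g + 1.0 / g

def nf_cascade(n_sp_list, gain_db_list):
    """Accumulate the total linear NF of a chain of subsections with the standard
    Friis cascade rule: contributions of later stages are divided by the gain
    accumulated in front of them, so the first subsections dominate."""
    g = 10.0 ** (np.asarray(gain_db_list) / 10.0)
    nf = np.array([nf_subsection(n, gi) for n, gi in zip(n_sp_list, g)])
    total, g_acc = nf[0], g[0]
    for k in range(1, len(nf)):
        total += (nf[k] - 1.0) / g_acc
        g_acc *= g[k]
    return 10.0 * np.log10(total)

# Illustrative 'low noise' profile: high inversion (low nf) at the input end
n_sp_profile = [1.1, 1.2, 1.5, 2.5]      # per-subsection inversion factors
gain_profile_db = [8.0, 6.0, 3.0, 1.0]   # per-subsection gain [dB]
print(nf_cascade(n_sp_profile, gain_profile_db))   # ~3.5 dB
```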
To control the NF at i_tot, the carrier density at the designated input of the device is held at a high value, increasing the gain and lowering the NF at this point. Backward travelling ASE cannot be neglected, as it consumes carriers at the input of the device which would otherwise be available for signal gain, and so this effect must be reduced. A low carrier density is created at the output end of the device to satisfy this requirement. This is the carrier density profile shown at the top of Fig. 2 (profile 1). The total noise figure of the sectioned device can be written as a cascade of the subsection noise figures [30],

nf_tot = nf_1 + (nf_2 − 1)/g_1 + (nf_3 − 1)/(g_1 g_2) + ... + (nf_n − 1)/(g_1 g_2 ... g_(n−1)),   (14)

where nf_m and g_m are the NF and gain, respectively, of a particular subsection m. Each successive term in the equation is reduced in value compared with the last, due to the presence of an additional gain factor at every stage. The NF is therefore built up cumulatively by successive terms of decreasing magnitude, and the first few terms make the most significant contribution to the overall value of nf_tot. This principle is illustrated further in Fig. 3 below.

Fig. 3. Under the same i_tot, the contribution of each subsection to the increase in the overall NF (in dB) for three different distributions of carrier density: profiles 1 and 2, as well as the profile of a standard SOA.

Compared with both the standard profile and profile 2, the increase in NF for profile 1 in the initial subsections is smaller. This is in line with what we expect based on Eq. (14), in so far as we try to minimize the input NF. In addition, while the increase in NF for this profile in the final subsections is larger than in the other cases, the magnitude is relatively insignificant and so cannot be resolved in this figure. It should be noted that this figure represents the cumulative addition of NF to the initial NF of the first modeled subsection. The individual NF of this subsection is below the quantum limit of 3 dB in each bias configuration. This anomaly is explained by the sectioning of the model: if this particular section were examined on its own, it would itself have to be sectioned in order to accurately determine the NF, which would then be greater than 3 dB. The above scenarios illustrate the concept by comparing three extreme examples. From this comparison it is clear that, to achieve the lowest nf_tot, the subsections at the input of the device must be maintained at a high carrier density. A three-contact device is simulated by our model. A total of 80 mA is injected into the first contact, and the bias current applied to the middle and output contacts is varied from 0 to 90 mA. The NF values are plotted in Fig. 4. With a high bias injected into the first simulated contact, there is an evolution of the NF reduction as the middle contact bias is increased and the output bias is held constant below 30 mA. As predicted, the NF is expected to decrease from its maximum value, for low middle and output biases, to its minimum for a carrier profile approaching that of profile 1. The increase in NF in the upper-right portion of the graph compared with the upper-left portion indicates that the NF cannot be minimized by arbitrarily pumping the SOA, and that therefore, for a specific total bias current, a specific carrier density profile exists where the NF is optimized.

Controlling saturation power

In contrast to the conditions for reduced NF, in order to increase the saturation output power of the SOA the current density in the device must be increased.
The reason for this lies in the fact that the saturation power is inversely proportional to the spontaneous carrier lifetime, which in turn is inversely proportional to the carrier density (see Eq. (4). Thus operating at a higher bias current will increase P sat . This also has the effect of increasing the ASE, which causes a decrease in the stimulated emission lifetime, further increasing the saturation power. However, for a fixed total bias current, the most efficient distribution of carriers is not an equal injection of current throughout the entire SOA. For example for the carrier density profile 2, as the signal propagates and is amplified along the waveguide, increasing the carrier density linearly reduces the carrier lifetime as the signal intensity increases, ensuring that the gain remains unsaturated at any given point. By contrast, for profile 1, the low carrier density at the output facet, where the signal intensity is stronger than the input, causes saturation in this section. This idea is demonstrated in Fig. 5, which shows the simulated evolution of the signal photon density for three bias profiles as it propagates through the waveguide. In this context, photon density is the number density of photons in a given section m. A signal power of 5 dBm at a wavelength of 1570 nm is injected to saturate the SOA. The reduction in gain due to saturation can be considered as a decrease in the slope of the curves representing the signal photon density. In the final subsections (18-24), this decrease in slope can be seen for profile 1, and to a lesser extent the standard profile. However, for profile 2 the gain remains unsaturated in these subsections. For clarity in the remainder of the Paper we refer to profile 1 as the low noise profile and profile 2 as the high P sat profile. Figure 6 shows the output saturation power of the modeled SOA as a function of the bias current supplied to the input and middle contacts. The output contact bias current is held constant at 90 mA, in order to maximize the gain where the signal is strongest. As expected, the value for P sat increases with the total bias current, reaching a maximum value of ~16 dBm when all contacts are biased at 90 mA. The red line highlights the P sat values for a total bias current of 150 mA. The reason for this limit is poor thermal bonding between the SOA chip and the mount, and leads to a drop in output power beyond this bias current. Within this limit, the highest P sat value is obtained with an input section current of 10 mA and a middle section current of 50 mA. Figure 7 shows the simulated gain, NF and saturation power values for various bias conditions modeled for an SOA with three contacts. The middle contact bias is held constant whilst the two facet contacts are varied. This represents a transition from the low noise to high P sat profiles for a constant total injected bias. A signal with a power of −15 dBm at 1570 nm is injected. The maximum gain is observed while operating in the standard condition, with equal current injection to all contacts, corresponding to a single contact SOA. The magnitude of the gain decreases as the carrier density profile becomes less symmetrical. As expected, the NF is observed to decrease as the bias condition approaches that of the previously discussed low noise profile, and the saturation power increases for the opposite bias distribution, like that shown as the high P sat profile. 
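As a brief aside, the lifetime argument at the start of this subsection can be made quantitative. With the recombination rate R(N) = AN + BN² + CN³ introduced earlier, the effective spontaneous carrier lifetime can be estimated as τ(N) = N/R(N), which falls as the carrier density rises; since I_sat (and hence P_sat) scales with 1/τ, a higher local carrier density pushes saturation to higher powers. The short Python sketch below only illustrates this scaling; the coefficient values are typical textbook magnitudes, not the parameters of Table 1.

```python
# Illustrative bulk InGaAsP recombination coefficients (orders of magnitude only)
A = 1.0e8        # non-radiative coefficient [1/s]
B = 1.0e-16      # radiative coefficient [m^3/s]
C = 7.5e-41      # Auger coefficient [m^6/s]

def carrier_lifetime(n):
    """Effective spontaneous carrier lifetime tau = N / R(N), R(N) = A*N + B*N^2 + C*N^3."""
    return n / (A * n + B * n**2 + C * n**3)

for n in (0.5e24, 1.0e24, 2.0e24, 3.0e24):          # carrier densities [1/m^3]
    tau = carrier_lifetime(n)
    # I_sat (and hence P_sat) scales as 1/tau: a shorter lifetime means later saturation
    print(f"N = {n:.1e} m^-3 -> tau = {tau * 1e9:.2f} ns")
```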
It is observed that the NF varies over a range of 2.7 dB, while the saturation output power has a range of 4.2 dB. Controlling the carrier density profile therefore has a larger impact on the power saturation than on the NF.

Model results

The experimentally characterized multi-contact SOA studied in this paper is 700 μm long and has three contacts. After fabrication there is a slight variation in the size of the contacts. In order to represent the actual contact sizes of the SOA more accurately, 8 of the 24 subsections used in our simulation represent the input contact, 9 subsections the middle contact and 7 subsections the end contact. For the low noise profile, 80 mA is injected at the input contact, 50 mA at the middle contact and 20 mA at the output contact, and vice versa for the high P_sat case. The standard SOA case is modeled by injecting 50 mA into each contact. The total bias for all three cases is limited to 150 mA. These dimensions and this current limitation are determined by the prototype device. The parameters used in the simulation are given in Table 1 in the Appendix. These parameters are based on both known and typical values [28,29]. Figure 8 shows the modeled distribution of the ASE photon density along the waveguide for the three relevant bias configurations. Of particular note is the reduction of ASE for both the high saturation power and the low noise bias cases. This phenomenon is due to reduced amplification of spontaneous emission in regions of low current density, reducing the ASE at the facets.

Fig. 8. Simulated evolution of the ASE photon density along the waveguide for the standard, low noise and high P_sat cases.

The simulated gain and NF for an injected power of −15 dBm at various wavelengths are plotted in Fig. 9 for the three bias conditions. The minimum NF determined by the simulation is 4.5 dB at 1570 nm, for the low noise bias configuration. The maximum gain is observed in the standard bias configuration and is 18.1 dB at a wavelength of 1555 nm. A gain difference of approximately 0.7 dB is observed between the low noise and high P_sat cases. This discrepancy is explained by a slight difference in the size of the individual contacts of the SOA, which leads to different levels of carrier density per equivalent section for the low noise case vis-à-vis the high P_sat case. A reduction in the NF of approximately 0.6 dB between the low noise configuration and the standard configuration is visible in the simulated data. Whereas a high current density at the input of the device leads to a lower NF, increasing the injected current along the length of the waveguide towards the output facet increases the saturation power of the device, allowing it to operate as a linear amplifier over a wider range of input powers.

Fig. 10. Gain (dashed) as a function of the output power for the simulated device with an injected signal at 1570 nm. The overall bias current in all three cases is 150 mA.

Figure 10 shows the gain as a function of output power for the simulated device.
The injected signal is at the peak gain wavelength for the low noise and high P sat profiles, which is 1570 nm for the simulation. An increase in saturation power of 1.7 dB, from 10.8 dBm to 12.5 dBm, is observed between the standard SOA case and the high saturation power case. Experimental results Based on the previous computational investigation, a multi-contact bulk SOA has been fabricated and tested. The concept of reducing the NF based on tailoring the carrier density profile has been previously demonstrated in a single contact SOA incorporating a lateral lasing cavity in order to control the carrier density distribution [12]. The multi-contact SOA configuration is cheaper to process and, as can be seen from the simulation results, offers the possibility to operate in different modes. In practice, one may employ numerous current sources to drive each contact independently, or use a single source in conjunction with a resistor network [32]. The device under test is a bulk InP/InGaAsP SOA, angled and antireflection coated, with a length of approximately 700 μm. Three electrodes are used for current injection into three sections of length 236 μm, 254 μm and 210 μm. Electrical isolation between the contacts is provided by a 10 μm slot, and resistance between sections is measured to be 300 Ω. For manufacturing reasons, the current in each contact is limited to 90 mA. A schematic of the SOA is shown in Fig. 11. A preliminary experimental characterization of this device has been previously reported in [17]. In this section the experimental results provide validation of the modeling approach and, furthermore, comparison with the model also sheds further insight on some of the experimental observations. Fig. 11. Schematic of multi-contact SOA, with three sections, driven by three separate current sources. For the experimental characterization, the low noise profile consists of 90 mA injected at the input contact, 50 mA in the middle and 10 mA at the output contact, with the opposite injection scheme for the high P sat case. As for the simulation, the standard case consists of 50 mA injected to each contact. The reason for the difference in bias current in the simulated case is due to a difference in the transparency current in the low bias contact between the simulation and the actual device. Noise figure measurement The experimental setup is described in detail in [17]. The gain and NF data are acquired using the optical NF Eq. (3). To measure the level of co-polarized ASE, necessary for the NF calculation, a free space polariser is set to the polarization of the input signal and is then placed at the output of the SOA in the absence of an input signal. The ASE is then measured both with the polariser and without, using a free space power meter. The losses in the setup from various elements such as mirrors and fibre couplers are taken into account by measuring the signal before and after these elements. Most importantly, the modal mismatch loss between the elliptical mode of the SOA and the circular optical fibre mode is determined by measuring the output signal of the SOA after free space coupling through lenses, and subsequently after coupling from the lenses into a fiber. The difference will be the modal mismatch, and assuming reversible optical paths, should also determine the modal mismatch at the SOA input. The ASE and signal powers used in the calculation in Eq. (3) are determined from the SOA output spectrum. 
The ASE is calculated by extrapolating underneath the signal, and is then corrected for the level of co-polarization.

Results

The gain and NF for an injected power of −15 dBm at various wavelengths are plotted in Fig. 12 for the three selected bias conditions. The minimum NF observed in the experimental data is 5.0 ± 0.2 dB at a wavelength of 1568 nm, measured for the low noise bias configuration. This compares favorably with NF values recorded for commercial SOAs. The corresponding minimum NF determined by the simulation is 4.5 dB at 1570 nm, for the low noise bias configuration. The maximum gain is again observed in the standard bias configuration and is measured as 17.5 ± 0.2 dB at a wavelength of 1563 nm. The gain difference of approximately 0.7 dB observed between the low noise and high P_sat cases in the simulated data is also present here. This discrepancy can be explained by a slight difference in the size of the individual sections of the SOA, as previously discussed. A reduction in the NF of approximately 0.3 dB between the low noise configuration and the standard configuration is visible at certain wavelengths, although this is not consistent across the entire bandwidth and is a smaller effect than anticipated. However, there is a clear reduction in the NF of approximately 2 dB at 1570 nm for the standard and low noise profiles compared with the high P_sat profile. The trend of the experimental data is in agreement with the simulation data presented in Fig. 9. In Fig. 7 it can be clearly seen that the slope of the NF versus bias configuration over the range from the low noise profile to the standard profile is relatively flat, whereas the NF is more sensitive to the change in bias over the range from the standard profile to the high P_sat profile. Furthermore, Fig. 13 shows the gain as a function of output power for the three aforementioned bias configurations. The injected signal is at the measured peak gain wavelength of 1568 nm. A saturation power of 9 dBm is measured for the high P_sat bias case. This represents a 1.5 dB increase over the standard case and a 3 dB increase over the low NF case. This result corroborates the trend seen in the simulations: Fig. 7 clearly shows that P_sat is sensitive to changes in the bias configuration over the full range of bias profiles, and that a similar level of change in P_sat is expected as the profile is varied from the low noise case to the standard case and from the standard profile to the high P_sat profile. The experimental results validate the modeling approach and present an excellent match with the simulation results presented in Fig. 10. It should be acknowledged that the P_sat value of 9 dBm is relatively low compared with a commercial SOA. It is common, however, for commercial booster SOAs to be significantly longer than our device, as device length directly increases the saturation power, and also to be pumped at higher bias currents. It is noted, however, that the simulation predicts a greater reduction of the NF, of 0.6 dB, relative to the standard case. The lessening of the effect in the experimental device may be due to carrier leakage across the sections. This would broaden the carrier profile and impair the noise reduction effect.
As previously discussed, the simulation considered full isolation between sections, and no carrier diffusion was taken into account. To explore further whether improved control of the carrier density profile can produce a further reduction in the NF relative to the standard profile, we attempted to increase the resistance between adjacent contacts using a Focused Ion Beam technique. The results are presented in Section 5.

FIB experiment

As discussed above, diffusion of carriers between the sections of the SOA could reduce the effectiveness of the injected bias current profile. Using a focused ion beam (FIB) technique, the resistance between the electrical contacts along the top of the SOA is increased. The FIB uses a focused beam of Ga+ ions to sputter atoms from the surface of the sample in question. In this way, the slots separating the electrical contacts can be etched deeper into the InP cladding of the SOA, thereby increasing the resistance. A second multi-contact SOA was used to test this approach. Compared with the SOA described earlier, the sections of the second SOA are less uniform in size. The input and middle sections are approximately 271 μm long, while the output section is much shorter, at 156.9 μm. Consequently, the standard bias profile for this device, corrected for the section sizes, is 59, 58, 33 mA from input to output. Similarly, the corrected low noise profile is 92, 43, 17 mA. The inter-sectional resistances measured before the FIB etching process are 300 Ω at slot A (between the input and middle sections) and 350 Ω at slot B (between the middle and output sections). As a result of the FIB process, slot A is etched down to a depth of 1500 nm, while slot B is etched to approximately 700 nm. The FIB technique is susceptible to the presence of excess surface charge, which can cause the sample to drift and makes it difficult to achieve the same depth for both slots. The measured resistance values for slots A and B after the FIB process are 600 Ω and 400 Ω, respectively. The FIB etching depth is kept smaller than the depth of the active region in order to avoid an excessive effect on the confinement factor. If there is an effect due to the presence of the air slots, it would be to reduce the refractive index in the cladding and thus increase the confinement in the active region due to index guiding; this would have the effect of increasing the NF [9]. The measured gain values are shown in Fig. 14, for both the standard profile and the low noise profile, before and after the FIB etching. Of note for the standard profile is the reduction of gain compared with the initial value before the etching, dropping from 23.5 dB at 1518 nm to 21 dB at approximately 1524 nm. In addition, the gain reduction is accompanied by a decrease in the gain bandwidth. The gain reduction is thought to be a result of the aforementioned drift, which removed some of the gold contact material around slot B and caused a localized gain reduction. The gain reduction for the low noise profile is not as severe, likely due to the difference in the current distribution within the device. The NF as a function of the input signal wavelength for both bias current profiles is shown in Fig. 15. Again, data are shown for the unmodified SOA and for the post-FIB etching case. The NF curves for the standard bias configuration do not show any substantial change. The minimum NF measured in this case is ~4.3 dB at 1570 nm, both for the unmodified SOA and post-FIB etching.
It is expected that an increase in section resistance should not affect the NF of the standard profile. This result also indicates that the confinement factor remains unaffected. In contrast, a marked decrease in NF is observed for the low noise case, after the FIB process. The minimum NF measured for the unmodified SOA is ~4.3 dB at 1570 nm. Post-FIB etching, the measured NF at this wavelength is 3.8 dB, a decrease of 0.5 dB relative to both the low noise profile of the unmodified SOA and the standard profile, before and after the FIB. It can be noted that this level of reduction is close to the reduction of 0.6 dB predicted in the simulation data. Furthermore, this NF is extremely low, approaching the 3 dB limit, and is, to our knowledge, the lowest published for a bulk material SOA. These results show that the NF for the low noise profile can be further reduced relative to that of the standard profile with improved isolation between sections. Of course, FIB etching is not an ideal solution for optimization of the resistance between sections, but it was a suitable option for modification of the post-fabrication devices. However, it is expected that a solution at the fabrication stage would yield even greater gains. Conclusion In this paper, we presented a rate equation model to investigate the impact of tailoring the carrier density profile along the length of the active layer of an SOA. It is demonstrated that different profiles can be used to optimize the NF of the device or the output power saturation for a fixed total bias current. The reduction in gain inherent to the two aforementioned profiles is a disadvantage. However, for applications such as low noise pre-amplification, the gain of the SOA is not as significant a factor as the NF. Similarly, for a power booster, when heavily saturated, the gain of the high P sat mode should be on par with that of the standard SOA profile. It was shown that the different carrier density profiles could be implemented using a multi-contact SOA device. The agreement between the experimental results and the simulation results provided validation of the modeling approach and demonstrated a practical and versatile scheme for controlling both the NF and saturation output power of a SOA based on a multi-contact bulk design. The inherent flexibility of this scheme is advantageous in terms of cost savings and reduced complexity of linear amplification schemes. Experiments on prototype devices showed a chip NF of 3.8 dB could be achieved for a low noise (preamplifier) configuration. An increase in saturation power of 3dB was obtained when switching from the pre-amplifier to the booster configuration.
ExaFSA: Parallel Fluid-Structure-Acoustic Simulation

In this paper, we present results of the second phase of the project ExaFSA within the priority program SPP1648—Software for Exascale Computing. Our task was to establish a simulation environment consisting of specialized, highly efficient and scalable solvers for the involved physical aspects, with a particular focus on the computationally challenging simulation of turbulent flow and the propagation of the induced acoustic perturbations. These solvers are then coupled in a modular, robust, numerically efficient and fully parallel way via the open source coupling library preCICE. Whereas we made a first proof of concept for a three-field simulation (elastic structure, surrounding turbulent acoustic flow in the near-field, and pure acoustic wave propagation in the far-field) in the first phase, we removed several scalability limits in the second phase. In particular, we present new contributions to, among other aspects, the initialization of communication between the processes of the coupled solvers.

Introduction

The simulation of fluid-structure-acoustic interactions is a typical example of a multiphysics simulation. Two fundamentally different physical sound sources can be distinguished: structural noise and flow-induced noise. As we are interested in accurate results for the sound emissions induced by the turbulent flow, it is decisive to include not only the turbulent flow, but also the structure deformation and the interaction between both. High accuracy requires the use of highly resolved grids. As a consequence, the use of massively parallel supercomputers is inevitable. When we are interested in the sound effects far away from a flow-induced fluttering structure, the simulation becomes too expensive, even for supercomputing architectures. Hence, we introduce an assumption that we call the "far-field": far from the structure and, thus, the noise generation, we assume a homogeneous background flow and restrict the simulation in this part of the domain to the propagation of acoustic waves. This results in an overall setup with two coupling surfaces: one between the elastic structure and the surrounding flow, and one between the near-field and the far-field in the flow domain (see Fig. 1 for an illustrative example).

Fig. 1. Multiphysics fluid-structure-acoustic scenario as used in our simulations in Sect. 6. The domain is decomposed into a near-field 'incompressible flow region' Ω_F = Ω_NA, a far-field 'acoustic only region' Ω_FA, and an 'elastic structure region' Ω_S. Note that the geometry is not drawn to scale, for better illustration.

Such a complex simulation environment implies several new challenges compared to "single-physics" simulations: (a) multi-scale properties in space and time (small-scale processes around the structure, multi-scale turbulent flow in the near-field, and large-scale processes in the acoustic far-field), (b) different optimal discretization and solver choices for the three fields, (c) a highly ill-conditioned problem, if formulated and discretized as a single large system of equations, (d) challenging load balancing due to the different computational load per grid unit depending on the local physics. Application examples for fluid-structure-acoustic simulations can be found in several technical systems: wind power plants, fans in air conditioning systems of buildings, cars or airplanes, car mirrors and other design details of a car frame, turbines, airfoil design, etc.
Fluid-structure interaction simulations as a sub-problem of our target system have been in the focus of research in computational engineering for many years, mainly aiming at capturing stresses in the structure more realistically than with a pure flow simulation. A main point of discussion in this field is the question whether monolithic approaches-treating the coupled problem as a single large system of equations-or partitioned methods-glueing together separate simulation modules for structures and fluid flow by means of suitable coupling numerics and tools-are more appropriate and efficient. Monolithic approaches require a new implementation of the simulation code as well as the development of specialized iterative solvers for the ill-conditioned overall system of equations, but can achieve very high efficiency and accuracy [3,12,19,23,38]. Partitioned approaches, on the other hand, offer large flexibility in choosing optimal solvers for each field, adding additional fields, or exchanging solvers. The difficulty here lies in both a stable, accurate, and efficient coupling between independent solvers applying different numerical methods and in establishing efficient communication and load balancing between the used parallel codes. For numerical coupling, numerous efficient data mapping methods [5,26,27,32] have been published along with efficient iterative solvers [2,7,13,20,29,35,39,41]. In [6], various monolithic and partitioned approaches have been proposed and evaluated in terms of a common benchmark problem. Three-field fluid-structure-acoustic interaction in the literature has so far been restricted to near-field simulations due to the intense computational load [28,33]. To realize a three-field fluid-structure-acoustic interaction including the far-field, we use a partitioned approach and couple existing established "single-physics" solvers in a black-box fashion. We couple the finite volume solver FASTEST [18], the discontinuous Galerkin solver Ateles [42], and the finite element solver CalculiX [14] by means of the coupling library preCICE [8]. We compare this approach to a less flexible white-box coupling implemented in APESmate [15] as part of the APES framework and make use of the common data-structure within APES [31]. The assumption which is confirmed in this paper is, that the white-box approach is more efficient, but puts some strict requirements on the codes to be coupled, while the black-box approach is a bit less efficient, but much more flexible with respect to the codes that can be used. Our contributions to the field of fluid-structure-acoustic interaction, which we summarize in this paper, include: 1. For the near-field flow, we introduce a volume coupling between background flow and acoustic perturbations in FASTEST accounting for the multi-scale properties in space and time by means of different spatial and time resolution. 2. For both near-field flow and far-field acoustics, we achieved portability and performance optimization of Ateles and FASTEST for vector machines by means of code transformation. 3. 
In terms of inter-field coupling, we (a) increased the efficiency of inter-code communication by means of a new hierarchical implementation of the communication initialization and a modified communicator concept, (b) improved the robustness and efficiency of radial basis function mapping, (c) identified correct interface conditions between near-field and far-field, optimized the position of the interface, and ensured correct boundary conditions by overlapping near-field and far-field, and (d) developed and implemented implicit quasi-Newton coupling numerics that allow for a simultaneous execution of all involved solvers. 4. For a substantially improved inter-code load balancing, we use a regression-based performance model for all involved solvers and perform an optimization of the assigned cores. 5. We present a comparison of our black-box approach to the white-box approach for multi-physics coupling. These contributions have been achieved as a result of the project ExaFSA, a cooperation between the Technische Universität Darmstadt, the University of Siegen, the University of Stuttgart, and Tohoku University (Japan) in the Priority Program SPP 1648—Software for Exascale Computing of the German Research Foundation (DFG), in close collaboration with the Technical University of Munich. In the first funding phase (2013-2016), we showed that efficient yet robust coupled simulations are feasible and can be enhanced with an in-situ visualization component as an additional software part, but we still reached limits in terms of scalability and load balancing [4,9]. This paper focuses on results of the second funding phase (2016-2019) and demonstrates significant improvements in scalability, accuracy and robustness based on the above-mentioned contributions. In the following, we introduce the underlying model equations of our target scenarios in Sect. 2 and present our solvers and their optimization in Sect. 3, as well as the black-box coupling approach and new contributions in terms of coupling in Sect. 4. In Sect. 5, we compare black-box coupling to an alternative, efficient, but solver-specific and, thus, less flexible white-box coupling for uni-directional flow-acoustic coupling. Finally, results for a turbulent flow over a fence scenario are presented in Sect. 6.

Model

In this section, we shortly introduce the underlying flow, acoustic and structure models of our target application. We use the Einstein summation convention throughout this section.

Governing Equations

The multi-physics scenario we investigate describes an elastic structure embedded in a turbulent flow field. The latter is artificially decomposed into a near-field and a far-field; see Fig. 1 for an example.

Near-Field Flow

In the near-field region Ω_F = Ω_NA, the compressible fluid flow is modeled by means of the density ρ, the velocity u_i and the pressure p. As we focus on a low Mach number regime, we can split these variables into an incompressible part ρ̄, ū_i, p̄ and acoustic perturbations ρ′, u_i′, p′:

ρ = ρ̄ + ρ′,   u_i = ū_i + u_i′,   p = p̄ + p′.   (1)

The incompressible flow is described by the Navier-Stokes equations (footnote 1)

∂ū_i/∂x_i = 0,
ρ ∂ū_i/∂t + ρ ∂(ū_j ū_i)/∂x_j = −∂p̄/∂x_i + ∂τ̄_ij/∂x_j + f_i,   (2)

where ρ is the density of the fluid and f_i summarizes external force density terms. The incompressible stress tensor τ̄_ij for a Newtonian fluid is described by

τ̄_ij = μ ( ∂ū_i/∂x_j + ∂ū_j/∂x_i − (2/3) (∂ū_k/∂x_k) δ_ij ),   (3)

with μ representing the dynamic viscosity and δ_ij the Kronecker delta.

Footnote 1: To capture the moving structure within the near-field, we actually formulate all near-field equations in an arbitrary Lagrangian-Eulerian perspective. For the relative mesh velocity, we use a block-wise elliptic mesh movement as described in [30]. As we do not show fluid-structure interaction in this contribution, however, we formulate all near-field equations in a pure Eulerian perspective for the sake of simplicity.
Acoustic Wave Propagation

The propagation of acoustic perturbations in both the near-field and the far-field is modeled by the linearized Euler equations (4), in which c denotes the speed of sound; in the far-field a constant background state is assumed (which implies ∂p̄/∂t = 0). In the near-field, the background flow quantities ū_i and p̄ are calculated from (2), whereas they are assumed to be constant in the acoustic far-field. The respective constant value is read from the coupling interface with the near-field, which implies that the interface has to be chosen such that the background flow values are (almost) constant at the coupling interface. In both cases, the coupling between the background flow and the acoustic perturbations is unidirectional, from the background flow to the acoustic equations (4), by means of p̄ and ū_i.

Elastic Structure

The structural subdomain Ω_S is governed by the equations of motion (5), here in Lagrangian description, with x_i^S = X_i^S + ϑ_i being the position of a particle in the current configuration, X_i^S the position of a particle in the reference configuration, and ϑ_i the displacement. F_ij is the deformation gradient, S_ij is the second Piola-Kirchhoff stress tensor, and ρ_S describes the structural density. The Cauchy stress tensor τ_ij^S relates to S_ij via the deformation gradient F_ij. We assume linear elasticity to describe the stress-strain relation. The coupling between fluid and structure is bi-directional by means of dynamic and kinematic conditions, i.e., equality of the interface displacements/velocities and stresses at Γ_I = Γ_S ∩ Γ_F, with Γ_F = ∂Ω_F and Γ_S = ∂Ω_S.

Solvers and Their Optimization

Following a partitioned approach, the respective subdomains of the multi-physics model as described in Sect. 2 (elastic structure domain, near-field, and far-field) are treated by different solvers. We employ the flow solver FASTEST, presented in Sect. 3.1, to solve the incompressible flow equations, Eq. (2), and the near-field acoustics equations, Eq. (4); the Ateles solver, described in Sect. 3.2, for the far-field acoustics equations, Eq. (4); and finally the structural solver CalculiX, introduced in Sect. 3.3, for the deformation of the obstacle, Eq. (5). For the performance optimization of FASTEST and Ateles, we make use of the Xevolver framework, which has been developed to separate system-specific performance concerns from application codes. We report on the optimization of both solvers further below.

FASTEST

FASTEST is used to solve both the incompressible Navier-Stokes equations (2) and the linearized Euler equations (4) in the near-field.

Capabilities and Numerical Methods

The flow solver FASTEST [24] solves the three-dimensional incompressible Navier-Stokes equations. The equations are discretized utilizing a second-order finite-volume approach with implicit time stepping, which is also second-order accurate. Field data are evaluated on a non-staggered, body-fitted, and block-structured grid. The equations are solved according to the SIMPLE scheme [11], and the resulting linear equation system is solved by ILU factorization [36]. Geometric multi-grid is employed for convergence acceleration. The code generally follows a hybrid parallelization strategy employing MPI and OpenMP.
FASTEST can account for different flow phenomena, and has the capability to model turbulent flow with different approaches. In our test case example, we employ a detached-eddy simulation (DES) based on the ζ − f turbulence model [30]. In addition, FASTEST contains a module to solve the linearized Euler equations to describe low Mach number aeroacoustic scenarios, which are solved by a second order Lax-Wendroff scheme with various limiters. Since all equation sets are discretized on the same numerical grid, advantage can be taken from the multi-grid capabilities to account for the scale discrepancies of the fluid flow and the acoustics. Since the spatial scales of the acoustics are considerably larger than those of the flow, a coarser grid level can be used for them. In return, the finer temporal scales can be considered by sub-cycling a CFD time step with various CAA time steps. This way a very efficient implementation of the viscous/acoustic splitting approach can be realized. Performance Optimization Concerning performance optimization, one interesting point of FASTEST is that some of its kernels were once optimized for old vector machines, and thus important kernels have their vector versions in addition to the default ones. The main difference between the two versions is that nested loops in the default version are collapsed into one loop in the vector version. Since the loops skip accessing halo regions, the compiler is not able to automatically collapse the loops, resulting in short vector lengths even if the compiler can vectorize them. To efficiently run the solver on a vector system, performance engineers usually need to manually change the loop structures. In this project, Xevolver is used to express the differences between the vector and default versions as code transformation rules. In other words, vectorization-aware loop optimizations are expressed as code transformations. As a result, the default version can be transformed to its vector version, and the vector version does not need to be maintained any longer to achieve high performance on vector systems. That is, the FASTEST code can be simplified without reducing the vector performance by using the Xevolver approach. Ten rules are defined to transform the default kernels in FASTEST to their vector kernels. Those code transformations plus some system-independent minor code modifications for removing vectorization obstacles can reduce the execution time on the NEC SX-ACE vector system by about 85%, when executing a simple test case that models a three-dimensional Poiseuille flow through a channel based on the Navier-Stokes equations, in which the mesh contains two blocks with 426,000 cells each. The code execution on the SX-ACE vector processor works about 2.7 times faster than on the Xeon E5-2695v2 processor, since the kernel is memory-intensive and the memory bandwidth of SX-ACE is 4× higher than that of Xeon. Therefore, it is clearly demonstrated that the Xevolver approach is effective to achieve both high performance portability and high code maintainability for FASTEST. Ateles In our project, Ateles is used for the simulation of the acoustic far-field. Since acoustics scales need to be transported over a large distance, Ateles' high-order DG scheme can show its particular advantages of low dissipation and dispersion error in this test case. Capabilities and Numerical Methods The solver Ateles is integrated in the simulation framework APES [31]. 
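To make the viscous/acoustic splitting with temporal sub-cycling described for FASTEST above more concrete, the following Python sketch shows the control flow only: one implicit CFD step spanning several smaller CAA steps that reuse the frozen background flow. The functions advance_cfd and advance_caa are placeholders standing in for the actual FASTEST routines; they are not part of the real code.

```python
def advance_cfd(t, dt):
    """Placeholder for one implicit incompressible-flow step (SIMPLE + multigrid)."""
    return {"u_bar": 0.0, "p_bar": 0.0}      # background flow state at t + dt

def advance_caa(t, dt, background):
    """Placeholder for one Lax-Wendroff step of the linearized Euler equations."""
    pass

def run(t_end, dt_cfd, n_sub):
    """Sub-cycle the acoustic (CAA) solver within each CFD time step."""
    dt_caa = dt_cfd / n_sub                  # finer temporal resolution for the acoustics
    t = 0.0
    while t < t_end:
        background = advance_cfd(t, dt_cfd)  # coarse-in-time background flow update
        for k in range(n_sub):               # fine-in-time acoustic sub-steps
            advance_caa(t + k * dt_caa, dt_caa, background)
        t += dt_cfd

run(t_end=1.0e-3, dt_cfd=1.0e-5, n_sub=10)
```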
Ateles is based on the Discontinuous Galerkin (DG) discretization method, which can be seen as a hybrid method, combining the finite-volume and finite-element methods. DG is well suited for parallelization and the simulation of aero-acoustic problems, due to its inherent dissipation and dispersion properties. This method has several outstanding advantages, that are among others the high-order accuracy, the faster convergence of the solution with increasing scheme order and fewer elements compared to a low order scheme with a higher number of elements, the local h-p refinement as well as orthogonal hierarchical bases. The DG solver Ateles includes different equation systems, among others the compressible Navier-Stokes equations, the compressible inviscid Euler equations and the linearized Euler equations (used in this work for the acoustics far-field). For the time discretization, Ateles makes use of the explicit Runge-Kutta time stepping scheme, which can be either second or fourth order. Performance Optimization Analyzing the performance of Ateles, originally developed assuming x86 systems, we found out that four kinds of code optimization techniques are needed for a total of 18 locations of the code in order to migrate the code to the SX-ACE system. Those techniques are mostly for collapsing the kernel loop and also for directing the NEC compiler to vectorize the loop. In this project, all the techniques are expressed as one common code transformation rule. The rule can take the option to change its transformation behaviors appropriately for each code location. This means that, to achieve performance portability between SX-ACE and x86 systems, only one rule needs to be maintained in addition to the Ateles application code. We executed a small testcase solving Maxwell equations with an 8th order DG scheme on 64 grid cells. The code transformation leads to 7.5× higher performance. The significant performance improvement is attributed to loop collapse and insertion of appropriate compiler directives, which increases the vectorization length by a factor of 2 and the vectorization ratio from 71.35% to 96.72%. Finally, in terms of the execution time, the SX-ACE performance is 19% the performance of Xeon E5-2695v2. The code optimizations for SX-ACE reduce the performances of Xeon and Power8 by 14% and 6%, respectively. In this way, code optimizations for a specific system are often harmful to other systems. However, by using Xevolver, such a system-specific code optimization is expressed separately from the application code. Therefore, the Xevolver approach is obviously useful for achieving high performance portability across various systems without complicating the application code. CalculiX As structure solver, we use the well-established finite element solver CalculiX [14], developed by Guido Dhont und Klaus Wittig. 2 While CalculiX also supports static and thermal analysis, we only use it for dynamic non-linear structural mechanics. As our main research focus is not the structural computation per se, but the coupling within a fluid-structure-acoustic framework, we merely regard CalculiX as a black box. The preCICE adapter of CalculiX has been developed in [40]. 
A Black-Box Partitioned Coupling Approach Using preCICE Our first and general coupling approach for the three-field simulation comprising (a) the elastic structure, (b) the near-field flow with acoustic equations, and (c) the far-field acoustic propagation follows a black-box idea, i.e., we only use input and output data of dedicated solvers at the interfaces between the respective domains for numerical coupling. Such a black-box coupling requires three main functional coupling components: intercode-communication, data-mapping between non-matching grids of independent solvers, and iterative coupling in cases with strong bi-directional coupling. preCICE is an open source library 3 that provides software modules for all three components. In the first phase of the ExaFSA project, we ported preCICE from a server-based to a fully peer-to-peer communication architecture [9,39], increasing the scalability of the software from moderately to massively parallel. To this end, all coupling numerics needed to be parallelized on distributed data. During the second phase of the ExaFSA project, we focused on several costly initialization steps and further necessary algorithmic optimizations. In the following, we shortly sketch all components of preCICE with a particular focus on innovations introduced in the second phase of the ExaFSA project and on the actual realization of the fluid-acoustic coupling between near-field and far-field and the fluid-structure coupling. (Iterative) Coupling To simulate fluid-structure-acoustic interactions such as in the scenario shown in Fig. 1, two coupling interfaces have to be considered with different numerical and physical properties: (a) the coupling between fluid flow and the elastic structure requires an implicit bi-directional coupling, i.e., we exchange data in both directions and iterate in each time step until convergence; (b) the coupling between fluid flow and the acoustic far-field is uni-directional (neglecting reflections back into the nearfield domain), i.e., results of the near-field fluid flow simulation are propagated to the far-field solver as boundary values once per time step. In order to fulfil the coupling conditions at the fluid-structure interface as given in Sect. 2, we iteratively solve the fixed-point equation where f represents the stresses, u the velocities at the interface F S , S the effects of the structure solver on the interface (with stresses as an input and velocities as an output), F the effects of the fluid solver on the interface (with interface velocities as an input and stresses as an output). preCICE provides a choice of iterative methods accelerating the plain fixed-point iteration on Eq. (8). The most efficient and robust schemes are our quasi-Newton methods that are provided in a linear complexity (in terms of interface degrees of freedom) and fully parallel optimized versions [35]. As most of our achievements concerning iterative methods fall within the first phase of the ExaFSA project, we omit a more detailed description and refer to previous reports instead [9]. For the uni-directional coupling between the fluid flow in the near-field and the acoustic far-field, we transfer perturbation in density, pressure, and velocity from the flow domain to the far-field as boundary conditions at the interface. We do this once per acoustic time step, which is chosen to be the same for near-field and farfield acoustics, but which is much smaller than the fluid time step size (and the fluid-structure coupling), as described in Sect. 
3.1. Both domains are time-dependent and subject to mutual influence. In an aeroacoustic setting, the near-field subdomain Ω_NA and the far-field subdomain Ω_FA, with boundaries Γ_NA = ∂Ω_NA and Γ_FA = ∂Ω_FA, are fixed, which means that all background information in the far-field is fixed to a certain value. Therefore, there is only an influence of Ω_NA onto Ω_FA, as backward propagation can be neglected. The shared state variables are then required to be continuous across the interface boundary Γ_IA = Γ_NA ∩ Γ_FA.

Data Mapping

Our three solvers use different meshes adapted to their specific problem domain. To map data between the meshes, preCICE offers three different interpolation algorithms: (a) Nearest-neighbor interpolation is based on finding the geometrically nearest neighbor, i.e., the vertex with the shortest distance from the target or source vertex. It excels in its ease of implementation, perfect parallelizability, and low memory consumption. (b) Nearest-projection mapping can be regarded as an extension of nearest-neighbor interpolation, working on the nearest mesh elements (such as edges, triangles or quads) instead of merely vertices and interpolating values to the projection points. The method requires a suitable triangulation to be provided by the solver. (c) Interpolation by radial basis functions is also provided. This method works purely on vertex data and is a flexible choice for arbitrary mesh combinations with overlaps and gaps alike. In the second phase of the ExaFSA project, we improved the performance of the data mapping schemes in various ways. All three interpolation algorithms contain a lookup phase which searches for vertices or mesh elements near a given set of positions. As there is no guarantee regarding the ordering of vertices, this resulted in O(n · m) lookup operations, with n, m ∈ ℕ being the sizes of the respective meshes. In the second phase, we introduced a tree-based data structure to facilitate efficient spatial queries. The implementation utilizes the library Boost Geometry and uses an rtree in conjunction with the r-star insertion algorithm. The integration of the tree is designed to fit seamlessly into preCICE and avoids expensive copy operations for vertices and mesh elements of higher dimensionality. Consequently, the complexity of the lookup phase was reduced to O(m · log_a n), with a being a parameter of the tree, set to ≈5. The tree index is used by nearest-neighbor, nearest-projection, and RBF interpolation as well as other parts of preCICE and provides a tremendous speedup in the initialization phase of the simulation. In the course of integrating the index, the RBF interpolation profited from a second performance improvement. In contrast to the nearest-neighbor and nearest-projection schemes, it creates an explicit interpolation matrix. Setting values one by one results in a large number of small memory allocations with a relatively large per-call overhead. To remedy this, a preallocation pattern is computed with the help of the tree index. This results in a single memory allocation, speeding up the process of filling the matrix. A comparison of the accuracy and runtime of the latter two interpolation methods is provided in Sect. 5.

Communication

Smart and efficient communication is paramount in a partitioned multi-physics scenario. As preCICE is targeted at HPC systems, a central communication instance would constitute a bottleneck and has to be avoided. At the end of phase one, we implemented a distributed application architecture.
The main objective in its design is not a classical speed-up (as it is for parallelism) but not to deteriorate the scalability of the solvers and rendering a central instance unnecessary. Still, a so-called master process exists, which has a special purpose mainly during the initialization phase. At initialization time, each solver gives its local portion of the interface mesh to preCICE. By a process called re-partitioning, the mesh is transferred to the coupling partner and partitioned there, i.e., the coupling partner's processes select interface data portions that are relevant for their own calculations. The partitioning pattern is determined by the requirements of the selected mapping scheme. The outcome of this process is a sparse communication graph, where only links between participants exist that share a common portion of the interface. While this process was basically in place at the end of phase one, it was refined in several ways. MPI connections are managed by means of a communicator which represents an n-to-m connection including an arbitrary number of participants. The first imple-mentation used only one communication partner per communicator, essentially creating only 1-to-1 connections. To establish the connections, every connected pair of ranks had to exchange a connection token generated by the accepting side. This exchange is performed using the network file system, as the only a-priori existing communication space common to both participants. However, network file systems tend to perform badly with many files written to a single directory. To reduce the load on the file system, a hash-based scheme was introduced as part of the optimizations in phase two. With that, writing of the files is distributed among several directories, as presented in [26]. This scheme features a uniform distribution of files over different directories and, thus, minimizes the files per directory. However, this obviously resulted in a large number of communicators to be created. As a consequence, large runs hit system limits regarding the number of communicators. Therefore, a new MPI communication scheme was created as an alternative. It uses only one communicator for an all-to-all communication, resulting in significant performance improvements for the generation of the connections. This approach also solves the problem of the high number of connection tokens to be published, though only for MPI. As MPI is not always available or the implementation is lacking, the hash-based scheme of publishing connection tokens is still required for TCP based connections. Load Balancing In a partitioned coupled simulation solvers need to exchange boundary data at the beginning of each iteration, which implies a synchronization point. If computational cores are not distributed in an optimal way among solvers, one solver will have to wait for the other one to finish its time step. Thus, the load imbalance reduces the computational performance. In addition, in a one way coupling scenario, if the data receiving solver is much slower than the other one, the sending partner has to wait until the other one is ready to receive (in synchronized communication) or store the data in a buffer (in asynchronous communication). In the first phase, the distribution of cores over solvers was adjusted manually and only synchronized communication was implemented, resulting in idle times. 
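The hash-based publication of connection tokens described above can be pictured as follows: instead of writing every token file into one directory of the network file system, the file name is hashed and the file is placed in one of several sub-directories. This is only an illustrative sketch of the idea; the directory name and bucket count are invented.

```python
import hashlib
from pathlib import Path

def token_path(exchange_dir: Path, acceptor_rank: int, requester_rank: int,
               n_buckets: int = 64) -> Path:
    """Spread connection-token files uniformly over n_buckets sub-directories."""
    name = f"token-{acceptor_rank}-{requester_rank}"
    bucket = int(hashlib.sha1(name.encode()).hexdigest(), 16) % n_buckets
    return exchange_dir / f"{bucket:02d}" / name

base = Path("exchange-directory")          # hypothetical common file-system location
paths = [token_path(base, a, r) for a in range(16) for r in range(16)]
files_per_dir = {}
for p in paths:
    files_per_dir[p.parent.name] = files_per_dir.get(p.parent.name, 0) + 1
print(files_per_dir)                        # counts are roughly uniform across buckets
```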
Regression Based Load Balancing We use the load balancing approach proposed in [37] to find the optimal core distribution among solvers: we first model the solver performance against the number of cores for each domain and then optimize the core distribution to minimize the waiting time. Since mathematical modeling of the solvers' performance can be very complicated, we use an empirical approach as proposed in [37], first introduced in [10], to find an appropriate model. Assuming we have a given set of m data points, consisting of pairs (p, f_p) mapping the number of ranks p to the run-time f_p, we want to find a function f(p) which predicts the run-time against p. Therefore, we use the Performance Model Normal Form (PMNF) [10] as a basis for our prediction model, f^i(p) = Σ_{k=1}^{n} c_k · p^{i_k} · log2^{j_k}(p), where the superscript i denotes the respective solver, n is an a-priori chosen number of terms, i_k, j_k ∈ N_0, and c_k is the coefficient of the k-th regression term. The next step is to optimize the core distribution such that we achieve minimal overall run time, which can be expressed by the following optimization problem: minimize the largest predicted per-step run-time, max_i f^i(p_i), over the core numbers p_0, ..., p_l, subject to Σ_{i=0}^{l} p_i ≤ P, with P the total number of available cores. This optimization problem is a nonlinear, possibly non-convex integer program. It can be solved by the use of branch-and-bound techniques. But if we assume that the f^i are all monotonically decreasing, i.e., assigning more cores to a solver never increases the run-time, we can simplify the constraint to P = Σ_{i=0}^{l} p_i and solve the problem by brute-forcing all possible choices for the p_i. That is, we iterate over all possible combinations of core numbers and choose the combination that minimizes the total run-time. For more details, please refer to [37]. Asynchronous Communication and Buffering For our fluid-structure-acoustic scenario shown in Fig. 1, we perform an implicitly coupled simulation of the elastic structure interacting with the incompressible flow over a given discrete time step (marked simply as 'Fluid' in Fig. 2). This is followed by many small time steps for the acoustic wave propagation in the near-field, which are coupled in a loose, uni-directional way to the far-field acoustic solver (executing the same small time steps). To avoid waiting times of the far-field solver while we compute the fluid-structure interactions in the near-field, we would like to 'stretch' the far-field calculations such that they consume the same time as the sum of fluid-structure time steps and acoustic steps in the near-field (see Fig. 2). To achieve this, we introduced a fully asynchronous buffer layer by which the sending participant is decoupled from the receiving participant, as shown in Fig. 2. Special challenges to tackle were the preservation of the correct ordering of messages, especially for TCP communication, which does not implement such guarantees in the protocol. Isolated Performance of preCICE In this section, we show numerical results for preCICE only. This isolated approach is used to show the efficiency of the communication initialization. In addition, we show stand-alone upscaling results.
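Before turning to these measurements, a small sketch of the regression-based load balancing of the previous section: fit a simple PMNF-style run-time model per solver from (cores, run-time) samples and then brute-force the core split that minimizes the larger of the two predicted step times. The timing samples and the chosen exponent set are made up for illustration.

```python
import numpy as np

def fit_runtime_model(cores, times, exponents=((0, 0), (1, 0), (1, 1))):
    """Least-squares fit of t(p) ~ sum_k c_k * p^(-i_k) * log2(p)^(j_k), a simple PMNF-like form."""
    p = np.asarray(cores, dtype=float)
    A = np.column_stack([p ** (-i) * np.log2(p) ** j for i, j in exponents])
    c, *_ = np.linalg.lstsq(A, np.asarray(times, dtype=float), rcond=None)

    def model(q):
        q = float(q)
        return sum(ck * q ** (-i) * np.log2(q) ** j for ck, (i, j) in zip(c, exponents))
    return model

# Hypothetical timing samples (cores, seconds per coupled time step) for two solvers.
fluid = fit_runtime_model([64, 128, 256, 512], [40.0, 21.0, 11.5, 6.8])
acoustics = fit_runtime_model([32, 64, 128, 256], [18.0, 9.5, 5.2, 3.0])

def best_split(total_cores, step=16):
    """Brute-force the core distribution; the slower solver dictates the coupled step time."""
    candidates = range(step, total_cores, step)
    return min(((p, total_cores - p) for p in candidates),
               key=lambda split: max(fluid(split[0]), acoustics(split[1])))

print("suggested split (fluid cores, acoustic cores):", best_split(620))
```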
Fig. 2 (caption): Coupling scenario between participant A (performing a time step for the incompressible fluid, or the fluid-structure interaction, followed by many time steps of the near-field acoustic simulation, NFA) and participant B (performing the same small acoustic steps for the far-field, FFA, after receiving acoustic data from the near-field solver). NFA is linked to FFA through send operations, so without buffering the runtimes of NFA and FFA have to be matched through careful load balancing, and inevitable idle times for participant B are created. The send buffer shown here decouples the NFA and FFA solvers for send operations, prevents idle times, and allows for a more flexible processor assignment. Other aspects are considered elsewhere: (a) the mapping accuracy is analyzed in Sect. 5, (b) the effectiveness of our load balancing approach as well as the buffering for uni-directional coupling are covered in Sect. 6. If not denoted otherwise, the following measurements are performed on the supercomputing systems SuperMUC and HazelHen. Mapping Initialization: Preallocation and Matrix Filling As described previously, one of the key components of the mapping initialization is the spatial tree, which allows for performance improvements by accelerating the construction of the interpolation matrix. As becomes obvious from Fig. 3, the spatial tree was able to provide an acceleration of more than two orders of magnitude. Communication For communication and its initialization, we only present results for the new single-communicator MPI-based solution. For TCP socket communication, which still requires the exchange of many connection tokens by means of the file system, we only give a rough factor of 2.5 that we observed as acceleration of the communication initialization. Note that this factor can potentially be higher as the number of processes and, thus, connections grows larger, and that the hash-based approach removed the hard limit on ranks per participant inherent to the old approach. In Figs. 4, 5 and 6, we compare performance results for establishing an MPI connection among different ranks using many communicators for 1-to-1 connections with using a single communicator representing an n-to-m connection. In our academic setting, both Artificial Solver Testing Environment (ASTE) participants run on n cores. On SuperMUC, each rank connects to 0.4n ranks; on HazelHen, with a higher number of ranks per node, each rank connects to 0.3n ranks. The amount of data transferred between each connected pair of ranks is held constant, with 1000 rounds of transfer of arrays of 500 and 4000 double values from participant B to participant A. Each measurement is performed five times, of which the fastest and the slowest runs are ignored and the remaining three are averaged. We present timings from rank zero, which is synchronized with all other ranks by a barrier, making the measurements from each rank identical. Note that the measurements are not directly comparable between SuperMUC and HazelHen due to the different number of cores per node, and that the test case is even more challenging than actual coupled simulations: in an actual simulation, the number of partner ranks per rank of a participant stays constant with an increasing number of cores on both sides. Figure 4 shows the time to publish the connection token. The old approach requires publishing many tokens, which obviously becomes a performance bottleneck as the simulation setup moves to higher numbers of ranks. The new approach, on the other hand, publishes only one token. It is omitted in the plot, as the times are negligible (<2 ms). In Fig. 5, the time for the actual creation of the communicator is presented. The total number of communication partners per communicator is smaller with the old many-communicator concept (as the communication topology is sparse). However, the creation of many 1-to-1 communicators is substantially slower than the creation of one all-to-all communicator on both HPC systems.
Finally, in Fig. 6, the performance for an exchange of data sets of two different sizes is presented. The results for the single- and many-communicator approaches are mostly on par, with the notable exception of the SuperMUC system. There, the new approach suffers a small but systematic slow-down for small message sizes. We argue that this is a result of vendor-specific settings of the MPI implementation. Data Mapping As described above, we have further improved the mapping initialization, in particular by applying a tree-based approach to identify the data dependencies induced by the mapping between grid points of the non-matching solver grids and to assemble the interpolation matrix for the RBF mapping. Accordingly, we show both the reduction of the matrix assembly runtime (Fig. 3) and the scalability of the mapping, including setting up the interpolation system and the communication initialization. These performance tests of preCICE are measured using a special testing application called ASTE. This application behaves like a solver towards preCICE but provides artificial data. It is used to quickly generate input data and decompose it for upscaling tests. ASTE generates uniform, rectangular, two-dimensional meshes, with the third coordinate set to zero. The mesh is then decomposed using a uniform approach, thus producing partitions of the same size as far as possible. Since we mainly look at the mapping part, which is only executed on one of the participants, we limit the upscaling to this participant. The other participant always uses one node (28 or 24 processors, respectively). The mesh size is kept constant, i.e., we perform a strong scaling. The upscaling of an RBF mapping with Gaussian basis functions is shown in Fig. 7. Black-Box Coupling Versus White-Box Coupling with APESmate In the above section, we have evaluated the performance of the black-box coupling tool preCICE. In this section, we introduce an alternative approach that allows the coupling of different solvers provided within the framework APES [31]. Black-box data mapping in preCICE only requires point values (nearest-neighbor and RBF mapping) and, in some cases (nearest projection), connectivity information on the coupling interface. The white-box coupling approach of APESmate [25] has knowledge about the numerical schemes within the domain, since it is integrated in the APES suite and has access to the common data structure TreELM [22]. APESmate can directly evaluate the high-order polynomials of the underlying Discontinuous Galerkin scheme. Thus, the mapping in preCICE is more generally applicable, while the approach in APESmate is more efficient in the context of high-order schemes. Furthermore, APESmate allows the coupling of all solvers of the APES framework, both in terms of surface and in terms of volume coupling. The communication between the solvers can be done in a straightforward way, as all coupling participants can be compiled as modules into one single application. Each subdomain defines its own MPI sub-communicator; a global communicator is used for the communication between the subdomains. During the initialization process, coupling requests are locally gathered from all subdomains and exchanged in a round-robin fashion. As all solvers in APES are based on an octree data structure and a space-filling curve for partitioning, it is rather easy to get information about the location of each coupling point on the involved MPI ranks.
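The ASTE-style setup can be mimicked in a few lines: generate a uniform, rectangular, two-dimensional grid of coupling vertices (third coordinate zero) and split it into partitions of nearly equal size for the ranks of one participant. This is a simplified stand-in, not the actual tool.

```python
import numpy as np

def uniform_mesh(nx, ny, lx=1.0, ly=1.0):
    """Uniform, rectangular, two-dimensional vertex grid with the third coordinate set to zero."""
    x, y = np.meshgrid(np.linspace(0.0, lx, nx), np.linspace(0.0, ly, ny))
    return np.column_stack([x.ravel(), y.ravel(), np.zeros(nx * ny)])

def decompose_uniform(vertices, n_ranks):
    """Split the vertex list into partitions of (as far as possible) equal size."""
    return np.array_split(vertices, n_ranks)

mesh = uniform_mesh(200, 200)
partitions = decompose_uniform(mesh, n_ranks=28)     # e.g. one 28-core node
print([len(part) for part in partitions])            # sizes differ by at most one vertex
```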
In the following, we compare both accuracy and runtime of the two coupling approaches for a simple academic test case that allows to control the 'difficulty' of the mapping by adjusting order and resolution of the two participants. Test Case Setup We consider the spreading of a Gaussian pressure pulse over a cubic domain of size 5 × 5 × 5 unit length, with an ambient pressure of 100,000 Pa and a density of 1.0 kg/m 3 . The velocity vector is set to 0.0 for all spatial directions. To generate a reference solution, this test case is computed monolithically using the inviscid Euler equations. For the coupled simulations, we decompose the monolithic test case domain into an inner and an outer domain. The resolution and the discretization order of the inner domain are kept unchanged. In the outer domain, we choose the resolution and the order such that the error is balanced with that of the inner domain. See [15] for the respective convergence study. To be able to determine the mapping error at the coupling interface between inner and outer domain, we choose the time horizon such that the pressure pulse reaches the outer domain, but is still away from the outer boundaries to avoid any influences from the outer boundaries. The test case is chosen in a way, that the differences between the meshes at the coupling interface increase, thus increasing the difficulty to maintain the overall accuracy in a black-box coupling approach. Table 1 provides an overview of all combinations of resolution and order in the outer domain used for our numerical experiments, where the total number of elements per subdomain is given as nElements, the number of coupling points with nCoupling points and the scheme order by nScheme order, respectively. For time discretization, we consider the explicit two stage Runge-Kutta scheme with a time step size of 10 −6 for all simulations. Mapping Accuracy In terms of mapping accuracy, it is expected, that the APESmate coupling is order-preserving, and by that not (much) affected by the increasing differences between the non-matching grids at the coupling interface, while pre-CICE should show an increasing accuracy drop when the points become less and less matching. This is the case for increasing order of the discretization in the outer domain. Figure 8 illustrates first results. As can be clearly seen, the whitebox coupling approach APESmate provides outstanding results by maintaining the overall accuracy of the monolithic solution for all different variations of the coupled simulations, independent of the degree to which the grids are non-matching (increasing with increasing order used in the outer domain). For the interpolation methods provided by preCICE, the error increases considerably with increasing differences between the grids at the interface. As the error of the interpolation methods depends on the distances of the points (see Fig. 8), the error is dominated by the large distance of the integration points in the middle of the surface of an octree grid cell in the High Order Discontinuous Galerkin discretization. Accuracy Improvement by Regular Subsampling We can decrease the L2 error of NP and RBF and improve the solution of the coupled simulation by providing values at equidistant points on the Ateles side as interpolation support points. The number of equidistant points is equal to the number of coupling points, hence as high as the scheme order. With this new implementation, the error shown in Fig. 9a decreases considerably compared to the results in Fig. 8. 
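The initial condition of this test case and the error measure can be written down compactly; the pulse amplitude and width below are illustrative values, since the exact parameters are given in [15].

```python
import numpy as np

# Cubic domain of 5 x 5 x 5 unit lengths, sampled on a coarse grid for illustration.
n = 40
axis = np.linspace(0.0, 5.0, n)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")

p_ambient = 100_000.0                 # Pa
rho = 1.0                             # kg/m^3, velocity vector initially zero
amplitude, width = 1_000.0, 0.5       # assumed pulse parameters (illustrative only)
r2 = (x - 2.5) ** 2 + (y - 2.5) ** 2 + (z - 2.5) ** 2
pressure = p_ambient + amplitude * np.exp(-r2 / (2.0 * width ** 2))

def l2_error(coupled, reference):
    """Discrete L2 error of a coupled solution against the monolithic reference."""
    return np.sqrt(np.mean((coupled - reference) ** 2))

print(l2_error(pressure, pressure))   # zero for identical fields, by construction
```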
We achieve an acceptable accuracy for all discretization order combinations. However, the regular subsampling of values in the Ateles solver increases the overall computational time substantially as can be seen in Fig. 9b. To improve the NP interpolation, it turned out that in addition to providing equidistant points, oversampling was required to increase the accuracy. Our investigation showed, that an oversampling factor of 3 is needed to achieve almost the same accuracy as APESmate. In spite of the additional cost of many newly generated support points, the runtime does not increase as much as for RBF, since for the RBF a linear equation system has to be computed, while for NP a simple projection needs to be done. Figure 10 shows a summary of all tested methods for the interpolation/evaluation before and after improvements. The integrated coupling approach APESmate provides not just very accurate results, but also low runtimes. At this point, we want to recall that this is as expected-the whitebox approach makes use of all internal knowledge, which gives it advantages in terms of accuracy and efficiency. On the other hand, this internal knowledge binds it to the solvers available in the framework, while preCICE can be applied to almost all available solvers. Further details regarding this investigation can be found in [15,16]. Results This section presents a more realistic test case, the turbulent flow over a fence, to assess the overall performance of our approach. Analyses for accuracy and specific isolated aspects are integrated in the sections above. Flow over a Fence Test Case Setup As a test case to assess the overall scalability, we simulate the turbulent flow over a (flexible) fence and the induced acoustic far-field as already shown schematically in Fig. 1. The FSI functionality of FASTEST has been demonstrated earlier many times, e.g. in [34]. Thus we focus on the acoustic coupling. As boundary conditions, we use a no-slip wall at the bottom and the fence surface, an inflow on the left with u bulk , outflow convective boundary conditions on the right, periodic boundary conditions in y-directions, and slip conditions at the upper boundary for the near-field flow. For the acoustic perturbation, we apply reflection conditions at the bottom and the fence surface, zero-gradient condition at all other boundaries. The acoustic far-field solver uses Dirichlet boundary conditions at its lower boundary (see also Eq. (9)). Therefore, the upper near-field boundary is not the coupling interface, but we instead overlap near-field and far-field as shown in Fig. 11. Figures 12 and 13 show a snapshot of the near-field flow and the near-field and far-field acoustic pressure, respectively. Fluid-Acoustics Coupling with FASTEST and Ateles To demonstrate the computational performance of our framework using FASTEST for the flow simulation in the near-field, the high-order DG solver Ateles for the far-field acoustic wave propagation, and preCICE for coupling, we show weak scalability measurements for the interaction between near-field flow simulation and far-field acoustics. We keep both the mesh and the number of MPI ranks in the near-field flow simulation fixed. In the far-field computed with Ateles, we refine the mesh to better capture the acoustic wave propagation. We use a multi-level mesh with a fine mesh at the coupling interface to allow a smooth solution at the coupling interface between the near-field and the far-field. 
We refine the far-field mesh in several steps: in the first step, we only refine the mesh at the coupling interface; in the next step, we first refine the whole mesh and then, in the third and fourth steps, again the mesh at the coupling interface. Due to the refinement at the coupling interface, the number of Ateles ranks participating in the interface increases, such that this study also shows that the preCICE communication does not deteriorate scalability. Table 2 gives an overview of the configurations used for the weak scaling study. To find the optimal core distribution for all setups, the load balancing approach proposed in Sect. 4 is used. This analysis shows that for the smallest mesh resolution with 24,864 elements in the far-field, the optimal core distribution is 424 cores for the near-field domain and 196 cores for the far-field. For all other setups, we assume perfect scalability, i.e., we choose the number of cores proportional to the number of degrees of freedom in the weak scaling study and, for strong scaling, increase the number of cores simultaneously by a factor of two in both fields. Table 2 (caption): Scalability study for the interaction between the near-field flow simulation and the far-field acoustics: summary of mesh details and core numbers for weak scaling. In the FASTEST simulation of the near-field flow, we use 52,822,016 elements; in the far-field, Ateles uses discretization order 9. The scalability measurements are shown in Fig. 14. The results show that the framework scales almost perfectly up to 6528 cores. Fluid-Acoustics Coupling with Only Ateles In Sect. 5 we investigated the suitability of different interpolation methods for our simulations. In this section, we present a strong scaling study for an Ateles-Ateles coupled simulation of the flow over a fence test case. The fence is modelled in Ateles using the newly implemented immersed boundary method, enabling a high-order representation of complex geometries in Ateles [1]. We solve the compressible Navier-Stokes equations in the flow domain with a scheme order of 4 and a four-step mixed implicit-explicit Runge-Kutta time stepping scheme with a time step size of 10^-7. The total number of elements in the flow domain is 192,000. For the far-field, we use the same setup as for the FASTEST-Ateles coupling. The linearized Euler equations in the far-field can be solved in a DG setting in the modal formulation, which makes the solver very cheap even for very high order. In the near-field domain, the non-linear Navier-Stokes equations are solved with a more expensive hybrid nodal-modal approach. Due to this, and due to the different spatial discretizations and scheme orders, both domains have different computational loads, which requires load balancing. We use static load balancing, since neither the mesh nor the scheme order vary during runtime. As both solvers are instances of Ateles, we apply the SpartA algorithm [21], which allows re-partitioning of the workload according to weights per element that are computed during runtime. Those weights are then used to re-distribute the elements according to the workload among the available processes (see [17] for more details). The total number of processes used for this test case is 14,336, which is equal to one island on the system. As mentioned previously, the total workload per subdomain does not change; therefore we start our measurements by providing the lower subdomain with 100 processes and the upper subdomain with 12 processes, which is equal to 4 nodes on the system.
This number per subdomain is then doubled for each run, the ratio is kept the same. Figure 15 shows the strong scaling measurements for both coupling approaches (APESmate and preCICE) executed on the SuperMUC Phase2 system. As can be clearly seen, both coupling setups Ateles-APESmate-Ateles and Ateles-preCICE-Ateles scale almost ideally, however with a lower absolute runtime for the APESmate coupling as expected. Summary and Conclusion We have presented a partitioned simulation environment for the massively parallel simulation of fluid-structure-acoustic interactions. Our setup uses the flow and acoustic solvers in the finite volume software FASTEST, the acoustic solvers in the discontinuous Galerkin framework Ateles as well as the black-box fully parallel coupling library preCICE. In particular, we could show that with a careful design of the coupling tool as well as of solver details, we can achieve a bottleneck-free numerically and technically highly scalable solution. It turned out that efficient initialization of point-to-point communication relations and mapping matrices between the involved participants, sophisticated inter-code load balancing and asynchronous communication using message buffering are crucial for large-scale scenarios. With these improvements, we advanced the limits of scalability of partitioned multiphysics simulations from less than a hundred cores to more than 10,000 cores. Beyond that, we reach a problem size that is not required by the given problem as well as scalability limits of the solvers. The coupling itself is not the limiting factor for the given problem size and degree of parallelism. To be able to use also vector architectures in an efficient sustainable way, we adapted our solvers with a highly effective code transformation approach.
11,571
2020-01-01T00:00:00.000
[ "Physics", "Computer Science" ]
Model of nanodegradation processes in electronic equipment of NPP Kozloduy The complex studies have shown that the main degradation processes in the three groups of elements over an extended period of time are slow; they do not lead to abrupt changes of the basic parameters or to catastrophic failures. This gives grounds to suggest a common diffusion model, which can be summarized as follows: in electronic components containing a p-n junction, residual copper atoms diffuse and accumulate in the space-charge region under the influence of the electric field and the local temperature, creating micro-shunt regions; in the contactor systems, whose contact surfaces are made of metal alloys, the increased temperature initiates decomposition of the homogeneous alloy, conditions are created for the diffusion of individual atoms to the surface, and micro-phases of like atoms are formed that modify the contact resistances; in the course of time the polarization mechanisms in the insulating materials change, double bonds and dipoles are disrupted, leading to the release of carbon atoms, which diffuse at elevated temperatures and form conductive cords that alter the dielectric losses and the specific resistance of the materials. Introduction A general physical model for the slow degradation of electronic systems and equipment with a remaining operating resource is developed on the basis of an analysis of elemental diffusion processes in three separate groups of the component base: semiconductor diodes and transistors; electric contactor systems; dielectric materials. The physical models of the degradation processes in the three types of specimens are based on diffusion processes, which allows a common driving mechanism to be accepted: the diffusion of material particles, independently of their origin. The following factors are accepted as general parameters of degradation: • the ideality coefficient β for selected diodes and the current transfer coefficient h21 for selected transistors; • the microhardness H and the contact resistance Rk for selected mechanical contactors; • the dielectric loss factor tgδ, the dielectric permittivity ε and the high-voltage volt-ampere characteristics for selected insulation materials. Semiconductor diodes and transistors The measurement of the electrical parameters of the semiconductor specimens is performed and analyzed using the theoretical model proposed by A. Popov [1] and used in the methodology developed by us [2][3][4]. It is based on shunt areas in the p-n junction of diodes and transistors in continuous operation, which change their volt-ampere characteristics and, respectively, the rest of the current parameters. The diffusion and accumulation of residual impurity atoms with small covalent radius in dislocation defects, which occur in the process of technological formation of the active semiconductor structure, are shown for cases (a, b, c) in figure 1. They are considered the main reason for the formation of real shunting areas (case a). The other cases (b, c) form single deep levels and clusters. The formation of shunt areas and of clusters that create deep levels in the forbidden zone influences most strongly the volt-ampere characteristic of the p-n junction and in particular the coefficient β (ideality coefficient) in the formula I = I0[exp(qU/(βkT)) − 1], where I0 is the saturation current, q the elementary charge, k the Boltzmann constant, and T the temperature. A change of the ideality coefficient means that the slope of the volt-ampere characteristic of the p-n junction alters and, respectively, that the current parameters of the simple diode created on its base are changed.
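The ideality coefficient β can be extracted from a measured forward volt-ampere characteristic via the slope of ln I over U in the exponential region, as the following sketch shows (synthetic data, room temperature assumed):

```python
import numpy as np

k_B, q, T = 1.380649e-23, 1.602176634e-19, 300.0   # SI constants, room temperature
V_T = k_B * T / q                                   # thermal voltage, about 25.9 mV

# Synthetic forward characteristic of a diode with beta = 1.6 (illustration only).
beta_true, I0 = 1.6, 1e-12
U = np.linspace(0.3, 0.6, 30)
I = I0 * (np.exp(U / (beta_true * V_T)) - 1.0)

# In the exponential region, ln(I) is linear in U with slope 1 / (beta * V_T).
slope, _ = np.polyfit(U, np.log(I), 1)
beta_fit = 1.0 / (slope * V_T)
print(f"fitted ideality coefficient beta = {beta_fit:.3f}")
```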
For a bipolar transistor, created on the basis of two p-n junctions (the emitter and collector junctions), the collector junction current Iko changes most strongly, as do the current transfer ratio h21 and the slope of the family of volt-ampere characteristics. The explanation of this degradation phenomenon can be traced to the model based on the modification of the space charge of the p-n junction due to the formation of shunt areas after the accumulation of impurity atoms on the dislocation defects. The space charge is presented in figure 2 in a three-dimensional version with a shunt field. The ideality coefficient of this p-n junction is given by the expression in [8,9], in which ω = xn + xp is the width of the space-charge region, L is the diffusion length of the carriers and b is the ratio between the mobilities of electrons and holes. The formation of shunt areas and of deep levels in concentrations large enough to influence the current parameters of the diode and of the transistor increases the value of the ideality factor, as shown in figure 3. In this case the ratio between the width of the space-charge region and the diffusion length of the carriers increases (becomes larger than zero) because the space-charge region widens. The changes of the static current transfer constant h21 of a non-operated (upper curve) and a 10-year-operated (lower curve) transistor КТ310 2Б are presented in figure 4. For the ageing process the following mechanism is proposed: • diffusion of impurities with a small covalent radius into dislocations intersecting the p-n junction, forming shunt fields with a high conductivity; • the main diffusing element is considered to be copper atoms, contained in the synthetic semiconductor crystals in high enough concentrations and having a significantly smaller covalent radius than that of the main atoms of the basic crystal; • the effective coefficient of diffusion of the copper atoms is defined assuming that the quantity of accumulated impurities is proportional to the operational time and the temperature of the p-n junction. The radius of the dislocation tube is assumed to be equal to the lattice parameter of < 10 Å. Copper diffuses into silicon as a positively charged ion; its diffusion coefficient in n-type and intrinsic silicon is given in [5]. A positively charged copper ion forms pairs with the negatively charged acceptor ion (boron), and consequently a portion of the dissolved copper ions is stationary. The effective diffusion coefficient of copper in silicon doped with boron is given by expression (4) [6]. In p-type silicon the copper forms copper-copper pairs. The analysis shows that they are formed between an atom at a lattice site and an atom at an interstitial site. The dissociation time of the copper-copper pairs is given by expression (5) [7]. This expression gives time constants from 16 days to 9.5 years; the average value is about 8 months. A process with such a time constant can be completely decisive for the slow degradation of silicon diodes and transistors. The carrier lifetimes in n- and p-silicon vary in a different manner depending on the copper concentration. The influence of the lifetimes and of their changes on the volt-ampere characteristics of p-n junctions is well described by the model of Stafeev [8,9]. Using the real values of the electron and hole lifetimes measured in copper-doped n-Si and p-Si, β ≈ 2 is obtained at low doping. At a higher level of doping, with the corresponding lifetimes, β ≈ 3 is measured. Within this model, a smooth growth of β can be explained by the change of the lifetimes at low doping, while the large changes of β (3-7) are due to changes of the lifetimes at very high copper doping.
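The plausibility of such a slow, diffusion-driven degradation can be checked with a one-line estimate of the characteristic diffusion length L ≈ √(D·t); the diffusivity used below is a placeholder value, not the coefficient derived in the paper.

```python
import math

D = 1e-17                # cm^2/s, assumed effective diffusivity of trapped copper (placeholder)
years = 10
t = years * 365.25 * 24 * 3600.0

L_cm = math.sqrt(D * t)  # characteristic diffusion length after t seconds
print(f"diffusion length after {years} years: {L_cm * 1e7:.0f} nm")
```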
From the analysis of the above models, combined with the experimentally obtained data, a theoretical model is built for the slow degradation of diodes and transistors, which form a major part of the element base of the electronic equipment for management and control of the NPP. For diodes, the analytical form of the model is represented by equation (6) and shown graphically in figure 5; for transistors, the analytical form of the model is represented by equation (7) and shown graphically in figure 6. Figure 7 shows that the relative increase of the ideality factor β of the tested diodes is in the range 1.15-1.8. A similar picture is seen in the relative change of the current transfer coefficient of the tested specimens. On the basis of the known values of the coefficient of diffusion of copper into silicon and of the dynamics of the formation and destruction of pairs of copper ions and precipitates, time constants of the degradation processes of about 15 days to more than 9 years were evaluated. Electro-contact systems The second main group of the element base of the electronic equipment of the NPP comprises the electro-contact systems, for which the following were studied: i) volt-ampere characteristics of pairs of contacts from each contactor, to determine the contact resistance Rk; ii) micro-hardness H of the contact surfaces of unused and used contactors, separately for the light and dark areas into which they separate; iii) volt-ampere characteristics for determining the surface resistance Rs, separately for the light and dark regions of the contact areas (of unused and used contactors). Measurement and analysis of contact resistance of contacts in mechanical contactor systems The first and most important performance parameter for all contactor devices with mechanical metal contacts (or metallurgical contacts, such as thermocouples and some other control devices) is their contact resistance Rk. It is extremely sensitive to any degradation processes, and its quantitative values are an indicator of whether the contactors have entered a phase of ageing or are in conditions of normal operation. Characteristic for the beginning of this degradation process is the relative increase of the contact resistance between the contacting pads, measured with the contact closed. For its determination, a method of comparative measurement of the residual potential difference between the contact spots, in the operating position, of used and unused contactors of the same type is employed. The measurement is done by a four-electrode method (a modified two-probe method, Figure 9a). According to the equivalent scheme shown in Figure 9b, the residual potential difference U12 registered by the voltmeter is determined by the resistance across the contacts. Elements R1 and R2 represent the electrical resistance of the leading rails and Rk the aggregate contact resistance of the two contact spots (Rk = Rk1 + Rk2), so that U12 = I·Rk for the driving current I. The potential probes are located in close proximity to the contact spots, so that the measured potential difference U12 reads the voltage drop mainly across the high-resistive component of the near-contact layers, designated by Rk. Reference measurements of used and unused contactors of the same type replace an absolute geometric calibration. This paragraph describes the measurements done to study the contact resistance of mechanical contactors.
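A concrete, hypothetical evaluation of this four-probe measurement: the registered potential difference divided by the driving current gives the aggregate contact resistance, which can then be compared between used and unused contactors (the numbers below are invented, not measured data):

```python
def contact_resistance(u12_volts: float, current_amps: float) -> float:
    """Aggregate contact resistance R_k = R_k1 + R_k2, from U12 = I * R_k."""
    return u12_volts / current_amps

# Invented readings: 30 mV / 80 mV residual drop at a 10 A test current.
r_unused = contact_resistance(0.030, 10.0)   # 3 mOhm, typical unused contact
r_used = contact_resistance(0.080, 10.0)     # 8 mOhm, aged contact
print(f"relative increase of R_k: {r_used / r_unused:.1f}x")
```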
The described measurements are non-destructive and are carried out under the normal contact pressure: with the contactor activated for normally open contacts and non-activated for normally closed contacts, and at currents comparable to the nominal operating currents. After summarizing the results, volt-ampere characteristics are modelled for selected representative contact pairs. Furthermore, electric probe measurements, a study of the micro-hardness, and a compositional analysis with an X-ray microprobe are carried out on the selected contacts. Measurements on used contactors are compared with measurements on new contactors of the same type. Measurements of micro-hardness The study of the micro-hardness of the contact spots is performed using a micro-hardness meter. The measurements are conducted at room temperature. The load on the indenter, P, can be selected in the interval 0.5-200 g. The software calculates the average micro-hardness for a given load as H = A·P/d², where the value of the constant A depends on the type of the selected indenter. At the same time, the deviations from the average value of H are calculated, as well as the depth of penetration of the indenter into the specimen, d. In order to examine the profile of the change in micro-hardness as a function of the penetration depth of the indenter, the measurements are carried out over a wide range of loads: from the smallest load at which the imprint can still be measured up to a load at which a steady micro-hardness, characteristic for the given material, is reached. A comparison of the profiles of unused and used specimens can show whether there have been changes in the mechanical properties of the surface layer under the influence of the operating conditions (corrosive atmosphere, high temperature, humidity, radiation). The method may also be used for phase analysis. If the specimen is multiphase and the different phases are large enough to be indented, the micro-hardness of each phase can be measured. Since the micro-hardness depends on the composition, the structure and the processing of the material, the resulting information [3] appears useful for the study of complex systems, and of contacts in particular, with regard to their residual resource. This is shown in figure 11. The residual resource of the contactor elements is evaluated by comparing the slopes of the volt-ampere characteristics of used and unused specimens, and the causes of the degradation phenomena are determined by measurements of micro-hardness and by phase analysis of the alloys constituting the contact spots. For the determination of the degradation: i) a linear fit through two points is used, which determines the rate of degradation over a period of 8 years; ii) the basic ageing mechanism is considered to be the diffusion of the metal atoms present in the alloy in the smaller percentage, with the formation of microphases and the destructuring of the contactor alloy. Temperature, operating current and time are determined to be the main factors facilitating this process. Almost all unused contacts have linear volt-ampere characteristics and similar contact resistances of a few mΩ.
A large part of the used contactors also have linear volt-ampere characteristics, but with higher contact resistances, and they still have additional working time. The average characteristics of both types are shown in figure 12. The similarity of the relative degradation changes of the three investigated parameters, over a significantly larger number of contact spots of contactors of many design variations, gives grounds and an opportunity to find a unifying idea that explains the underlying processes causing the degradation of the contactors during operation. Insulation materials The electronic equipment for management and control of various systems of power Units № 5 and № 6 that contains insulating materials is tested for the presence of degradation processes occurring during continuous use, under a radiation background controlled according to the safety standards of NPP Kozloduy and under different weather conditions. The selected systems have been operating in continuous mode for about 10 years; from them, separate blocks containing different dielectric materials were taken (paint coatings, substrates for printed circuit boards, cable insulation and electrical protection elements). The dielectric permittivity ε, the dielectric loss factor tgδ, and the specific resistivity ρ of the used dielectric materials and cable insulators of the electronic equipment of the NPP were measured. The resulting values were averaged over all measured specimens and compared with the available tabulated values for the specific insulating materials. The comparison is made with the tabulated data because it is well known that dielectric materials show degradation changes over time even without being subjected to any real operation. The measurement of the main parameters of the identified models is performed and analyzed using a theoretical model according to which the degradation of dielectric and insulation materials increases the electrical conductivity at operating voltages. This is related to a change in the polarization mechanism of the dipole structure of the dielectric or to the occurrence of double injection in the dielectric, which can be considered a wide-band-gap semiconductor. Volt-ampere characteristics The linearity of the volt-ampere characteristics of the tested specimens is examined as the voltage is changed from about 200 to 1000 V in steps of 50 V. If the current/voltage ratio is not preserved, this is a symptom of several possible processes, namely a change of the polarization mechanism, the formation of conductive cords, or the occurrence of double injection. The data obtained are of three types, presented in figure 13a,b,c: a) the majority (about 80%) show no strong deviations from linearity; b) another group (about 10%) shows horizontal initial sections of the volt-ampere characteristics, offset relative to the zero of the abscissa, which is an indication of the beginning of surface leakage; c) a third group (about 10%) shows sections with negative resistance at about 500 V, which suggests degradation processes associated with double injection. For specimens of the three types with defined shape and size it is also possible to determine the dielectric loss factor independently, by separate measurement of the two components of the conductivity, using a bridge that allows operation in two modes: -direct measurement of the dielectric loss factor; -separate measurement of the capacitance (for the determination of the relative permittivity) and of the active impedance (for calculating the dielectric loss factor).
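In the second bridge mode, the loss factor follows from the separately measured capacitance and active (parallel) resistance: for a parallel R-C equivalent circuit, tgδ = 1/(ω·Rp·Cp). The readings below are placeholders:

```python
import math

def tan_delta(freq_hz: float, r_parallel_ohm: float, c_parallel_farad: float) -> float:
    """Dielectric loss factor of a parallel R-C equivalent circuit: tg(delta) = 1 / (w * Rp * Cp)."""
    omega = 2.0 * math.pi * freq_hz
    return 1.0 / (omega * r_parallel_ohm * c_parallel_farad)

# Placeholder bridge readings for one insulation specimen at 1 kHz.
print(f"tg(delta) = {tan_delta(1_000.0, 5.0e9, 100e-12):.2e}")
```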
A preliminary comparison of the measured values with the corresponding tabulated data for the discussed materials shows that they predominantly fall within the tolerances, as shown, for example, in table 1. A preliminary analysis of the measurements shows that in the overwhelming number of cases there is the expected trend of degradation of the parameters, namely a reduction of the dielectric permittivity and of the specific resistance and an increase of the dielectric loss factor. The fact that the relative degradation changes of the basic parameters of such a large group of diverse materials fall within similar numerical intervals (shown, for example, in figure 14 for tgδ) indicates a certain commonality of the processes causing the degradation. This, together with the results of the study of the previous two groups of components (diodes and transistors, and contactors), allows the construction of a general explanation of the observed changes in the parameters of electronic components and materials in the process of operation. Conclusions From the conducted studies it is clear that the main degradation processes in the three groups of elements over an extended period of time are slow and do not lead to abrupt changes of the basic parameters or to catastrophic failures. This gives grounds to suggest a common diffusion model, which can be summarized as follows: in electronic components containing a p-n junction, residual copper atoms diffuse and accumulate in the space-charge region under the influence of the electric field and the local temperature, creating shunt channels; in the contactor systems, whose contact surfaces are made of metal alloys, the increased temperature initiates decomposition of the homogeneous alloy, conditions are created for the diffusion of individual atoms to the surface, and microphases of like atoms are formed that modify the contact resistances; in the course of time the polarization mechanisms in the insulating materials change, double bonds and dipoles are disrupted, leading to the release of carbon atoms, which diffuse at elevated temperature and form conductive cords that alter the dielectric losses and the specific resistance of the materials. Therefore, the overall physical model of slow degradation in electronic systems can be built on the diffusion processes of free atoms under the influence of the local temperature and the electric field. These processes are uncontrollable, their rate is slow, and the change of the resource over time can be predicted using the developed methods.
4,247.6
2014-12-03T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]